By Mag. Sara Oran
In her essay "The Autonomous Human in the Age of Digital Change," Prof. Dr. Sabine Theresia Köszegi examines, on the one hand, whether Artificial Intelligence (AI) systems are able to make impartial decisions; on the other hand, she shows how such systems limit human autonomy and which aspects can contribute to their positive development in the future, in order to protect individuals and society.
Can AI systems make impartial decisions and who is responsible?
AI systems are finding application in a variety of fields. For example, automated decision-making systems are being developed to relieve humans of decisions because they supposedly make them more efficiently and free of bias. They are meant to support doctors in diagnosing patients or managers in recruiting, and states and public institutions also use such automated decision systems. Köszegi uses a company's search for new employees as an illustrative example: can an automated decision-making system select suitable candidates without prejudice and without discriminating against anyone?
The idea that AI systems are purely objective computing machines ignores an essential point: significant aspects of any such system, such as the definition of the decision problem, the selection of relevant data, and the setting of parameters, come from humans, so cultural, social, and political values flow into the system. Köszegi therefore regards them as sociotechnical systems that are prone to the same biased decisions as humans.
In this context, Köszegi also raises the issue of responsibility for (mis)decisions, as the delegation of decisions to automated systems increasingly limits the perceived control and personal responsibility of humans.
People who use AI systems tend to place great trust in the technology and question their own judgment in the process. According to a study that examined this using facial recognition software, this can go so far that people accept even obviously wrong results from the software.
How autonomous can we make our decisions?
These technologies also find their way into our private lives by preselecting products, services, partners, music, information, and news that might interest us based on our preferences and previous decisions. Thus, on the one hand, we do not even know which options are being withheld from us, and on the other, we are usually unaware that AI systems are being used at all.
Through so-called "profiling" based on users' personal data, companies gain the opportunity to directly influence users' behavior and mood with manipulative tactics and can thus, for example, enforce their own economic interests.
The lack of transparency and the manipulative tactics used show that the autonomy of users in decision-making as well as their individual freedom and self-determination are being restricted.
Where do we go from here?
According to Köszegi, we are currently at a significant stage where we can still influence the design of AI systems. Ethical guidelines should help to ensure a technology that takes into account the well-being of society and the environment in which we live.
The ethics guidelines of the European Commission's Expert Group on Artificial Intelligence, which Köszegi coauthored, call for a human-centered technology that protects and respects people's rights.
The expert group formulated seven requirements: trustworthy AI systems should preserve the quality and integrity of data and protect privacy; ensure technical robustness and safety, transparency, the fundamental right to autonomy and self-determination, accountability, diversity, non-discrimination, and fairness; and keep social and environmental well-being in mind.
Since the safeguarding of our fundamental right to freedom and self-determination has not yet been widely discussed, Köszegi sees a great need for education about how automated decision-making systems work.
Conclusion:
AI systems are not, as is often assumed, purely objective computational systems: automated decision systems can make the same discriminatory decisions as humans because they are based on human-selected data, parameters, and goals. They can also limit human autonomy. For both of these reasons, they should be designed carefully for their respective application contexts, with ethics guidelines in mind.
The essay appeared in: Markus Hengstschläger / Rat für Forschung und Technologieentwicklung (ed.) (2020): Digitaler Wandel und Ethik. Salzburg, Munich: Ecowin Verlag, pp. 62-90.
Sabine Theresia Köszegi is a professor at the Institute of Management Science at TU Wien, where she heads the Department of Labor Science & Organization; she is also the academic director of the MBA Innovation, Digitalization & Entrepreneurship at the Continuing Education Center (CEC) at TU Wien.
Information on the MBA Innovation, Digitalization & Entrepreneurship and the Management & Leadership MBA programs at TU Wien can be found here. Applications for the course start in the winter term 2021/22 are possible until September 26, 2021.