On June 28, 2021, Prof. Sabine T. Köszegi and Prof. Michael Filzmoser from TU Wien discussed major challenges and opportunities associated with the implementation of AI systems into social systems, as well as policy recommendations to address the risks appropriately.
Recent advances in Artificial Intelligence (AI) have revealed its capacity as a general-purpose technology and driven inventions in areas such as mobility, healthcare, home and service robotics, education, cyber security, and many more. AI-enabled systems and applications have the capability to generate tremendous benefits for individuals and society as a whole. At the same time, the technology comes with risks and challenges related to fundamental human rights and ethics. This brings policy makers and experts into the arena who are concerned with developing ethics guidelines and policies to ensure that AI implementations comply with fundamental human rights and our societal values.
We have summarized the key learnings from the discussion for you:
- Illusion & overtrust: AI systems create an illusion of objectivity and lack accountability, which leads to overtrust. They also enable social engineering and tuning, such as the social scoring systems in China. This makes AI a possible threat to human autonomy.
- Decisions, decisions: Decision-making algorithms still raise the same questions as human-based decision making, such as transparency, discrimination, and accountability.
- Ironies of automation (Lisanne Bainbridge): We replace human operators with automated systems but keep humans to monitor and intervene in case of failure, a monitoring task that humans are notoriously poor at performing.
- "AI is neither intelligent nor artificial" (Kate Crawford): AI is made by human programmers – meaning, on the one hand, that if the input data is flawed, so will be the output. On the other hand, since people program the AI, we can actively shape it for better decision support and automation.
- The High-Level Expert Group on Artificial Intelligence (AI HLEG), appointed by the European Commission, created the Ethics Guidelines for Trustworthy AI, based on four principles rooted in fundamental human rights:
- Respect for human autonomy
- Prevention of harm
- Fairness
- Explicability
Artificial intelligence shifts humans' role from decision makers to mediators between clients and the system. That said, there is nothing artificial about AI – AI systems are created by people, meaning that, at the end of the day, people control Artificial Intelligence.
Deepen your knowledge
Our technology-driven world opens up unprecedented opportunities, but at the same time poses special challenges for managers and organizations due to its fast pace and complexity. Leaders therefore need special skills to operate successfully in this context. This is precisely where our MBA and compact programs in Management & Leadership come in. All details can be found here.
Invitation to our events
At our events, you can learn about the programs, meet experts in their fields, and gain exciting insights into current challenges, best practices, and developments. We look forward to your participation.
About the Experts
Sabine Köszegi has been a professor at the Institute for Management Sciences at TU Wien since 2009, where she heads the Department of Labor Science and Organization. She heads the interdisciplinary Doctoral College (DC) on Trust in Robots – Trusting Robots and is academic director of the MBA program Innovation, Digitalization & Entrepreneurship.
Michael Filzmoser is an associate professor at the Department of Labor Science and Organization of the Institute for Management Science at TU Wien. Furthermore, he acts as Vice Dean of Study Affairs for Mechanical Engineering and Mechanical Engineering – Management. In addition, he is the academic director of the MBA program Digital Transformation & Change Management.