
Ethical implications of hybrid teams of humans and artificial intelligence systems (Ethyde)

The project investigates, in cognitive psychology experiments, the conditions under which trustworthy AI systems can be designed, and implements these prototypically as demonstrators. The focus is on methods of explanatory interactive learning and their application in human-AI teams for diagnostic decision-making in medicine.

Project description

AI-supported recommendation and classification systems, especially those based on data-intensive machine learning methods, are becoming increasingly important in many fields of application. These include a range of applications in medicine – from diagnostic support to therapy recommendations. Particularly in critical areas such as medicine, human control and oversight are among the key requirements for trustworthy AI systems. For human experts to be able to assess and evaluate AI-generated recommendations and classifications, it must be possible to trace the information on the basis of which an AI system arrives at a particular output. Hybrid human-AI teams offer the opportunity to reach better decisions in partnership than humans or AI systems could alone. For this to succeed, AI systems must be designed so that people place a certain degree of trust in them while still critically reflecting on their output.

In the BMBF joint project Ethyde, carried out by bidt together with the Chair of Business and Social Ethics at the University of Hohenheim (Matthias Uhl), the conditions for the trustworthy design of hybrid human-AI teams are being empirically analysed and prototypically implemented. The Chair of Business and Social Ethics focuses on behavioural economics experiments and on deriving concrete recommendations for an ethically sound design of the interaction between humans and AI systems. bidt focuses on methods of explainable artificial intelligence and interactive machine learning. Concretely, specific explainable-AI methods – in particular feature relevance, concept-based explanations and example-based explanations – are combined with interactive methods for correcting system outputs and explanations and implemented as prototypes.
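
To make the combination of feature relevance and interactive correction more concrete, here is a minimal sketch in Python. It assumes scikit-learn, uses a synthetic toy dataset, and treats the expert feedback (flagging one feature as irrelevant) as a hypothetical placeholder; it illustrates the general idea, not the project's actual implementation.

```python
# Minimal sketch: feature relevance plus an interactive correction step,
# in the spirit of explanatory interactive learning. Assumes scikit-learn;
# the data, feature semantics and the expert feedback are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                  # toy "diagnostic" features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # label depends only on features 0 and 1

model = RandomForestClassifier(random_state=0).fit(X, y)

# Feature relevance: permutation importance as one simple relevance method.
relevance = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(relevance.importances_mean):
    print(f"feature {i}: relevance {score:.3f}")

# Interactive correction (hypothetical): an expert flags feature 4 as
# clinically irrelevant. We add counterexamples in which that feature is
# shuffled, so the corrected model can no longer rely on it.
irrelevant = 4
X_counter = X.copy()
X_counter[:, irrelevant] = rng.permutation(X_counter[:, irrelevant])
model.fit(np.vstack([X, X_counter]), np.concatenate([y, y]))
```

Shuffling a flagged feature in augmented training copies is just one simple counterexample strategy from the explanatory interactive learning literature; the concept-based and example-based explanation methods named above, and richer correction interfaces, go beyond this sketch.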

The aim of the project is to investigate empirically, in a series of cognitive psychology experiments, which factors can calibrate users' trust in AI-supported recommendation and classification systems so that the results achieved jointly by humans and the system improve. Of particular interest are the roles of different explainable-AI methods and of different interaction and correction options, and how these can be used deliberately in the design of trustworthy AI systems. A prototype will implement a selection of methods for realising partnership-based AI systems, exemplified for the field of medical diagnostics. It will be realised as a web application and made available as a demonstrator to a wide audience of interested parties.
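
As a rough illustration of what such a web demonstrator might look like, the following sketch uses Gradio, a lightweight Python framework for web interfaces. The toy model, its weights and the explanation text are hypothetical placeholders standing in for the project's actual classifier and explanation methods.

```python
# Minimal sketch of a web-based demonstrator, assuming Gradio
# (pip install gradio). The toy model and explanation are hypothetical.
import gradio as gr

def diagnose(marker_a: float, marker_b: float):
    # Hypothetical linear toy model standing in for a diagnostic classifier.
    contrib_a, contrib_b = 0.7 * marker_a, 0.3 * marker_b
    label = "positive" if contrib_a + contrib_b > 0.5 else "negative"
    explanation = (f"Classified '{label}': marker_a contributed {contrib_a:.2f}, "
                   f"marker_b contributed {contrib_b:.2f}.")
    return label, explanation

demo = gr.Interface(
    fn=diagnose,
    inputs=[gr.Number(label="marker_a"), gr.Number(label="marker_b")],
    outputs=[gr.Textbox(label="classification"), gr.Textbox(label="explanation")],
    title="Toy human-AI diagnostic demonstrator (illustrative only)",
)

if __name__ == "__main__":
    demo.launch()  # serves a local web interface
```

Running the script serves a local web page on which a user can enter values and see both the classification and a simple contribution-based explanation, mirroring the demonstrator idea at toy scale.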

Funded by

Federal Ministry of Education and Research (BMBF)

Project team

Prof. Dr. Ute Schmid

Member of bidt's Board of Directors and the Executive Committee | Member of the Bavarian AI Council | Head of Cognitive Systems Group, University of Bamberg

Eda Ismail-Tsaous

Researcher, bidt

Celine Spannagl

Researcher, bidt