Project description
AI-supported recommendation and classification systems, especially those based on data-intensive machine learning methods, are becoming increasingly important across a wide range of applications. These include many uses in medicine – from diagnostic support to therapy recommendations. Especially in critical areas such as medicine, human control and oversight are among the key requirements for trustworthy AI systems. For human experts to be able to assess and evaluate AI-generated recommendations and classifications, it must be possible to trace the information on the basis of which an AI system arrives at a particular output. Hybrid human-AI teams offer the opportunity to make better decisions in partnership than humans or AI systems could alone. For this to succeed, AI systems must be designed so that people place a degree of trust in them while still critically reflecting on their output.
In the BMBF joint project Ethyde, carried out by bidt together with the Chair of Business and Social Ethics at the University of Hohenheim (Matthias Uhl), conditions for the trustworthy design of hybrid human-AI teams are to be empirically analysed and prototypically implemented. The Chair of Business and Social Ethics focuses on behavioural economics experiments and on deriving concrete recommendations for the ethically sound design of human-AI interaction. bidt focuses on methods of explainable artificial intelligence and interactive machine learning. Concretely, specific methods of explainable AI – in particular feature relevance, concept-based explanations and example-based explanations – are to be combined with interactive methods for correcting system outputs and explanations, and implemented as prototypes.
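As a purely illustrative sketch – not the project's actual implementation – feature relevance for a diagnostic classifier could, for example, be computed with permutation importance in scikit-learn. The dataset and model choice here are assumptions made only for the example:

```python
# Illustrative sketch (not project code): feature relevance for a
# medical classifier, computed via permutation importance on the
# scikit-learn breast-cancer dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Permutation importance asks: how much does accuracy drop when one
# feature's values are shuffled? A larger drop means the model relies
# more on that feature, i.e. the feature is more relevant.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Rank features by mean importance and show the five most relevant ones.
ranking = sorted(zip(data.feature_names, result.importances_mean),
                 key=lambda pair: pair[1], reverse=True)
for name, score in ranking[:5]:
    print(f"{name}: {score:.3f}")
```

A ranking like this could serve as one input to an explanation shown to a human expert; the project's concept-based and example-based explanations would complement such feature-level relevance scores.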
The project aims to investigate empirically, through a series of cognitive psychology experiments, which factors can be used to calibrate users’ trust in AI-supported recommendation and classification systems and thereby improve the results that humans and the system achieve jointly. Of particular interest is the role of different explainable-AI methods, as well as of other interaction and correction options, and how these can be used specifically to design trustworthy AI systems. A prototype will implement various methods for realising partnership-based AI systems, using medical diagnostics as the example domain. It is to be realised as a web application and made available as a demonstrator to a broad audience of interested parties.
With funding from the German Federal Ministry of Education and Research (BMBF).

Project team
Prof. Dr. Ute Schmid
Member of bidt's Board of Directors and the Executive Committee | Member of the Bavarian AI Council | Head of Cognitive Systems Group, University of Bamberg

