
Responsibility Gaps in Human-Machine Interactions: The Ambivalence of Trust in AI (ReGInA)

The research project investigated the potential vulnerabilities of relying on machines in medical decision-making. It evaluated the adequate level of trust physicians need to benefit from AI-based recommender systems during the interpretation of medical images for diagnosis, and researched human-centric AI system designs that reduce the introduction of biases into the medical decision-making process.

Project description

The project team investigated the potential vulnerabilities of relying on machines when making medical decisions. It concentrated on the interaction between physicians and AI-based recommender systems during the interpretation of medical images for diagnosis. The team evaluated the level of trust human decision-makers need in order to benefit from AI advisors and to reach well-informed professional judgements. Furthermore, the project team studied the causal effects of institutional, situational, individual, and technological parameters on this trust level.

The informative value of the findings is significant for a wide range of AI applications in other fields and adds a new layer to the public debate on AI advisory systems. In line with the requirements of human-centric design principles, the scientists put the physicians themselves, as well as the physician-patient relationship, at the centre of their research. In addition, design paradigms for AI advisory systems that allow for a calibration of trust were considered.

While structures and processes in the medical domain are increasingly adapted to machines, the research focussed instead on the extent to which recommender systems can be integrated into the existing structures of accountability and responsibility in medical practice. In doing so, the project team complemented its approach with the perspective of organisational ethics, thereby broadening the scholarly debate about the use of recommender systems in the medical domain.

The project was completed on 31 December 2024.

Contact

Dr. Christoph Egle

Managing Director, bidt

Project team

Prof. Dr. Matthias Uhl

Professorship of Societal Implications and Ethical Aspects of Artificial Intelligence, Ingolstadt University of Applied Sciences

Prof. Dr. Alexis Fritz

Chair of Moral Theology, Catholic University of Eichstätt-Ingolstadt

Prof. Dr.-Ing. Marc Aubreville

Professorship for Image Understanding and Medical Application of Artificial Intelligence, Ingolstadt University of Applied Sciences

Dr. Florian Richter

Professorship of Societal Implications and Ethical Aspects of Artificial Intelligence, Ingolstadt University of Applied Sciences

Dr. Sebastian Krügel

Research Associate, Chair for Societal Implications and Ethical Aspects of AI | Ingolstadt University of Applied Sciences

Angelika Kießig

Research Assistant, Chair for Moral Theology | Catholic University of Eichstätt-Ingolstadt

Wiebke Brandt

Research Assistant, Chair for Moral Theology | Catholic University of Eichstätt-Ingolstadt

Jonas Ammeling

Research Assistant, AImotion Institute | Ingolstadt University of Applied Sciences

Emely Rosbach

Research Assistant, AImotion Institute | Ingolstadt University of Applied Sciences