The research project investigates the potential vulnerabilities of relying on machines in medical decision-making. It will evaluate the level of trust physicians need in order to benefit from AI-based recommender systems when interpreting medical images for diagnosis, and it will investigate human-centric AI system designs that reduce the introduction of bias into the medical decision process.
The project team investigates the potential vulnerabilities of relying on machines when making medical decisions, concentrating on the interaction between physicians and AI-based recommender systems during the interpretation of medical images for diagnosis. The team will evaluate the level of trust that allows human decision-makers to benefit from AI advisors while still reaching well-informed professional judgements. Furthermore, it will study the causal effects of institutional, situational, individual, and technological parameters on this trust level. The findings will be informative for a wide range of AI applications in other fields and will add a new layer to the public debate on AI advisory systems.

In line with the requirements of human-centric design principles, the scientists will place the physicians themselves, as well as the physician-patient relationship, at the centre of their research. They will also consider design paradigms for AI advisory systems that allow trust to be calibrated. While structures and processes in the medical domain are increasingly being adapted to machines, the team instead focuses on the question of to what extent recommender systems can be integrated into the existing structures of accountability and responsibility in medical practice. In doing so, the project team will complement its approach with the perspective of organisational ethics, thereby broadening the scholarly debate about the use of recommender systems in the medical domain.