The Ambivalence of Trust in AI
In this project, funded by the Bavarian Research Institute for Digital Transformation (BIDT), we investigate the potential vulnerabilities of relying on machines in medical decision-making. We will evaluate the level of trust that allows physicians to benefit from AI-based recommender systems during the interpretation of medical images for diagnosis, and we will investigate human-centric AI system designs that reduce the biases such systems can introduce into the medical decision process.
Our consortium investigates the potential vulnerabilities of relying on machines. We will concentrate on the interaction between physicians and AI-based recommender systems during the interpretation of medical images for diagnosis. More specifically, we want to investigate potential overtrust in AI systems and methodologies to reduce the resulting biases.
The interdisciplinary research network for our project combines experimental psychology, behavioral economics, normative ethics, and the engineering discipline of machine learning, applied to the medical domains of digital pathology and radiology. Prof. Dr. Matthias Uhl is an experimental economist and behavioral ethicist who heads the bidt junior research group “Ethics of Digitization”, which applies methods from the experimental social sciences to study the behavioral implications of human-machine interactions. Prof. Dr. Marc Aubreville is a computer scientist who works on improving medical decisions through modern machine-learning techniques, especially in the context of recommender systems (RS). He will expand on his experience in investigating the impact of AI-powered advice on physicians’ decisions. Prof. Dr. Alexis Fritz employs a normative approach focused on ethical criteria for AI systems. His research focuses on concepts of moral agency, trust, and autonomy, as well as information-technological challenges in the health sector. Coming from different disciplines, all consortium partners have already collaborated in the past as members of the Artificial Intelligence Network Ingolstadt gGmbH (AININ), which seeks to support the public discourse on the social effects of AI technologies.
Main technical work packages:
The technical work packages are conducted in close collaboration with our colleagues from behavioral ethics.
|WP1: Baseline recommender system development
|Construction of different AI recommender systems (e.g., based on pure classification of a medical condition, on explainable-AI methods, or on inference of medical subtasks such as segmentation) for tasks in radiology and pathology. These systems must also include the option of being manipulated in experiments with human subjects, in order to investigate overtrust in current AI recommender systems and its consequences.
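Such a manipulation option can be sketched as a thin wrapper around the model's output that degrades the advice with a controlled probability. This is only an illustrative sketch; the function name, the `error_rate` parameter, and the label handling are assumptions, not part of the project's actual systems:

```python
import random

def manipulated_advice(model_prediction: str, alternatives: list[str],
                       error_rate: float, rng: random.Random) -> str:
    """Return the model's prediction, but with probability `error_rate`
    swap it for a different label from `alternatives` -- a hypothetical
    hook for studying how subjects react to deliberately degraded AI advice."""
    if rng.random() < error_rate:
        wrong = [a for a in alternatives if a != model_prediction]
        return rng.choice(wrong)
    return model_prediction
```

Keeping the manipulation outside the model itself lets the same trained classifier serve both the faithful and the manipulated experimental conditions.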
|WP2: Development of a modular framework for testing the interaction between humans and AI-based RS
|Development of a modular framework based on Django as a novel front-end of the EXACT collaborative online tool. Incorporation of server-side deep learning predictions into the framework.
|WP3: Calibration of model certainty / out-of-distribution detection
|Derivation of novel architectural approaches for out-of-distribution detection and physician-error-likelihood prediction, to be implemented with standard deep-learning frameworks (e.g., PyTorch). The outcome of this work package is subsequently also evaluated in trials.
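To illustrate the kind of uncertainty signal that out-of-distribution detection builds on, the following is a minimal PyTorch sketch of the well-known maximum-softmax-probability baseline: a low maximum class probability suggests an input the classifier was not trained on. This is an assumed illustration of the general technique, not the project's proposed architecture:

```python
import torch
import torch.nn.functional as F

def msp_ood_score(logits: torch.Tensor) -> torch.Tensor:
    """Maximum-softmax-probability (MSP) baseline for OOD detection.
    Returns 1 - max softmax probability, so higher scores indicate
    inputs the classifier is less confident about (more likely OOD)."""
    probs = F.softmax(logits, dim=-1)
    return 1.0 - probs.max(dim=-1).values

# Hypothetical logits standing in for a classifier head:
logits = torch.tensor([[4.0, 0.1, 0.2],   # confident -> low OOD score
                       [0.3, 0.2, 0.1]])  # uncertain -> higher OOD score
scores = msp_ood_score(logits)
```

More elaborate approaches (e.g., calibrated or ensemble-based uncertainty) refine this idea, but they share the same interface: a per-input score that can be thresholded before advice is shown to the physician.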