Explainable Artificial Intelligence for Clinical Decision Support Systems: A Cognitive Task Analysis
Guest Speaker: Scott Thiebes (Karlsruhe Institute of Technology)
Date & Time: 10:00-11:30 (Beijing Time), Wed. 11th, Dec. 2024
Classroom: Room 2101, Tongji Building A
ABSTRACT
The integration of artificial intelligence (AI) into clinical decision support systems (DSS) has the potential to enhance diagnostic accuracy and efficiency. However, the complexity and opacity of many contemporary AI models present significant challenges for clinical adoption, as they hinder medical experts' ability to evaluate the reliability and correctness of AI-based diagnoses and treatment recommendations. Explainable AI (XAI) aims to address this issue by providing more interpretable models without sacrificing predictive performance, offering transparency through post-hoc explanations such as heatmaps or explanations by example. Yet, how these explanations for AI-based recommendations affect medical experts' decision-making remains largely unexplored. In this study, we investigate how XAI methods affect medical experts' decision-making in the context of classifying central nervous system tumors. Using cognitive task analysis, we observed 15 neuropathologists from three university hospitals in Germany as they interacted with a clinical decision support system incorporating various AI-based recommendations and explanations. By examining their cognitive processes through think-aloud protocols, interviews, and physiological data tracking, we uncovered patterns in how XAI influences clinical decision-making. Our research contributes a cognitive thought process model that adds to emerging research on joint human-AI decision-making.
Keywords: Explainable Artificial Intelligence, Clinical Decision Support Systems, Cognitive Thought Process, Cognitive Task Analysis