Keywords
Perspective (graphical), Computer science, Domain (mathematical analysis), Relation (database), Field (mathematics), Data science, Precision and recall, Expert system, Recall, Artificial intelligence, Knowledge management, Data mining, Psychology, Cognitive psychology, Mathematics, Mathematical analysis, Pure mathematics
Authors
Fábio Luiz D. Morais,Ana Cristina Bicharra García,Paulo Sérgio Medeiros dos Santos,Luiz Alberto Pereira Afonso Ribeiro
Identifier
DOI:10.1109/cscwd57460.2023.10152722
Abstract
Artificial Intelligence (AI) systems are technologies that increasingly impact our lives. These systems learn from existing datasets that record past human decisions, and their performance is measured in terms of accuracy, precision, and recall for reproducing already-known results. Understanding a system's rationale is crucial for checking for bias and for accepting such technology. Explainable AI (XAI) is the area devoted to opening the AI black box and to designing guidelines for building explainable AI systems. Nevertheless, it is important to understand users' needs for these explanations. This paper investigates the usefulness of XAI systems in the field of cancer diagnosis from the domain experts' (oncologists') perspective. The main findings suggest that domain experts (1) understood the outcomes of the XAI systems; (2) considered XAI outcomes informative rather than explanatory; (3) would like to go beyond the fixed presented perspective; and (4) missed the causal relations that would reveal the system's rationale.
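The abstract notes that AI system performance is measured by accuracy, precision, and recall. As a minimal sketch (not from the paper, just the standard definitions for a binary classifier such as a cancer-diagnosis model), these metrics can be computed from the confusion-matrix counts:

```python
# Standard binary-classification metrics from true/predicted labels.
# Labels are 1 (positive, e.g. "cancer") and 0 (negative).
def classification_metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # true negatives
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0  # of predicted positives, how many are real
    recall = tp / (tp + fn) if (tp + fn) else 0.0     # of real positives, how many were found
    return accuracy, precision, recall

# Hypothetical example: 6 cases, 3 actual positives.
acc, prec, rec = classification_metrics([1, 1, 1, 0, 0, 0],
                                        [1, 1, 0, 1, 0, 0])
print(acc, prec, rec)
```

None of these metrics reveal *why* a prediction was made, which is the gap the paper's XAI investigation addresses.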