Interpretability
Artificial Intelligence
Machine Learning
Computer Science
Cognitive Science
Data Science
Psychology
Authors
J. Henriques,Teresa Rocha,P. Carvalho,Catarina Silva,S. Paredes
Source
Journal: IFMBE Proceedings
Date: 2024-01-01
Pages: 81-94
Identifier
DOI:10.1007/978-3-031-59216-4_9
Abstract
The performance achieved by machine learning models has demonstrated a high potential to revolutionise the support of clinical decision-making. However, despite this high performance, the lack of transparency of these models has been identified as one of the major barriers to their adoption in daily healthcare applications. Addressing transparency issues, including interpretability and explainability, would contribute to a better understanding of how a model works, provide a justification for its outcomes, increase confidence in the use of such models, and effectively assist clinicians in decision-making. Explainable Artificial Intelligence, as a recent research field, has proposed several approaches aimed at creating more explainable models whilst maintaining high performance. This work presents a short overview of the state-of-the-art, as well as the current challenges associated with the interpretability and explainability of machine learning models. Furthermore, future directions for interpretable machine learning in the clinical domain are outlined, in particular the introduction of reliability measures to increase the confidence of professionals, and the development of hybrid solutions able to integrate a priori domain knowledge (clinical evidence) into the data-driven process.