Narrative
Context (archaeology)
Empirical evidence
Health care
Opposition (politics)
Psychological intervention
Narrative review
Evidence-based medicine
Analogy
Value (mathematics)
Epistemology
Psychology
Artificial intelligence
Medicine
Computer science
Psychotherapist
Machine learning
Alternative medicine
Political science
Psychiatry
Pathology
Philosophy
Law
Politics
Paleontology
Linguistics
Biology
Authors
Liam G. McCoy,Connor T. A. Brenna,Stacy Chen,Karina Vold,Sunit Das
Identifiers
DOI:10.1016/j.jclinepi.2021.11.001
Abstract
To examine the role of explainability in machine learning for healthcare (MLHC), and its necessity and significance with respect to effective and ethical MLHC application.

This commentary engages with the growing and dynamic corpus of literature on the use of MLHC and artificial intelligence (AI) in medicine, which provides the context for a focused narrative review of arguments presented in favour of and in opposition to explainability in MLHC.

We find that concerns regarding explainability are not limited to MLHC, but rather extend to numerous well-validated treatment interventions as well as to human clinical judgment itself. We examine the role of evidence-based medicine in evaluating inexplicable treatments and technologies, and highlight the analogy between the concept of explainability in MLHC and the related concept of mechanistic reasoning in evidence-based medicine.

Ultimately, we conclude that the value of explainability in MLHC is not intrinsic, but is instead instrumental to achieving greater imperatives such as performance and trust. We caution against the uncompromising pursuit of explainability, and advocate instead for the development of robust empirical methods to successfully evaluate increasingly inexplicable algorithmic systems.