Computer science
Data science
Function (biology)
Set (abstract data type)
Face (sociological concept)
Healthcare
Deep neural network
Artificial intelligence
Variety (cybernetics)
Range (aeronautics)
Deep learning
Economic growth
Composite material
Programming language
Biology
Social science
Materials science
Evolutionary biology
Sociology
Economics
Authors
Md. Imran Hossain,Ghada Zamzmi,Peter R. Mouton,Md Sirajus Salekin,Yu Sun,Dmitry B. Goldgof
Abstract
With the power of parallel processing, large datasets, and fast computational resources, deep neural networks (DNNs) have outperformed highly trained and experienced human experts in medical applications. However, the large global community of healthcare professionals, many of whom routinely face potentially life-or-death outcomes with complex medicolegal consequences, has yet to embrace this powerful technology. The major problem is that most current AI solutions function as a metaphorical black box positioned between input data and output decisions, without a rigorous explanation of their internal processes. With the goal of enhancing trust in and improving acceptance of artificial intelligence (AI)-based technology in clinical medicine, there is a large and growing effort to address this challenge using eXplainable AI (XAI), a set of techniques, strategies, and algorithms with an explicit focus on explaining the "hows and whys" of DNNs. Here, we provide a comprehensive review of state-of-the-art XAI techniques for healthcare applications and discuss current challenges and future directions. We emphasize the strengths and limitations of each category, including image, tabular, and textual explanations, and explore a range of evaluation metrics for assessing the effectiveness of XAI solutions. Finally, we highlight promising opportunities for XAI research to enhance the acceptance of DNNs by the healthcare community.
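To make the idea of "explaining the hows and whys" concrete, below is a minimal sketch of one family of XAI techniques the review covers: perturbation-based (occlusion) feature attribution, which scores each input feature by how much the prediction changes when that feature is replaced with a baseline value. The linear `model` and the patient inputs are hypothetical stand-ins for a trained clinical DNN, not from the paper.

```python
def model(features):
    # Toy risk score standing in for a black-box clinical predictor
    # (hypothetical weights, for illustration only).
    weights = [0.6, 0.1, 0.3]
    return sum(w * x for w, x in zip(weights, features))

def occlusion_importance(predict, x, baseline=0.0):
    """Score each feature by the absolute change in the prediction
    when that feature is occluded (replaced with `baseline`)."""
    base_pred = predict(x)
    scores = []
    for i in range(len(x)):
        occluded = list(x)
        occluded[i] = baseline
        scores.append(round(abs(base_pred - predict(occluded)), 6))
    return scores

patient = [1.0, 1.0, 1.0]  # hypothetical normalized inputs
print(occlusion_importance(model, patient))  # → [0.6, 0.1, 0.3]
```

Because the technique only queries the model's inputs and outputs, it applies to any black-box predictor; the same idea underlies occlusion maps for medical images, where image patches rather than tabular features are masked out.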