Artificial intelligence
Computer science
Black box
Deep learning
Skin lesion
Pattern recognition (psychology)
Simplicity (philosophy)
Machine learning
Dermatology
Medicine
Philosophy
Epistemology
Authors
Khalid M. Hosny, Wael Said, Mahmoud Elmezain, Mohamed A. Kassem
Identifier
DOI:10.1016/j.asoc.2024.111624
Abstract
There is often a lack of explanation when artificial intelligence (AI) is used to diagnose skin lesions, which leaves physicians unable to interpret and validate the output; this makes diagnostic systems significantly less safe. In this paper, we propose a deep inherent learning method to classify seven types of skin lesions. The proposed deep inherent learning was validated using different explanation techniques. Explainable AI (X-AI) was used to explain decision-making processes at both the local and global levels. In addition, we provide visual information to help physicians trust the proposed method. The challenging HAM10000 dataset was used to evaluate the proposed method. Using our simple, stage-based X-AI framework, medical practitioners can better understand the mechanisms of black-box AI models and can trust the proposed method because the rationale for its decisions is explained.
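The abstract's exact explanation techniques are not specified here, but the idea of a local explanation (why the model made this prediction for this image) can be illustrated with an occlusion-sensitivity sketch: mask one region at a time and measure how much the model's score drops. Everything below is a hypothetical toy example, not the authors' method; `toy_model_score` stands in for a trained skin-lesion classifier.

```python
import numpy as np

# Toy "classifier" whose score depends only on the top-left 4x4 patch,
# standing in for a trained model (hypothetical stand-in, not the
# paper's deep inherent learning method).
def toy_model_score(image):
    return float(image[:4, :4].sum())

def occlusion_map(image, score_fn, patch=4):
    """Slide a zero patch over the image; the score drop at each
    location indicates how much that region drives the prediction."""
    base = score_fn(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0
            heat[i // patch, j // patch] = base - score_fn(occluded)
    return heat

img = np.ones((8, 8))
heat = occlusion_map(img, toy_model_score)
print(heat)  # score drop is concentrated where the model "looks"
```

The resulting heat map is the kind of visual evidence that lets a physician check whether the model attends to the lesion itself rather than artifacts such as rulers or skin markings; global explanations aggregate such evidence across the whole dataset.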