Interpretability
Dementia
Machine learning
Artificial intelligence
Computer science
Scopus
Clinical practice
Data science
MEDLINE
Disease
Medicine
Pathology
Political science
Law
Family medicine
Authors
Sophie Martin, Florence Townend, Frederik Barkhof, James H. Cole
Abstract
Introduction: Machine learning research into automated dementia diagnosis is becoming increasingly popular but so far has had limited clinical impact. A key challenge is building robust and generalizable models that generate decisions that can be reliably explained. Some models are designed to be inherently "interpretable," whereas post hoc "explainability" methods can be used for other models.
Methods: Here we sought to summarize the state-of-the-art of interpretable machine learning for dementia.
Results: We identified 92 studies using PubMed, Web of Science, and Scopus. Studies demonstrate promising classification performance but vary in their validation procedures and reporting standards and rely heavily on popular data sets.
Discussion: Future work should incorporate clinicians to validate explanation methods and make conclusive inferences about dementia-related disease pathology. Critically analyzing model explanations also requires an understanding of the interpretability methods themselves. Patient-specific explanations are also required to demonstrate the benefit of interpretable machine learning in clinical practice.
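The distinction drawn in the abstract between inherently "interpretable" models and post hoc "explainability" methods can be illustrated with a minimal sketch. The code below is not taken from the reviewed studies; it uses scikit-learn on synthetic data, and the feature names and binary labels are hypothetical placeholders standing in for dementia-related tabular features.

```python
# Illustrative sketch only: contrasts an inherently interpretable model
# (logistic regression, explained by its own coefficients) with a post hoc
# explanation (permutation importance) applied to a black-box model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for clinical/imaging features; labels are placeholders.
X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                           random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Inherently interpretable: the fitted coefficients are the explanation.
linear = LogisticRegression(max_iter=1000).fit(X_train, y_train)
for name, coef in zip(feature_names, linear.coef_[0]):
    print(f"{name}: coefficient {coef:+.3f}")

# Post hoc explanation of a black-box model: permutation importance measures
# how much held-out accuracy drops when each feature is randomly shuffled.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
result = permutation_importance(forest, X_test, y_test, n_repeats=20, random_state=0)
for name, imp in zip(feature_names, result.importances_mean):
    print(f"{name}: permutation importance {imp:.3f}")
```

In the first case the explanation is part of the model itself; in the second, the explanation is computed after training and could be swapped for other post hoc methods without changing the underlying classifier.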