Multimodal attention-based deep learning for Alzheimer’s disease diagnosis

Keywords: Pattern recognition, Computer science, Artificial intelligence, Cognition, Modality (human-computer interaction), Machine learning, Set (abstract data type), Deep learning, Cognitive psychology, Psychology, Neuroscience, Social science, Sociology, Programming languages
Authors
Michal Golovanevsky, Carsten Eickhoff, Ritambhara Singh
Source
Journal: Journal of the American Medical Informatics Association [Oxford University Press]
Volume/Issue: 29 (12): 2014-2022; Citations: 54
Identifier
DOI: 10.1093/jamia/ocac168
Abstract

Objective: Alzheimer's disease (AD) is the most common neurodegenerative disorder with one of the most complex pathogeneses, making effective and clinically actionable decision support difficult. The objective of this study was to develop a novel multimodal deep learning framework to aid medical professionals in AD diagnosis.

Materials and Methods: We present a Multimodal Alzheimer's Disease Diagnosis framework (MADDi) to accurately detect the presence of AD and mild cognitive impairment (MCI) from imaging, genetic, and clinical data. MADDi is novel in that we use cross-modal attention, which captures interactions between modalities—a method not previously explored in this domain. We perform multi-class classification, a challenging task considering the strong similarities between MCI and AD. We compare with previous state-of-the-art models, evaluate the importance of attention, and examine the contribution of each modality to the model's performance.

Results: MADDi classifies MCI, AD, and controls with 96.88% accuracy on a held-out test set. When examining the contribution of different attention schemes, we found that the combination of cross-modal attention with self-attention performed the best, and no attention layers in the model performed the worst, with a 7.9% difference in F1-scores.

Discussion: Our experiments underlined the importance of structured clinical data to help machine learning models contextualize and interpret the remaining modalities. Extensive ablation studies showed that any multimodal mixture of input features without access to structured clinical information suffered marked performance losses.

Conclusion: This study demonstrates the merit of combining multiple input modalities via cross-modal attention to deliver highly accurate AD diagnostic decision support.
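The abstract describes a fusion design in which each modality (imaging, genetic, clinical) attends to the others via cross-modal attention, a self-attention layer then refines the fused representation, and a classifier distinguishes controls, MCI, and AD. The sketch below illustrates that general idea in PyTorch; the module name `CrossModalFusion`, the embedding size, head count, and fusion details are illustrative assumptions, not the authors' published MADDi implementation.

```python
# Minimal sketch (not the authors' exact MADDi code) of cross-modal attention
# followed by self-attention over imaging, genetic, and clinical embeddings,
# ending in a 3-class head (control vs. MCI vs. AD). Dimensions are illustrative.
import torch
import torch.nn as nn


class CrossModalFusion(nn.Module):
    def __init__(self, dim: int = 128, heads: int = 4, n_classes: int = 3):
        super().__init__()
        # One cross-modal attention block per modality: that modality is the
        # query, the other two modalities supply the keys and values.
        self.cross_attn = nn.ModuleDict({
            name: nn.MultiheadAttention(dim, heads, batch_first=True)
            for name in ("img", "gen", "clin")
        })
        # Self-attention over the fused three-token sequence.
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classifier = nn.Linear(3 * dim, n_classes)

    def forward(self, img, gen, clin):
        # Each input: (batch, dim) embedding from a modality-specific encoder.
        tokens = {
            "img": img.unsqueeze(1),
            "gen": gen.unsqueeze(1),
            "clin": clin.unsqueeze(1),
        }
        fused = []
        for name, query in tokens.items():
            # Keys/values are the concatenation of the *other* modalities.
            others = torch.cat([t for n, t in tokens.items() if n != name], dim=1)
            attended, _ = self.cross_attn[name](query, others, others)
            fused.append(attended)
        seq = torch.cat(fused, dim=1)            # (batch, 3, dim)
        seq, _ = self.self_attn(seq, seq, seq)   # refine fused tokens
        return self.classifier(seq.flatten(1))   # (batch, n_classes) logits


# Usage: random tensors stand in for the per-modality encoder outputs.
model = CrossModalFusion()
logits = model(torch.randn(8, 128), torch.randn(8, 128), torch.randn(8, 128))
print(logits.shape)  # torch.Size([8, 3])
```

In the paper's setting each modality would first pass through its own encoder (imaging, genetic, and structured clinical pipelines); the random tensors above merely stand in for those encoder outputs so the fusion step can be run in isolation.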