Keywords
Pattern, Modality, Computer science, Disease, Artificial intelligence, Mutual exclusivity, Mutual information, Medicine, Theoretical computer science, Chemistry, Pathology, Social science, Sociology, Polymer chemistry
Authors
Min Gu Kwak,Lingchao Mao,Zhiyang Zheng,Yi Su,Fleming Lure,Jing Li
Identifier
DOI:10.1109/tase.2025.3556290
Abstract
Early detection of Alzheimer's Disease (AD) is crucial for timely interventions and optimizing treatment outcomes. Integrating multimodal neuroimaging datasets can enhance the early detection of AD. However, models must address the challenge of incomplete modalities, a common issue in real-world scenarios, as not all patients have access to all modalities due to practical constraints such as cost and availability. We propose a deep learning framework employing Incomplete Cross-modal Mutual Knowledge Distillation (IC-MKD) to model different sub-cohorts of patients based on their available modalities. In IC-MKD, the multimodal model (e.g., MRI and PET) serves as a teacher, while the single-modality model (e.g., MRI only) is the student. Our IC-MKD framework features three components: a Modality-Disentangling Teacher (MDT) model designed through information disentanglement, a student model that learns from classification errors and MDT's knowledge, and the teacher model enhanced via distilling the student's single-modal feature extraction capabilities. Moreover, we show the effectiveness of the proposed method through theoretical analysis and validate its performance with simulation studies. In addition, our method is demonstrated through a case study with Alzheimer's Disease Neuroimaging Initiative (ADNI) datasets, underscoring the potential of artificial intelligence in addressing incomplete multimodal neuroimaging datasets and advancing early AD detection.
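The abstract's teacher–student loop (a multimodal teacher distilling to a single-modality student, and the student's single-modal behavior distilled back into the teacher) can be sketched as a pair of per-sample losses. This is a minimal illustration under assumed choices: a temperature-softened KL distillation term and a weighted cross-entropy, not the paper's exact IC-MKD objective, which also includes the Modality-Disentangling Teacher component.

```python
import math

def softmax(logits, T=1.0):
    """Temperature-softened softmax over one sample's logits."""
    scaled = [z / T for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kl_div(p, q, eps=1e-12):
    """KL(p || q) between two discrete distributions."""
    return sum(pi * (math.log(pi + eps) - math.log(qi + eps))
               for pi, qi in zip(p, q))

def mutual_kd_losses(teacher_logits, student_logits, label, T=2.0, alpha=0.5):
    """Illustrative per-sample losses for mutual knowledge distillation.

    teacher_logits: output of a fused multimodal head (e.g., MRI + PET)
    student_logits: output of a single-modality head (e.g., MRI only)
    T, alpha: assumed temperature and mixing weight, not from the paper.
    """
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    # Student objective: hard-label classification error plus
    # soft-label distillation from the multimodal teacher.
    ce = -math.log(softmax(student_logits)[label] + 1e-12)
    loss_student = (1 - alpha) * ce + alpha * (T ** 2) * kl_div(p_t, p_s)
    # Teacher objective: reverse distillation term, nudging the teacher
    # toward feature behavior recoverable from a single modality.
    loss_teacher = (T ** 2) * kl_div(p_s, p_t)
    return loss_student, loss_teacher
```

When teacher and student already agree, the teacher's distillation term vanishes and only the student's classification error remains; in training, both losses would be minimized alternately or jointly over their respective networks.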