Authors
Baiying Lei, Yun Zhu, Enmin Liang, Peng Yang, Shaobin Chen, Huoyou Hu, Haoran Xie, Ziyi Wei, Fei Hao, Xuegang Song, Tianfu Wang, Xiaohua Xiao, Shuqiang Wang, Hongbin Han
Identifier
DOI: 10.1109/TMI.2023.3300725
Abstract
In multi-site studies of Alzheimer's disease (AD), distributional differences across site datasets degrade model performance on target sites. Traditional domain adaptation methods require sharing data from both the source and target domains, which raises data privacy issues. To address this, federated learning is adopted, as it allows models to be trained on multi-site data in a privacy-protected manner. In this paper, we propose a multi-site federated domain adaptation framework via Transformer (FedDAvT), which not only protects data privacy but also mitigates data heterogeneity. A Transformer network serves as the backbone to extract correlations among multi-template region-of-interest (ROI) features, capturing rich brain information. For subdomain adaptation, the self-attention maps of the source and target domains are aligned by minimizing their mean squared error. Finally, we evaluate our method on multi-site databases built from three AD datasets. The experimental results show that the proposed FedDAvT is effective, achieving accuracies of 88.75%, 69.51%, and 69.88% on the AD vs. NC, MCI vs. NC, and AD vs. MCI two-way classification tasks, respectively.
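The core alignment idea in the abstract can be illustrated with a minimal sketch: compute scaled dot-product self-attention maps for a source and a target sample, then take the mean squared error between the two maps as an alignment loss. This is only an illustrative toy in plain Python (the function names `attention_map` and `attention_alignment_loss` are hypothetical), not the authors' FedDAvT implementation, which operates on multi-template ROI features inside a federated training loop.

```python
import math

def softmax(row):
    # numerically stable softmax over one row of attention scores
    m = max(row)
    exps = [math.exp(x - m) for x in row]
    s = sum(exps)
    return [e / s for e in exps]

def attention_map(q, k, d):
    # scaled dot-product attention: softmax(Q K^T / sqrt(d)), row-wise
    scores = [[sum(qi * ki for qi, ki in zip(qr, kr)) / math.sqrt(d) for kr in k]
              for qr in q]
    return [softmax(r) for r in scores]

def attention_alignment_loss(map_src, map_tgt):
    # mean squared error between source- and target-domain attention maps
    n = len(map_src) * len(map_src[0])
    return sum((a - b) ** 2
               for ra, rb in zip(map_src, map_tgt)
               for a, b in zip(ra, rb)) / n

# toy 2-token, 2-dimensional queries/keys for two "domains"
src_q = [[1.0, 0.0], [0.0, 1.0]]
tgt_q = [[2.0, 0.0], [0.0, 2.0]]
src_map = attention_map(src_q, src_q, d=2)
tgt_map = attention_map(tgt_q, tgt_q, d=2)
loss = attention_alignment_loss(src_map, tgt_map)
```

In training, this loss would be added to the classification objective and minimized, pulling the target domain's attention patterns toward the source domain's.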