Keywords
Modality (human–computer interaction), Computer science, Modal verb, Computer vision, Iterative reconstruction, Artificial intelligence, Medical imaging, Magnetic resonance imaging, Radiology, Medicine, Chemistry, Polymer chemistry
Authors
Yunlu Yan,Chun-Mei Feng,Yuexiang Li,Ping Li,Rick Siow Mong Goh,Baiying Lei,Weiming Wang,Dagan Feng,Lei Zhu
Identifier
DOI:10.1109/jbhi.2025.3566217
Abstract
While multi-modal learning has been widely used for MRI reconstruction, it relies on paired multi-modal data, which is difficult to acquire in real clinical scenarios. Especially in the federated setting, it is common for several medical institutions to suffer from missing modalities or even to have only single-modal data. Therefore, it is infeasible to deploy a standard federated learning framework in such conditions. In this paper, we propose a novel communication-efficient federated learning framework (namely Fed-PMG) to address the missing-modality challenge in federated multi-modal MRI reconstruction. Specifically, we utilize a pseudo modality generation mechanism to recover the missing modality for each single-modal client by sharing the distribution information of the amplitude spectrum in frequency space. However, sharing the original amplitude spectra leads to heavy communication costs. To reduce the communication cost, we introduce a clustering scheme that projects the set of amplitude spectra onto a finite number of cluster centroids and shares only these centroids among the clients. With this design, our approach can effectively recover the missing modality at an acceptable communication cost. Extensive experimental results demonstrate that our proposed method outperforms state-of-the-art methods and reaches a performance similar to the ideal scenario (i.e., all clients have the full set of modalities).
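The two core ideas in the abstract — generating a pseudo modality by combining a shared amplitude spectrum with a client's local phase, and compressing the shared amplitudes into cluster centroids — can be sketched as follows. This is a minimal NumPy illustration assuming 2-D image slices; the function names, the plain k-means routine, and all parameters are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def pseudo_modality(single_modal_img, centroid_amplitude):
    """Combine a shared amplitude centroid with the local phase spectrum."""
    f = np.fft.fft2(single_modal_img)
    phase = np.angle(f)  # keep the client's own phase (structure)
    # swap in the shared amplitude (style/contrast of the missing modality)
    pseudo_f = centroid_amplitude * np.exp(1j * phase)
    return np.real(np.fft.ifft2(pseudo_f))

def amplitude_centroids(amplitudes, k, iters=20, seed=0):
    """Plain k-means over flattened amplitude spectra (an assumed clustering
    scheme; the paper only states that spectra are projected onto centroids)."""
    rng = np.random.default_rng(seed)
    X = amplitudes.reshape(len(amplitudes), -1)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        # assign each spectrum to its nearest centroid
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        # update centroids as cluster means
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return centers.reshape((k,) + amplitudes.shape[1:])
```

Only the `k` centroid spectra would be communicated, instead of every client's full set of amplitude spectra, which is where the claimed communication saving comes from.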