Computer science
Image segmentation
Segmentation
Artificial intelligence
Distillation
Image (mathematics)
Computer vision
Pattern recognition (psychology)
Chemistry
Organic chemistry
Authors
Soopil Kim,Hee Jung Park,Philip Chikontwe,Myeongkyun Kang,Kyong Hwan Jin,Ehsan Adeli,Kilian M. Pohl,Sang Hyun Park
Identifier
DOI:10.1109/tmi.2025.3525581
Abstract
Federated learning (FL) methods for multi-organ segmentation in CT scans are gaining popularity, but they generally require numerous rounds of parameter exchange between a central server and clients. This repeated sharing of parameters may be impractical given clients' varying network infrastructures and the large volume of data transmitted. The need for repeated exchange is further increased by data heterogeneity among clients, i.e., clients may differ with respect to the type of data they share. For example, they might provide label maps of different organs (i.e., partial labels) because segmenting all organs visible in the CT is not part of their clinical protocol. To this end, we propose a communication-efficient approach for FL with partial labels. Specifically, the parameters of local models are transmitted to a central server only once, and the global model is trained via knowledge distillation (KD) of the local models. While unlabeled public data can serve as the KD input, the resulting model accuracy is often limited by distribution shifts between local and public datasets. Herein, we propose to generate synthetic images from the clients' models as additional KD inputs to mitigate the shift between public and local data. In addition, our method allows further finetuning over several rounds of communication using existing FL algorithms, leading to enhanced performance. Extensive evaluation on public datasets in a few-communication FL scenario shows that our approach substantially outperforms state-of-the-art methods.
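The abstract describes a server-side step in which client model parameters are received once and a global model is then trained by distilling the ensemble of partially labeled local models on unlabeled public images plus synthetic images generated from the client models. The sketch below (PyTorch) illustrates only that distillation loop under stated assumptions: `TinySegNet`, the logit-averaging teacher rule, and all hyperparameters are illustrative placeholders rather than the authors' implementation, and the synthetic images (e.g., obtained by inverting the client models) are assumed to be supplied in `kd_images` alongside the public CT slices.

```python
# Minimal sketch of server-side one-shot knowledge distillation for FL with
# partial labels. All names and hyperparameters are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinySegNet(nn.Module):
    """Stand-in segmentation network (a real system would use a full 3D U-Net)."""
    def __init__(self, in_ch=1, num_classes=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, num_classes, 1),
        )

    def forward(self, x):
        return self.net(x)


def distill_global_model(local_models, kd_images, num_classes=5,
                         epochs=5, lr=1e-4, device="cpu"):
    """Distill an ensemble of (partially labeled) client models into one global
    model using unlabeled public + synthetic images passed in `kd_images`."""
    student = TinySegNet(num_classes=num_classes).to(device)
    optimizer = torch.optim.Adam(student.parameters(), lr=lr)
    teachers = [m.to(device).eval() for m in local_models]

    for _ in range(epochs):
        for x in kd_images:                      # each x: (B, 1, H, W) tensor
            x = x.to(device)
            with torch.no_grad():
                # Simplification: average teacher logits; in the partial-label
                # setting each client model is only reliable for the organs it
                # was trained to segment, so a real merge would be per-organ.
                teacher_logits = torch.stack([t(x) for t in teachers]).mean(0)
            student_logits = student(x)
            loss = F.kl_div(F.log_softmax(student_logits, dim=1),
                            F.softmax(teacher_logits, dim=1),
                            reduction="batchmean")
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return student
```

A call such as `distill_global_model([client_a, client_b], kd_images)` would return the distilled global model after the single round of parameter transfer; any additional finetuning rounds with standard FL algorithms, as mentioned in the abstract, would start from this model.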