DOI: 10.1007/978-981-99-4749-2_30
Abstract
Federated learning promises to alleviate the problem of scarce labelled data in medical image segmentation while protecting data privacy and security. However, medical image segmentation under federated learning still faces many challenges: achieving high-precision segmentation with federated models in the presence of data imbalance, effectively improving communication efficiency during the federated process, and preventing model gradient explosion in federated distillation. To address these difficulties, this paper proposes a new optimization algorithm for federated distillation. First, we design a small-scale network model for communication between the clients and the central server to reduce communication overhead; second, we design a distillation method that keeps the local models stable. Finally, we add a coordinator to the central server before aggregation and introduce a model filtering mechanism that evaluates and filters client model parameters and weights, keeping the global model well optimized while preventing gradient explosion under malicious or extreme models and improving target-domain segmentation accuracy. We conducted experiments on two medical image segmentation tasks and demonstrated that our approach is effective on non-IID data, achieving an average DICE coefficient of 82.79% while reducing communication overhead by a factor of 16.
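The server-side filtering step described above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the function name `filtered_aggregate`, the score-threshold filter, and the norm clipping are assumptions standing in for the coordinator's evaluation of client models before weighted aggregation.

```python
import numpy as np

def filtered_aggregate(client_params, client_scores, client_weights,
                       score_threshold=0.5, max_norm=10.0):
    """Aggregate client updates after filtering and clipping (hypothetical sketch).

    client_params : list of 1-D np.ndarray, one flattened parameter vector per client
    client_scores : evaluation score per client model (higher is better)
    client_weights: aggregation weight per client (e.g. local dataset size)
    """
    kept_params, kept_weights = [], []
    for p, s, w in zip(client_params, client_scores, client_weights):
        if s < score_threshold:          # drop malicious or extreme client models
            continue
        norm = np.linalg.norm(p)
        if norm > max_norm:              # clip large updates to guard against gradient explosion
            p = p * (max_norm / norm)
        kept_params.append(p)
        kept_weights.append(w)
    if not kept_params:
        raise ValueError("all client updates were filtered out")
    weights = np.asarray(kept_weights, dtype=float)
    weights /= weights.sum()             # renormalise the surviving clients' weights
    return np.average(np.stack(kept_params), axis=0, weights=weights)
```

In this sketch a client whose evaluation score falls below the threshold is excluded entirely, while an over-large update is rescaled rather than dropped; the remaining updates are combined by a weighted average, as in standard federated averaging.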