Keywords
Computer science, Federated learning, Robustness, Cluster analysis, Orchestration, Artificial intelligence, Machine learning, Benchmark
Authors
Tiandi Ye, Senhui Wei, Jamie Cui, Cen Chen, Yingnan Fu, Ming Gao
Identifier
DOI:10.1007/978-3-031-30637-2_45
Abstract
Federated learning (FL) is a distributed machine learning paradigm in which decentralized clients collaboratively train a model under the orchestration of a global server while protecting users' data privacy. Concept shift across clients, a specific type of data heterogeneity, challenges generic federated learning methods, which output the same model for all clients. Clustered federated learning is a natural choice for addressing concept shift. However, we empirically show that existing state-of-the-art clustered federated learning methods cannot match some personalized federated learning methods. We attribute this to the fact that they group clients based on entangled signals, which results in poor clustering. To tackle this problem, we devise a lightweight disentanglement mechanism that explicitly captures client-invariant and client-specific patterns. Incorporating the disentanglement mechanism into clients' local training, we propose a robust clustered federated learning framework (RCFL), which groups clients based on their client-specific signals. We conduct extensive experiments on three popular benchmark datasets to show the superiority of RCFL over competitive baselines, including personalized federated learning methods and clustered federated learning methods. Additional experiments demonstrate its robustness against several sensitive factors, and an ablation study verifies the effectiveness of the components introduced in RCFL.
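The abstract's core idea is that the server should cluster clients on their client-specific signals rather than on entangled model updates. A minimal sketch of that clustering step is below, assuming each client's disentangled client-specific pattern can be summarized as a small vector; the function names, vector shapes, and the plain k-means grouping are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical sketch: each client uploads only its client-specific signal
# (here an 8-dim vector), and the server groups clients with k-means on
# those signals. All names and shapes are illustrative assumptions.

rng = np.random.default_rng(0)

def make_client_signal(concept, dim=8):
    """Simulate a client-specific vector drawn near its concept's centroid."""
    centroids = {0: np.full(dim, -2.0), 1: np.full(dim, 2.0)}
    return centroids[concept] + 0.1 * rng.normal(size=dim)

def cluster_clients(signals, iters=20):
    """Plain 2-means on the stacked client-specific signals."""
    # Seed centers with the first client and the client farthest from it.
    first = signals[0]
    far = signals[np.linalg.norm(signals - first, axis=1).argmax()]
    centers = np.stack([first, far])
    for _ in range(iters):
        # Assign each client to its nearest cluster center.
        dists = np.linalg.norm(signals[:, None, :] - centers[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        # Recompute each center from its assigned clients.
        for j in range(2):
            if (labels == j).any():
                centers[j] = signals[labels == j].mean(axis=0)
    return labels

# Ten clients, two underlying concepts (a simple concept-shift setup).
concepts = [0, 1] * 5
signals = np.stack([make_client_signal(c) for c in concepts])
labels = cluster_clients(signals)

# Clients sharing a concept should land in the same cluster.
same = all(labels[i] == labels[i + 2] for i in range(8))
print("consistent grouping:", same)
```

Because the uploaded signals exclude the client-invariant part of the model, clients that differ only in concept separate cleanly, which is the intuition the abstract gives for why disentanglement improves clustering.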