Authors
Yu Feng,Yangli-ao Geng,Yifan Zhu,Zongfu Han,Xie Yu,Kaiwen Xue,Haoran Luo,Manlong Sun,Guangwei Zhang,Meina Song
Identifier
DOI:10.1145/3696410.3714561
Abstract
Federated learning (FL) has gained widespread attention for its privacy-preserving and collaborative learning capabilities. Due to significant statistical heterogeneity, traditional FL struggles to generalize a shared model across diverse data domains. Personalized federated learning addresses this issue by dividing the model into a globally shared part and a locally private part, with the local part correcting representation biases introduced by the global model. Nevertheless, locally converged parameters capture domain-specific knowledge more accurately, and current methods overlook the potential benefits of these parameters. To address these limitations, we propose the PM-MoE architecture. It integrates a mixture of personalized modules with energy-based denoising of personalized modules, enabling each client to select beneficial personalized parameters from other clients. We applied the PM-MoE architecture to nine recent model-split-based personalized federated learning algorithms, achieving performance improvements with minimal additional training. Extensive experiments on six widely adopted datasets and two heterogeneity settings validate the effectiveness of our approach. The source code is available at \url{https://github.com/dannis97500/PM-MOE}.
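The abstract describes mixing per-client personalized modules so that a client can draw on parameters converged on other clients' domains. The sketch below is not the authors' implementation (see the linked repository for that); it is a minimal, assumption-laden illustration of the mixture idea: a set of frozen personalized heads collected from other clients, combined by a small locally trained gate on top of the shared representation. Class `PersonalizedMoE`, the gating design, and all dimensions are hypothetical, and the energy-based denoising step is omitted.

```python
# Minimal sketch of a mixture over per-client personalized modules (illustrative only).
import torch
import torch.nn as nn


class PersonalizedMoE(nn.Module):
    """Mix frozen personalized heads from other clients via a locally trained gate."""

    def __init__(self, expert_heads, feature_dim):
        super().__init__()
        # Personalized modules gathered from other clients; kept frozen so only the
        # lightweight gate needs additional local training.
        self.experts = nn.ModuleList(expert_heads)
        for p in self.experts.parameters():
            p.requires_grad = False
        # Per-sample gate over experts, conditioned on the shared representation.
        self.gate = nn.Linear(feature_dim, len(expert_heads))

    def forward(self, shared_features):
        # shared_features: output of the globally shared part of the split model, (B, D).
        weights = torch.softmax(self.gate(shared_features), dim=-1)        # (B, E)
        expert_out = torch.stack(
            [head(shared_features) for head in self.experts], dim=1
        )                                                                   # (B, E, C)
        return (weights.unsqueeze(-1) * expert_out).sum(dim=1)             # (B, C)


if __name__ == "__main__":
    feature_dim, num_classes, num_clients = 64, 10, 5
    heads = [nn.Linear(feature_dim, num_classes) for _ in range(num_clients)]
    moe = PersonalizedMoE(heads, feature_dim)
    logits = moe(torch.randn(8, feature_dim))
    print(logits.shape)  # torch.Size([8, 10])
```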