Keywords
Computer science; Machine learning; Federated learning; Deep neural network; Artificial neural network; Artificial intelligence; Distributed computing; Algorithm; Regularization; Convergence; Computation
Authors
Jialuo Cui, Qiong Wu, Zhi Zhou, Xu Chen
Identifier
DOI: 10.1109/iccc55456.2022.9880769
Abstract
As a privacy-preserving paradigm of decentralized machine learning, federated learning (FL) has become a hot topic in the field of machine learning. Existing FL approaches generally assume that the global model can be deployed and trained on any client. In practical applications, however, the devices participating in FL are often heterogeneous, with different computation capacities, which makes training large neural network models difficult. Current solutions, such as shrinking the global model to fit all clients or excluding weak devices so that a larger model can be deployed, degrade model accuracy, owing either to the limited model scale or to the loss of the weak clients' data. To address the device heterogeneity inherent in FL, we propose FedBranch, a heterogeneous FL framework based on a multi-branch neural network model. Its core idea is to assign each client a branch model matched to its computation capacity. In FedBranch, a layer-wise aggregation method is designed to aggregate the different branches, and a model regularization method is introduced to improve FedBranch's convergence efficiency and model performance. In addition, we propose a training task offloading algorithm based on Split Learning to safely and effectively share training tasks among the different branch models. Extensive experiments on several datasets demonstrate that FedBranch achieves higher convergence efficiency and model accuracy than existing federated learning methods in various heterogeneous scenarios.
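The abstract does not spell out how branches of different depths are combined, but the layer-wise aggregation idea can be sketched as follows: each layer of the global model is averaged only over the clients whose branch is deep enough to contain that layer, weighted by local dataset size. This is a minimal illustration with made-up scalar "layers" and a hypothetical `layerwise_aggregate` function, not the paper's actual implementation.

```python
def layerwise_aggregate(client_weights, client_sizes, num_layers):
    """Average each layer over the clients that actually hold it,
    weighting by local dataset size.

    client_weights: per-client branch models, each a list of layer
                    parameters (scalars here for simplicity); shorter
                    lists correspond to shallower branches on weak clients.
    client_sizes:   local dataset sizes, used as aggregation weights.
    num_layers:     depth of the full global model.
    """
    global_model = []
    for layer in range(num_layers):
        # Only clients whose branch reaches this depth contribute.
        holders = [i for i, w in enumerate(client_weights) if len(w) > layer]
        total = sum(client_sizes[i] for i in holders)
        agg = sum(client_weights[i][layer] * client_sizes[i]
                  for i in holders) / total
        global_model.append(agg)
    return global_model

# Three clients with 1-, 2-, and 3-layer branches of a 3-layer global model.
weights = [[1.0], [1.0, 2.0], [3.0, 4.0, 5.0]]
sizes = [100, 100, 200]
print(layerwise_aggregate(weights, sizes, 3))
# Layer 0 averages all three clients; layer 2 comes from client 2 alone.
```

After aggregation, each client would receive back only the prefix of the global model that matches its branch depth, so weak clients still benefit from updates computed on stronger ones.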