Computer science
Inference
Federated learning
Process (computing)
Distributed computing
Bandwidth (computing)
Machine learning
Diversity (cybernetics)
Artificial intelligence
Selection (genetic algorithm)
Computer network
Operating system
Authors
Yongqing Zhao, Qingshuang Sun
Identifiers
DOI:10.1080/09540091.2022.2114428
Abstract
Federated learning (FL) is an emerging distributed machine learning technique. However, when dealing with heterogeneous data, a single shared global model cannot generalise to every device's local data. Furthermore, the FL training process requires frequent parameter communication, which is hampered by the limited bandwidth and unstable connections of participating devices. These two issues significantly affect FL's effectiveness and efficiency. In this paper, an enhanced communication-efficient personalised FL technique, FedGB, is proposed. Unlike existing approaches, FedGB is built on the premise that exchanging only the common information from training results on different devices improves local personalised training more effectively. FedGB dynamically selects backbone structures in the local models to represent the dynamically determined backbone information (common features) in the global model for aggregation. Exchanging only common features between nodes reduces the impact of heterogeneous data to a certain extent, and the dynamic adaptive sub-model selection avoids the need to manually set the sub-model scale. FedGB can thus reduce communication overhead while maintaining inference accuracy. Results obtained in a variety of experimental settings show that FedGB effectively improves both communication efficiency and inference accuracy.
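The abstract does not spell out how FedGB chooses its backbone structures, so the following is only a minimal sketch of backbone-based partial aggregation in general, assuming a top-k-by-update-magnitude selection criterion; the function names (`select_backbone`, `aggregate`) and the fixed selection ratio are illustrative placeholders, not the paper's actual algorithm.

```python
# Sketch of backbone-style partial aggregation, NOT FedGB itself:
# the selection rule (top-k by update magnitude) and the fixed
# `ratio` are assumptions made for illustration only.
import numpy as np

def select_backbone(update, ratio):
    """Return a boolean mask over the flattened parameter update,
    keeping the fraction `ratio` with the largest magnitude."""
    k = max(1, int(ratio * update.size))
    threshold = np.partition(np.abs(update), -k)[-k]
    return np.abs(update) >= threshold

def aggregate(global_params, client_updates, masks):
    """Average each parameter only over the clients whose backbone
    mask selected it; parameters selected by no client are left
    unchanged, so personalised (non-common) weights stay local."""
    stacked = np.stack(masks)          # (num_clients, num_params)
    counts = stacked.sum(axis=0)       # clients that sent each param
    summed = np.zeros_like(global_params)
    for upd, mask in zip(client_updates, masks):
        summed[mask] += upd[mask]
    agg = np.where(counts > 0, summed / np.maximum(counts, 1), 0.0)
    return global_params + agg

# Toy round: 3 clients, 10 parameters, client-specific updates.
rng = np.random.default_rng(0)
global_params = np.zeros(10)
updates = [rng.normal(size=10) for _ in range(3)]
masks = [select_backbone(u, ratio=0.4) for u in updates]
global_params = aggregate(global_params, updates, masks)
print(global_params)
```

Because each client uploads only its masked sub-model rather than the full parameter vector, per-round communication shrinks roughly in proportion to the selection ratio, which is the efficiency effect the abstract describes; in FedGB that ratio is determined adaptively rather than fixed by hand.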