Computer Science
Federated Learning
Benchmark
Convergence
Artificial Intelligence
Machine Learning
Edge Computing
Feature
Data Mining
Authors
Xuanming Ni,Xinyuan Shen,Huimin Zhao
Identifier
DOI:10.1016/j.eswa.2021.116310
Abstract
Federated learning is an attractive distributed learning paradigm that allows resource-constrained edge computing devices to cooperatively train machine learning models while keeping data local. However, the non-IID data distribution across devices is one of the main challenges affecting the performance of federated optimization algorithms. Inspired by knowledge distillation, this paper proposes a federated optimization algorithm, Federated Codistillation (FedCodl), in which a distillation term is added to the local objective function so that local models are also trained on the outputs of the global model. We further extend FedCodl to Federated Two-way Codistillation (Fed2Codl), which personalizes local models for each device while the global model is retained and iteratively updated in parallel. We then provide theoretical convergence guarantees for our approaches when learning strongly convex and smooth models. Finally, extensive experiments on federated benchmark datasets demonstrate that our approaches achieve superior classification accuracy and communication efficiency under multiple non-IID data distribution settings, on both traditional and deep learning architectures. The largest accuracy improvement over the standard Federated Averaging (FedAvg) framework is 22.26% across all communication rounds. Our results show the potential of the proposed approaches for further applications in federated learning.
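To make the core idea concrete, below is a minimal sketch of a distillation-augmented local update of the kind the abstract describes: each device minimizes its usual task loss plus a distillation term that pulls its predictions toward the outputs of the (frozen) global model. This is not the authors' reference implementation; the PyTorch setup, the function name `local_update`, the distillation weight `lam`, and the temperature `T` are illustrative assumptions.

```python
# Hypothetical sketch of a FedCodl-style local update (not the paper's code).
import torch
import torch.nn.functional as F

def local_update(local_model, global_model, loader, lam=0.5, T=1.0, lr=0.01, epochs=1):
    """Train a device's local model on its own data while distilling from the
    frozen global model's soft predictions (illustrative parameters)."""
    global_model.eval()
    opt = torch.optim.SGD(local_model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            with torch.no_grad():
                teacher_probs = F.softmax(global_model(x) / T, dim=1)   # global model's outputs
            student_logits = local_model(x)
            ce = F.cross_entropy(student_logits, y)                     # task loss on local data
            kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),     # distillation term toward
                          teacher_probs, reduction="batchmean") * T * T  # the global predictions
            loss = ce + lam * kd
            opt.zero_grad()
            loss.backward()
            opt.step()
    return local_model.state_dict()
```

In a FedAvg-style round, the server would aggregate the returned state dictionaries and broadcast the new global model; the two-way variant described in the abstract would additionally keep a personalized local model on each device that continues to distill from the updated global model.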