Computer science
Momentum (technical analysis)
Convergence (economics)
News aggregator
Range (aeronautics)
Gradient descent
Stochastic gradient descent
Tracking (psycholinguistics)
Machine learning
Aerospace engineering
World Wide Web
Linguistics
Philosophy
Finance
Artificial neural network
Engineering
Economics
Economic growth
Authors
Zhengjie Yang, Wei Bao, Dong Yuan, Nguyen H. Tran, Albert Y. Zomaya
Identifiers
DOI: 10.1109/TPDS.2022.3206480
Abstract
Federated learning (FL) is a fast-developing technique that allows multiple workers to train a global model on a distributed dataset. Conventional FL (FedAvg) employs the gradient descent algorithm, which may not be efficient enough. Momentum can improve the situation by adding a momentum step that accelerates convergence, and it has demonstrated its benefits in both centralized and FL environments. Nesterov Accelerated Gradient (NAG) is well known to be a more advantageous form of momentum, but so far it has been unclear how to quantify the benefits of NAG in FL. This motivates us to propose FedNAG, which employs NAG in each worker and performs both NAG momentum and model aggregation in the aggregator. We provide a detailed convergence analysis of FedNAG and compare it with FedAvg. Extensive experiments based on real-world datasets and trace-driven simulation demonstrate that FedNAG increases learning accuracy by 3-24% and decreases total training time by 11-70% compared with the benchmarks under a wide range of settings.
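To make the abstract's core idea concrete, below is a minimal, self-contained Python/NumPy sketch of a FedNAG-style training round: each worker runs a few local NAG steps on its own data, and the aggregator averages both the model and the momentum terms across workers. The toy quadratic objective, uniform aggregation weights, the specific NAG formulation, and names such as `local_grad` are assumptions made for illustration, not details taken from the paper.

```python
import numpy as np

GAMMA, LR = 0.9, 0.1                      # momentum factor and learning rate
NUM_WORKERS, LOCAL_STEPS, ROUNDS = 4, 5, 20

rng = np.random.default_rng(0)
# Hypothetical toy objective per worker i: F_i(w) = 0.5 * ||w - c_i||^2,
# so grad F_i(w) = w - c_i and the global optimum is mean(c_i).
centers = rng.normal(size=(NUM_WORKERS, 2))

def local_grad(i, w):
    """Gradient of worker i's (assumed) local objective."""
    return w - centers[i]

w_global = np.zeros(2)                    # global model
v_global = np.zeros(2)                    # global momentum

for _ in range(ROUNDS):
    new_ws, new_vs = [], []
    for i in range(NUM_WORKERS):
        # Each worker starts from the aggregated model AND momentum.
        w, v = w_global.copy(), v_global.copy()
        for _ in range(LOCAL_STEPS):
            g = local_grad(i, w)
            v = GAMMA * v - LR * g        # momentum update
            w = w + GAMMA * v - LR * g    # Nesterov lookahead step
        new_ws.append(w)
        new_vs.append(v)
    # Aggregator averages BOTH model and momentum (the FedNAG idea);
    # uniform weights assume equal-sized local datasets.
    w_global = np.mean(new_ws, axis=0)
    v_global = np.mean(new_vs, axis=0)

print("learned:", w_global, "optimum:", centers.mean(axis=0))
```

The key design point the sketch highlights is that, unlike FedAvg, the aggregator must synchronize the momentum state alongside the model; otherwise each worker's local NAG steps would restart from stale or zero momentum every round.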