Acceleration
Computer science
Offset (computer science)
Convergence (economics)
Rate of convergence
Mathematical optimization
Inefficiency
Global optimization
Local optimum
Artificial intelligence
Algorithm
Machine learning
Mathematics
Parallel computing
Key (lock)
Economics
Economic growth
Computer security
Microeconomics
Programming language
Authors
Y.H. Sun, Li Shen, Hao Sun, Liang Ding, Dacheng Tao
Identifier
DOI: 10.1109/TPAMI.2023.3300886
Abstract
Adaptive optimization has achieved notable success in distributed learning, but extending adaptive optimizers to federated learning (FL) suffers from severe inefficiency, including (i) rugged convergence due to inaccurate gradient estimation in the global adaptive optimizer, and (ii) client drift exacerbated by local over-fitting with the local adaptive optimizer. In this work, we propose a novel momentum-based algorithm that combines global gradient descent with a locally adaptive amended optimizer to tackle these difficulties. Specifically, we incorporate a local amendment technique into the adaptive optimizer, named Federated Local ADaptive Amended optimizer (FedLADA), which estimates the global average offset from the previous communication round and corrects the local offset through a momentum-like term to further improve the empirical training speed and mitigate heterogeneous over-fitting. Theoretically, we establish the convergence rate of FedLADA with a linear speedup property in the non-convex case under partial participation settings. Moreover, we conduct extensive experiments on real-world datasets to demonstrate the efficacy of the proposed FedLADA, which greatly reduces the number of communication rounds and achieves higher accuracy than several baselines.
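To make the mechanism described above concrete, here is a minimal sketch of one communication round, assuming an Adam-style local adaptive optimizer whose per-step direction is blended with the global average offset broadcast from the previous round; the mixing weight lam, the exact blend, and the offset normalization are illustrative assumptions, not the paper's precise update rule.

import numpy as np

def local_amended_round(w_global, grad_fn, global_offset, local_steps=10,
                        lr=1e-3, beta1=0.9, beta2=0.999, lam=0.5, eps=1e-8):
    # One client's local round: Adam-style adaptive steps, each amended by a
    # momentum-like pull toward the global average per-step offset estimated
    # in the previous communication round. `lam` and the blend form are
    # assumptions for illustration.
    w = w_global.copy()
    m = np.zeros_like(w)  # first-moment estimate
    v = np.zeros_like(w)  # second-moment estimate
    for t in range(1, local_steps + 1):
        g = grad_fn(w)                          # local stochastic gradient
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)
        v_hat = v / (1 - beta2 ** t)
        local_dir = m_hat / (np.sqrt(v_hat) + eps)
        amended = lam * local_dir + (1 - lam) * global_offset
        w = w - lr * amended
    return w

def server_round(w_global, client_models, lr=1e-3, local_steps=10, server_lr=1.0):
    # Server-side global gradient descent on the averaged client offset; also
    # returns a per-step offset estimate to broadcast for the next round's
    # local amendment (crude normalization, an assumption of this sketch).
    avg_offset = np.mean([w_global - w_c for w_c in client_models], axis=0)
    new_global = w_global - server_lr * avg_offset
    per_step_offset = avg_offset / (lr * local_steps)
    return new_global, per_step_offset

A driver loop would sample a subset of clients each round (partial participation), run local_amended_round on each with the previous round's per_step_offset, and then call server_round to update the global model and refresh the offset estimate.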