Computer science
Gradient descent
Convergence (economics)
Key (lock)
Stochastic gradient descent
Artificial intelligence
Convex function
Distributed computing
Algorithm
Theoretical computer science
Regular polygon
Mathematics
Operating system
Artificial neural network
Economic growth
Economics
Geometry
Authors
Jiande Sun, Tianyi Chen, Georgios B. Giannakis, Qinmin Yang, Zaiyue Yang
Identifier
DOI: 10.1109/tpami.2020.3033286
Abstract
This paper focuses on the communication-efficient federated learning problem and develops a novel distributed quantized gradient approach, which is characterized by adaptive communication of the quantized gradients. Specifically, federated learning builds upon the server-worker infrastructure, where the workers calculate local gradients and upload them to the server; the server then obtains the global gradient by aggregating all the local gradients and utilizes it to update the model parameters. The key idea for saving communication from the workers to the server is to quantize gradients and to skip less informative quantized-gradient uploads by reusing previous gradients. Quantizing and skipping result in 'lazy' worker-server communication, which justifies the term Lazily Aggregated Quantized (LAQ) gradient. Theoretically, the LAQ algorithm achieves the same linear convergence as gradient descent in the strongly convex case, while achieving major communication savings in terms of transmitted bits and communication rounds. Empirically, extensive experiments using realistic data corroborate a significant communication reduction compared with state-of-the-art gradient- and stochastic gradient-based algorithms.
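The following is a minimal sketch of the worker-server loop the abstract describes: each worker quantizes its local gradient and uploads it only when it differs sufficiently from the gradient it last sent, and the server otherwise reuses the stale value when aggregating. It is an illustrative reconstruction under simplifying assumptions, not the authors' reference implementation: the function names (`quantize`, `laq_round`, `local_gradient`), the uniform quantizer, the least-squares local loss, and the norm-based `skip_threshold` rule are all assumptions; the paper's actual skipping criterion is more elaborate.

```python
import numpy as np

def local_gradient(model, X, y):
    """Least-squares gradient as a stand-in for an arbitrary local loss (assumed)."""
    return X.T @ (X @ model - y) / len(y)

def quantize(grad, ref, bits=4):
    """Uniformly quantize the innovation grad - ref with the given bit width (assumed quantizer)."""
    diff = grad - ref
    radius = np.max(np.abs(diff)) + 1e-12            # quantization range
    levels = 2 ** bits - 1
    q = np.round((diff + radius) / (2 * radius) * levels)
    return ref + q / levels * 2 * radius - radius

def laq_round(model, data_shards, last_sent, lr=0.1, skip_threshold=1e-3, bits=4):
    """One communication round: each worker quantizes its local gradient and
    uploads it only if it differs enough from the gradient it last sent;
    otherwise the server reuses the previously received (stale) gradient."""
    aggregated = np.zeros_like(model)
    for m, (X, y) in enumerate(data_shards):
        grad = local_gradient(model, X, y)           # worker-side computation
        q_grad = quantize(grad, last_sent[m], bits)  # quantize against last upload
        if np.linalg.norm(q_grad - last_sent[m]) ** 2 >= skip_threshold:
            last_sent[m] = q_grad                    # "fresh" upload to the server
        # otherwise the upload is skipped and last_sent[m] is reused
        aggregated += last_sent[m]
    model = model - lr * aggregated / len(data_shards)  # server-side update
    return model, last_sent

# Toy usage on synthetic least-squares shards (hypothetical data)
rng = np.random.default_rng(0)
shards = [(rng.standard_normal((20, 5)), rng.standard_normal(20)) for _ in range(4)]
model = np.zeros(5)
last_sent = [np.zeros(5) for _ in shards]
for _ in range(100):
    model, last_sent = laq_round(model, shards, last_sent)
```

In this sketch the bit savings come from transmitting only a few bits per coordinate of the gradient innovation, and the round savings come from workers whose quantized gradient barely changed staying silent for that round.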