Tags
variance reduction, reduction (mathematics), convergence (economics), algorithm, regular polygon, convex function, convex optimization, computer science, variance (accounting), distributed algorithm, rate of convergence, mathematical optimization, tracking (education), mathematics, key (lock), psychology, pedagogy, geometry, accounting, computer security, economics, business, programming language, economic growth
Authors
Xia Jiang, Xianlin Zeng, Jian Sun, Jie Chen
Source
Journal: Cornell University - arXiv
Date: 2021-01-01
Cited by: 1
Identifier
DOI: 10.48550/arxiv.2106.14479
Abstract
This paper proposes a distributed stochastic algorithm with variance reduction for general smooth non-convex finite-sum optimization, which has wide applications in the signal processing and machine learning communities. In the distributed setting, a large number of samples is allocated to multiple agents in the network. Each agent computes a local stochastic gradient and communicates with its neighbors to seek the global optimum. In this paper, we develop a modified variance reduction technique to deal with the variance introduced by stochastic gradients. Combining gradient tracking and variance reduction techniques, this paper proposes a distributed stochastic algorithm, GT-VR, to solve large-scale non-convex finite-sum optimization over multi-agent networks. A complete and rigorous proof shows that the GT-VR algorithm converges to first-order stationary points at an $O(\frac{1}{k})$ convergence rate. In addition, we provide a complexity analysis of the proposed algorithm: compared with some existing first-order methods, the proposed algorithm has a lower $\mathcal{O}(PM\epsilon^{-1})$ gradient complexity under mild conditions. By comparing GT-VR with state-of-the-art algorithms in experimental simulations, we verify the efficiency of the proposed algorithm.
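A concrete reading of the finite-sum setting (an assumption consistent with the $\mathcal{O}(PM\epsilon^{-1})$ complexity claim; the paper's exact symbol assignments may differ) is that $M$ agents each hold $P$ local samples and cooperatively solve

$\min_{x \in \mathbb{R}^d} f(x) = \frac{1}{M}\sum_{i=1}^{M} f_i(x), \quad f_i(x) = \frac{1}{P}\sum_{j=1}^{P} f_{i,j}(x),$

where agent $i$ can evaluate only its own component gradients $\nabla f_{i,j}$ and exchange iterates with neighbors through a doubly stochastic mixing matrix $W$.

The following is a minimal, single-process Python sketch of one plausible GT-VR-style iteration, pairing an SVRG-flavored variance-reduced gradient estimate with a gradient tracking recursion, as the abstract describes. The quadratic losses, ring topology, step size, and snapshot schedule are illustrative assumptions, not the paper's exact algorithm or parameters.

import numpy as np

# Illustrative sketch of a GT-VR-style update (assumed form, not the paper's
# exact recursion). Hypothetical least-squares components f_{i,j}; the paper
# treats general smooth non-convex f_{i,j}.
rng = np.random.default_rng(0)
M, P, d = 4, 50, 10                      # agents, samples per agent, dimension
A = rng.normal(size=(M, P, d))
b = rng.normal(size=(M, P))

def grad(i, j, x):
    # Gradient of f_{i,j}(x) = 0.5 * (a_{ij}^T x - b_{ij})^2
    a = A[i, j]
    return (a @ x - b[i, j]) * a

def full_local_grad(i, x):
    # Full local gradient (1/P) * sum_j grad f_{i,j}(x)
    return np.mean([grad(i, j, x) for j in range(P)], axis=0)

# Doubly stochastic mixing matrix for a ring of M agents (assumed topology).
W = np.zeros((M, M))
for i in range(M):
    W[i, i] = 0.5
    W[i, (i - 1) % M] = 0.25
    W[i, (i + 1) % M] = 0.25

alpha, T = 0.05, 300                     # step size and iteration count (assumed)

x = np.zeros((M, d))                     # local iterates x_i
snap = x.copy()                          # SVRG-style snapshot points
snap_grad = np.stack([full_local_grad(i, snap[i]) for i in range(M)])
v = snap_grad.copy()                     # variance-reduced gradient estimates
y = v.copy()                             # gradient trackers, y_i^0 = v_i^0

for k in range(T):
    x_new = W @ x - alpha * y            # consensus step + descent along tracker
    v_new = np.empty_like(v)
    for i in range(M):
        if (k + 1) % P == 0:             # periodically refresh the snapshot
            snap[i] = x_new[i]
            snap_grad[i] = full_local_grad(i, snap[i])
        j = rng.integers(P)              # draw one local sample
        # SVRG-style estimate: unbiased, variance vanishing near snapshots
        v_new[i] = grad(i, j, x_new[i]) - grad(i, j, snap[i]) + snap_grad[i]
    y = W @ y + v_new - v                # gradient tracking recursion
    v, x = v_new, x_new

x_avg = x.mean(axis=0)
g = np.mean([full_local_grad(i, x_avg) for i in range(M)], axis=0)
print("gradient norm at network average:", np.linalg.norm(g))

In this scheme the tracker $y_i$ estimates the network-wide average gradient, while the snapshot-based correction shrinks the variance of the stochastic estimates; these are the two mechanisms the abstract credits for the $O(\frac{1}{k})$ rate and the reduced gradient complexity.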