Keywords
Stochastic gradient descent
Momentum
Gradient descent
Convergence
Benchmark
Computer science
Rate of convergence
Constant (computer programming)
Algorithm
Schedule
Artificial neural network
Mathematical optimization
Artificial intelligence
Machine learning
Applied mathematics
Mathematics
Authors
Bao Wang, Tan N. Nguyen, Tao Sun, Andrea L. Bertozzi, Richard G. Baraniuk, Stanley Osher
Abstract
Stochastic gradient descent (SGD) algorithms, with constant momentum and its variants such as Adam, are the optimization methods of choice for training deep neural networks (DNNs). There is great interest in speeding up the convergence of these methods due to their high computational expense. Nesterov accelerated gradient with a time-varying momentum (NAG) improves the convergence rate of gradient descent for convex optimization using a specially designed momentum; however, it accumulates error when the stochastic gradient is used, slowing convergence at best and diverging at worst. In this paper, we propose scheduled restart SGD (SRSGD), a new NAG-style scheme for training DNNs. SRSGD replaces the constant momentum in SGD by the increasing momentum in NAG but stabilizes the iterations by resetting the momentum to zero according to a schedule. Using a variety of models and benchmarks for image classification, we demonstrate that, in training DNNs, SRSGD significantly improves convergence and generalization; for instance, in training ResNet-200 for ImageNet classification, SRSGD achieves an error rate of 20.93% versus the benchmark of 22.13%. These improvements become more significant as the network grows deeper. Furthermore, on both CIFAR and ImageNet, SRSGD reaches similar or even better error rates with significantly fewer training epochs compared to the SGD baseline. Our implementation of SRSGD is available at https://github.com/minhtannguyen/SRSGD.
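The scheme described in the abstract (a plain stochastic gradient step followed by a Nesterov-style extrapolation whose increasing momentum is periodically reset to zero) can be illustrated with a short sketch. The NumPy snippet below is a minimal illustration under that reading, not the authors' implementation; the names `srsgd_sketch`, `grad_fn`, and `restart_every` are hypothetical, and the momentum coefficient uses the classical Nesterov sequence for concreteness. The authors' actual code is available at the repository linked above.

```python
import numpy as np

def srsgd_sketch(grad_fn, w0, lr=0.1, restart_every=40, n_iters=200):
    """Minimal sketch of scheduled-restart SGD as described in the abstract:
    SGD with Nesterov's time-varying (increasing) momentum, reset to zero
    on a fixed schedule to keep the stochastic iterations stable."""
    w = w0.copy()        # extrapolated point at which gradients are taken
    v_prev = w0.copy()   # previous plain-gradient iterate
    t = 1.0              # Nesterov's momentum sequence, t_1 = 1
    for k in range(n_iters):
        # plain (stochastic) gradient step from the current point
        v = w - lr * grad_fn(w)
        # Nesterov's time-varying momentum coefficient (t_k - 1) / t_{k+1}
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        mu = (t - 1.0) / t_next
        # extrapolation (momentum) step
        w = v + mu * (v - v_prev)
        v_prev, t = v, t_next
        # scheduled restart: reset the sequence so the next step has zero momentum
        if (k + 1) % restart_every == 0:
            t = 1.0
            v_prev = v.copy()
    return w

# toy usage: minimize f(w) = 0.5 * ||w||^2, whose gradient is simply w
w_star = srsgd_sketch(lambda w: w, w0=np.ones(10))
```

In this sketch, setting `t = 1` at a restart makes the next momentum coefficient zero, which corresponds to the "resetting the momentum to zero according to a schedule" described in the abstract; the restart frequency is the scheduling knob.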