Momentum (technical analysis)
Balanced flow
Ball (mathematics)
Flow (mathematics)
Invariant (physics)
Artificial neural network
Function (biology)
Mathematics
Perturbation (astronomy)
Applied mathematics
Computer science
Statistical physics
Mathematical optimization
Physics
Mathematical analysis
Artificial intelligence
Mathematical physics
Geometry
Economics
Finance
Quantum mechanics
Evolutionary biology
Biology
Authors
Nikola B. Kovachki, Andrew M. Stuart
Source
Journal: Cornell University - arXiv
Date: 2019-06-10
Citations: 9
Abstract
Gradient descent-based optimization methods underpin the parameter training that produces the impressive results now achieved when testing neural networks. Introducing stochasticity is key to their success in practical problems, and there is some understanding of the role of stochastic gradient descent in this context. Momentum modifications of gradient descent, such as Polyak's Heavy Ball method (HB) and Nesterov's method of accelerated gradients (NAG), are widely adopted. In this work, our focus is on understanding the role of momentum in the training of neural networks, concentrating on the common situation in which the momentum contribution is fixed at each step of the algorithm; to expose the ideas simply we work in the deterministic setting. We show that, contrary to popular belief, standard implementations of fixed momentum methods do no more than act to rescale the learning rate. We achieve this by showing that the momentum method converges to a gradient flow, with a momentum-dependent time-rescaling, using the method of modified equations from numerical analysis. Further, we show that the momentum method admits an exponentially attractive invariant manifold on which the dynamics reduce to a gradient flow with respect to a modified loss function, equal to the original one plus a small perturbation.
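As a hedged numerical illustration of the rescaling claim (a sketch, not code from the paper): for the standard Heavy Ball iteration theta_{k+1} = theta_k - alpha * grad L(theta_k) + beta * (theta_k - theta_{k-1}) with fixed momentum beta, the modified-equation argument summarized above says the iterates track the gradient flow d theta / dt = -grad L(theta) at rescaled time t = k * alpha / (1 - beta), which is the same flow tracked by plain gradient descent with learning rate alpha / (1 - beta). The quadratic loss, step sizes, and iteration count below are illustrative assumptions.

import numpy as np

# Illustrative quadratic loss L(x) = 0.5 * x^T A x (an assumption for this
# sketch, not an example taken from the paper).
A = np.diag([1.0, 0.1])
def grad(x):
    return A @ x

alpha, beta, steps = 1e-3, 0.9, 2000   # small learning rate, fixed momentum
x0 = np.array([1.0, 1.0])

# Heavy Ball: x_{k+1} = x_k - alpha*grad(x_k) + beta*(x_k - x_{k-1})
x_prev, x = x0.copy(), x0.copy()
hb = [x0.copy()]
for _ in range(steps):
    x_next = x - alpha * grad(x) + beta * (x - x_prev)
    x_prev, x = x, x_next
    hb.append(x.copy())

# Plain gradient descent at the momentum-rescaled learning rate alpha/(1-beta)
y = x0.copy()
gd = [x0.copy()]
for _ in range(steps):
    y = y - (alpha / (1.0 - beta)) * grad(y)
    gd.append(y.copy())

# Both iterations approximate the same gradient flow, so the trajectories
# should agree up to an O(alpha) discrepancy and a short initial transient.
diff = np.linalg.norm(np.array(hb) - np.array(gd), axis=1)
print("max trajectory gap:", diff.max())
print("final iterates:    ", hb[-1], gd[-1])

In this small-step regime the gap between the two trajectories is small relative to the distance travelled, which is the sense in which a fixed momentum term "does no more than rescale the learning rate" in the abstract's statement.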