Keywords
Mathematical optimization
Duality
Convergence
Proximal gradient method
Computer science
Convex optimization
Empirical risk minimization
Shrinkage
Coordinate descent
Gradient method
Stochastic gradient descent
Rate of convergence
Convex function
Applied mathematics
Mathematics
Artificial intelligence
Artificial neural network
Authors
Qihang Lin, Zhaosong Lu, Lin Xiao
Source
Venue: Neural Information Processing Systems
Date: 2014-12-08
Volume/Pages: 27: 3059-3067
Citations: 98
Abstract
We develop an accelerated randomized proximal coordinate gradient (APCG) method for solving a broad class of composite convex optimization problems. In particular, our method achieves faster linear convergence rates for minimizing strongly convex functions than existing randomized proximal coordinate gradient methods. We show how to apply the APCG method to solve the dual of the regularized empirical risk minimization (ERM) problem, and devise efficient implementations that avoid full-dimensional vector operations. For ill-conditioned ERM problems, our method obtains better convergence rates than the state-of-the-art stochastic dual coordinate ascent (SDCA) method.
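The abstract contrasts APCG with existing randomized proximal coordinate gradient methods. As a point of reference, below is a minimal, illustrative sketch of that non-accelerated baseline applied to a lasso-type composite problem, min_x 0.5*||Ax - b||^2 + lam*||x||_1. It is not the authors' APCG algorithm or their dual-ERM implementation; the function names, step sizes, regularization weight, and problem dimensions are assumptions chosen only for the example.

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding: the proximal operator of t * |.| applied elementwise."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def rand_prox_coord_gradient(A, b, lam, n_iters=20000, seed=0):
    """Non-accelerated randomized proximal coordinate gradient method for
        min_x 0.5 * ||A x - b||^2 + lam * ||x||_1.
    Each iteration picks one coordinate j uniformly at random and applies a
    proximal gradient step with the coordinate-wise Lipschitz constant
    L_j = ||A[:, j]||^2 of the smooth part."""
    m, n = A.shape
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    residual = A @ x - b           # maintained incrementally, O(m) per update
    L = np.sum(A * A, axis=0)      # coordinate-wise Lipschitz constants
    for _ in range(n_iters):
        j = rng.integers(n)
        g_j = A[:, j] @ residual                 # partial gradient of the smooth part w.r.t. x_j
        x_j_new = soft_threshold(x[j] - g_j / L[j], lam / L[j])
        residual += A[:, j] * (x_j_new - x[j])   # update residual without recomputing A @ x
        x[j] = x_j_new
    return x

if __name__ == "__main__":
    # Small synthetic problem (illustrative sizes only).
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 50))
    x_true = np.zeros(50)
    x_true[:5] = rng.standard_normal(5)
    b = A @ x_true + 0.01 * rng.standard_normal(200)
    x_hat = rand_prox_coord_gradient(A, b, lam=0.1)
    obj = 0.5 * np.linalg.norm(A @ x_hat - b) ** 2 + 0.1 * np.abs(x_hat).sum()
    print("objective:", round(obj, 4),
          "nonzeros:", int(np.count_nonzero(np.abs(x_hat) > 1e-6)))
```

APCG augments this kind of per-coordinate update with Nesterov-style momentum sequences, which is what yields the faster linear rates claimed in the abstract; the sketch above only shows the basic coordinate-wise proximal step being accelerated.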