Authors
Yuqia Wu, Shaohua Pan, Xiaoqi Yang
Abstract
This paper is concerned with \(\ell_q\) \((0<q<1)\)-norm regularized minimization problems with a twice continuously differentiable loss function. For this class of nonconvex and nonsmooth composite problems, many algorithms have been proposed, most of which are of the first-order type. In this work, we propose a hybrid of the proximal gradient method and the subspace regularized Newton method, called HpgSRN. The whole iterate sequence produced by HpgSRN is proved to have finite length and to converge to an \(L\)-type stationary point under a mild curve-ratio condition and the Kurdyka–Łojasiewicz property of the cost function; it converges linearly if, in addition, the Kurdyka–Łojasiewicz property holds with exponent \(1/2\). Moreover, a superlinear convergence rate for the iterate sequence is achieved under an additional local error bound condition. Our convergence results do not require the isolatedness and strict local minimality of the \(L\)-stationary point. Numerical comparisons with ZeroFPR, a hybrid of the proximal gradient method and a quasi-Newton method for the forward-backward envelope of the cost function proposed in [A. Themelis, L. Stella, and P. Patrinos, SIAM J. Optim., 28 (2018), pp. 2274–2303], on \(\ell_q\)-norm regularized linear and logistic regressions with real data indicate that HpgSRN not only requires much less computing time but also yields comparable or even better sparsity and objective function values.

Keywords: \(\ell_q\)-norm regularized composite optimization, regularized Newton method, global convergence, superlinear convergence rate, KL property, local error bound

MSC codes: 90C26, 65K05, 90C06, 49J52
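The problem class in question is \(\min_{x\in\mathbb{R}^n} f(x) + \lambda\|x\|_q^q\) with \(\|x\|_q^q = \sum_i |x_i|^q\) and \(0<q<1\). HpgSRN itself alternates proximal gradient steps with subspace regularized Newton steps; the paper's algorithm is not reproduced here. As a minimal sketch of the proximal gradient component only: the proximal mapping of \(\lambda|\cdot|^q\) is separable, and for each coordinate the minimizer of \(\tfrac12(x-z)^2 + \lambda|x|^q\) is either \(0\) or the larger root of the stationarity equation \(x + \lambda q x^{q-1} = |z|\), whichever has the smaller objective value. The Python below solves this root-finding comparison numerically; the names prox_lq_scalar and prox_grad_step are illustrative, not from the paper.

```python
import numpy as np

def prox_lq_scalar(z, lam, q, tol=1e-12):
    """argmin_x 0.5*(x - z)**2 + lam*|x|**q for a scalar z, with 0 < q < 1.

    Compares x = 0 against the larger positive root of the stationarity
    equation psi(x) := x + lam*q*x**(q-1) = |z|, found by bisection on the
    interval where psi is increasing."""
    if lam <= 0.0:
        return float(z)
    a = abs(z)
    # psi attains its minimum at xbar; no nonzero stationary point exists
    # unless |z| exceeds psi(xbar), in which case 0 is the only candidate.
    xbar = (lam * q * (1.0 - q)) ** (1.0 / (2.0 - q))
    psi = lambda x: x + lam * q * x ** (q - 1.0)
    if a <= psi(xbar):
        return 0.0
    # Bisection on [xbar, a]: psi(xbar) < a and psi(a) > a, psi increasing here.
    lo, hi = xbar, a
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if psi(mid) < a:
            lo = mid
        else:
            hi = mid
    x = 0.5 * (lo + hi)
    # Keep the nonzero stationary point only if it beats x = 0.
    f_x = 0.5 * (x - a) ** 2 + lam * x ** q
    f_0 = 0.5 * a ** 2
    return float(np.sign(z)) * x if f_x < f_0 else 0.0

def prox_grad_step(x, grad_f, step, lam, q):
    """One proximal gradient step for min_x f(x) + lam * ||x||_q^q."""
    z = x - step * grad_f(x)
    return np.array([prox_lq_scalar(zi, lam * step, q) for zi in z])
```

For a least-squares loss \(f(x)=\tfrac12\|Ax-b\|^2\), one would pass `grad_f = lambda x: A.T @ (A @ x - b)` and a step size below \(1/\|A\|_2^2\); iterating `prox_grad_step` then gives the first-order baseline that HpgSRN accelerates with its subspace Newton steps.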