Augmented Lagrangian Method
Keywords: Mathematics, Lagrangian relaxation, Lagrangian, Mathematical optimization, Applied mathematics, Momentum (technical analysis), Constrained optimization problem, Stochastic optimization, Optimization problem, Finance, Economics
Authors
Qingjiang Shi, Xiao Wang, Hao Wang
Identifier
DOI: 10.1287/moor.2022.0193
Abstract
Nonconvex constrained stochastic optimization has emerged in many important application areas. Subject to general functional constraints, it minimizes the sum of an expectation function and a nonsmooth regularizer. The main challenges arise from the stochasticity in the random integrand and the possibly nonconvex functional constraints. To address these issues, we propose a momentum-based linearized augmented Lagrangian method (MLALM). MLALM adopts a single-loop framework and incorporates a recursive momentum scheme to compute the stochastic gradient, which enables the construction of a stochastic approximation to the augmented Lagrangian function. We provide an analysis of the global convergence of MLALM. Under mild conditions and with unbounded penalty parameters, we show that the sequences of the average stationarity measure and constraint violations converge in expectation. Under a constraint qualification assumption, the sequences of the average constraint violation and complementary slackness measure converge to zero in expectation. We also explore properties of these metrics when the penalty parameters are bounded. Furthermore, we investigate the oracle complexity of MLALM, in terms of the total number of stochastic gradient evaluations, to find an ϵ-stationary point and, under the constraint qualification, an ϵ-Karush-Kuhn-Tucker point. Numerical experiments on two types of test problems reveal the promising performance of the proposed algorithm.

Funding: This work was supported by the National Natural Science Foundation of China [Grant 12271278], the Major Key Project of PCL [Grant PCL2022A05], and the Natural Science Foundation of Shanghai [Grant 21ZR1442800].
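To make the core idea of the abstract concrete, here is a minimal toy sketch of a single-loop augmented Lagrangian iteration driven by a recursive momentum (STORM-style) stochastic gradient estimator. This is an illustration under my own assumptions, not the paper's MLALM: it omits the nonsmooth regularizer and inequality constraints, and the test problem, step sizes, and variable names are invented for the example.

```python
import numpy as np

# Toy problem (my own choice, not from the paper):
#   min_x  E_xi [ 0.5 * ||x - xi||^2 ]   s.t.   x1 + x2 - 1 = 0,
# with xi ~ N(mu, 0.1^2 I), mu = (2, 0).
# The analytic solution is x* = (1.5, -0.5) with multiplier 0.5.

rng = np.random.default_rng(0)
mu = np.array([2.0, 0.0])

def c(x):                        # equality constraint, c(x) = 0 when feasible
    return np.array([x[0] + x[1] - 1.0])

def jac_c(x):                    # constraint Jacobian (constant here)
    return np.array([[1.0, 1.0]])

x, x_prev = np.zeros(2), np.zeros(2)
lam = np.zeros(1)                # Lagrange multiplier estimate
d = np.zeros(2)                  # recursive momentum gradient estimator
rho, beta = 5.0, 0.1             # penalty parameter, momentum weight
eta, eta_dual = 0.05, 0.01       # primal and dual step sizes (hand-tuned)

for t in range(8000):
    xi = mu + 0.1 * rng.standard_normal(2)
    g = lambda z: z - xi         # stochastic gradient of 0.5 * ||z - xi||^2
    # Recursive momentum: evaluate the SAME sample at x and x_prev,
    # so d_t = g(x_t) + (1 - beta) * (d_{t-1} - g(x_{t-1})).
    d = g(x) + (1.0 - beta) * (d - g(x_prev))
    # Stochastic gradient of the augmented Lagrangian
    #   L_rho(x, lam) = f(x) + lam^T c(x) + (rho / 2) * ||c(x)||^2,
    # with the expectation gradient replaced by the momentum estimate d.
    grad_al = d + jac_c(x).T @ (lam + rho * c(x))
    x_prev = x.copy()
    x = x - eta * grad_al            # primal step on the linearized model
    lam = lam + eta_dual * c(x)      # small single-loop dual ascent step
```

After the loop, `x` is close to the constrained optimum `(1.5, -0.5)` and the constraint violation `c(x)` is near zero. The recursive momentum update is what controls the stochastic error without large batches: because the same sample `xi` is used at both `x` and `x_prev`, the correction term `d - g(x_prev)` injects only an O(beta)-scaled amount of fresh noise per iteration.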