Exponent
Mathematics
Iterated function
Rate of convergence
Convergence (economics)
Limit (mathematics)
Sequence (biology)
Combinatorics
Upper and lower bounds
Applied mathematics
Discrete mathematics
Mathematical analysis
Computer science
Biology
Channel (broadcasting)
Philosophy
Genetics
Economic growth
Economics
Linguistics
Computer network
Authors
Li Xiao, Andre Milzarek, Junwen Qiu
Abstract
We study the random reshuffling (\(\mathsf{RR}\)) method for smooth nonconvex optimization problems with a finite-sum structure. Though this method is widely utilized in practice, e.g., in the training of neural networks, its convergence behavior is only understood in several limited settings. In this paper, under the well-known Kurdyka–Łojasiewicz (KL) inequality, we establish strong limit-point convergence results for \(\mathsf{RR}\) with appropriate diminishing step sizes; namely, the whole sequence of iterates generated by \(\mathsf{RR}\) is convergent and converges to a single stationary point in an almost sure sense. In addition, we derive the corresponding rate of convergence, depending on the KL exponent and suitably selected diminishing step sizes. When the KL exponent lies in \([0,\frac 12]\), the convergence is at a rate of \(\mathcal{O}(t^{-1})\) with \(t\) counting the number of iterations. When the KL exponent belongs to \((\frac 12,1)\), our derived convergence rate is of the form \(\mathcal{O}(t^{-q})\) with \(q\in (0,1)\) depending on the KL exponent. The standard KL inequality-based convergence analysis framework only applies to algorithms with a certain descent property. We conduct a novel convergence analysis for the nondescent \(\mathsf{RR}\) method with diminishing step sizes based on the KL inequality, which generalizes the standard KL framework. We summarize our main steps and core ideas in an informal analysis framework, which is of independent interest. As a direct application of this framework, we establish similar strong limit-point convergence results for the reshuffled proximal point method.

Keywords: random reshuffling, Kurdyka–Łojasiewicz framework, strong limit-point convergence, convergence rates

MSC codes: 90C30, 90C06, 90C26, 90C15
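For readers unfamiliar with it, the KL inequality invoked above is commonly stated in the following Łojasiewicz form (this is the standard textbook version, not a quotation from the paper): around a stationary point \(x^\star\) of a smooth function \(f\) with KL exponent \(\theta\in[0,1)\), there exist \(c>0\) and a neighborhood \(U\) of \(x^\star\) such that

\[\|\nabla f(x)\| \ge c\,|f(x)-f(x^\star)|^{\theta} \quad \text{for all } x\in U.\]

Smaller exponents force the gradient to stay large relative to the remaining suboptimality, which is consistent with the rates above: \(\mathcal{O}(t^{-1})\) for \(\theta\in[0,\frac 12]\), degrading to \(\mathcal{O}(t^{-q})\) with \(q\in(0,1)\) for \(\theta\in(\frac 12,1)\).

The \(\mathsf{RR}\) update itself is simple to state: in each epoch, draw a fresh permutation of the \(n\) component functions and take one incremental gradient step per component. The following Python sketch illustrates this under our own illustrative assumptions; the function names, the toy least-squares problem, and the \(1/(t+10)\) schedule are ours, not the paper's, whose analysis concerns general smooth nonconvex finite sums.

import numpy as np

def random_reshuffling(grads, x0, epochs, step_size):
    """Minimal random reshuffling (RR) sketch for f(x) = (1/n) * sum_i f_i(x).

    grads     -- list of callables; grads[i](x) returns the gradient of f_i at x
    x0        -- initial iterate (NumPy array)
    epochs    -- number of passes over the components
    step_size -- callable t -> alpha_t, a diminishing step-size schedule
    """
    x = x0.copy()
    n = len(grads)
    for t in range(epochs):
        alpha = step_size(t)                 # diminishing step size for epoch t
        for i in np.random.permutation(n):   # reshuffle: sample components without replacement
            x = x - alpha * grads[i](x)      # one incremental gradient step per component
    return x

# Hypothetical toy problem (illustrative only): f_i(x) = 0.5 * (a_i @ x - b_i)**2.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((50, 5)), rng.standard_normal(50)
grads = [lambda x, a=A[i], bi=b[i]: (a @ x - bi) * a for i in range(50)]
x_hat = random_reshuffling(grads, np.zeros(5), epochs=200,
                           step_size=lambda t: 1.0 / (t + 10))

Unlike \(\mathsf{SGD}\), the inner loop samples without replacement, so each component gradient is used exactly once per epoch; individual steps need not decrease \(f\), which is precisely the nondescent behavior that the paper's generalized KL framework is designed to handle.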