Minimax
Combinatorics
Bounded function
Mathematics
Regular polygon
Stationary point
Gradient descent
Concave function
Function (biology)
Convergence (economics)
Convex function
Limit (mathematics)
Minimax theorem
Mathematical optimization
Discrete mathematics
Computer science
Mathematical analysis
Geometry
Artificial intelligence
Evolutionary biology
Artificial neural network
Economics
Biology
Economic growth
Authors
Tianyi Lin, Chi Jin, Michael I. Jordan
Source
Journal: Cornell University - arXiv
Date: 2019-01-01
Citations: 110
Identifier
DOI: 10.48550/arxiv.1906.00331
Abstract
We consider nonconvex-concave minimax problems, $\min_{\mathbf{x}} \max_{\mathbf{y} \in \mathcal{Y}} f(\mathbf{x}, \mathbf{y})$, where $f$ is nonconvex in $\mathbf{x}$ but concave in $\mathbf{y}$ and $\mathcal{Y}$ is a convex and bounded set. One of the most popular algorithms for solving this problem is the celebrated gradient descent ascent (GDA) algorithm, which has been widely used in machine learning, control theory and economics. Despite the extensive convergence results for the convex-concave setting, GDA with equal stepsizes can converge to limit cycles or even diverge in a general setting. In this paper, we present the complexity results on two-time-scale GDA for solving nonconvex-concave minimax problems, showing that the algorithm can find a stationary point of the function $\Phi(\cdot) := \max_{\mathbf{y} \in \mathcal{Y}} f(\cdot, \mathbf{y})$ efficiently. To the best of our knowledge, this is the first nonasymptotic analysis for two-time-scale GDA in this setting, shedding light on its superior practical performance in training generative adversarial networks (GANs) and other real applications.
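The two-time-scale idea in the abstract is that the ascent player $\mathbf{y}$ uses a larger stepsize than the descent player $\mathbf{x}$, so $\mathbf{y}$ approximately tracks the inner maximizer and $\mathbf{x}$ effectively descends on $\Phi$. A minimal sketch of this scheme on a toy nonconvex-concave objective (the function, stepsizes, and iteration count below are illustrative choices, not the paper's experimental setup):

```python
import numpy as np

def two_time_scale_gda(grad_x, grad_y, project_y, x0, y0,
                       eta_x=0.01, eta_y=0.1, iters=2000):
    """Two-time-scale GDA: descend in x, ascend in y, with eta_y > eta_x."""
    x, y = float(x0), float(y0)
    for _ in range(iters):
        gx = grad_x(x, y)
        gy = grad_y(x, y)
        x = x - eta_x * gx              # slow gradient descent step on x
        y = project_y(y + eta_y * gy)   # fast projected gradient ascent step on y
    return x, y

# Toy objective: f(x, y) = x^4/4 - x^2/2 + x*y - y^2,
# nonconvex in x, strongly concave in y, with Y = [-1, 1].
grad_x = lambda x, y: x**3 - x + y      # df/dx
grad_y = lambda x, y: x - 2.0 * y       # df/dy
project_y = lambda y: float(np.clip(y, -1.0, 1.0))

x_star, y_star = two_time_scale_gda(grad_x, grad_y, project_y, x0=0.5, y0=0.0)
```

For this objective the inner maximizer is $y = x/2$, so $\Phi(x) = x^4/4 - x^2/4$ and the iterates approach the stationary point $x = 1/\sqrt{2}$ with $y$ tracking $x/2$, illustrating why the unequal stepsizes matter: with the ascent variable moving faster, the descent variable sees an approximately maximized inner problem.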