Keywords
Nonlinear systems; bounded functions; convergence rate; algorithms; mathematical optimization; applied mathematics; mathematical analysis
Authors
Jie Li, Shengbo Eben Li, Jingliang Duan, Yao Lyu, Wenjun Zou, Yang Guan, Yuming Yin
Identifier
DOI:10.1109/tac.2023.3266277
Abstract
Although policy evaluation error profoundly affects the direction of policy optimization and the convergence property, it is usually ignored in policy iteration methods. This work incorporates practical inexact policy evaluation into a simultaneous policy update paradigm to reach the Nash equilibrium of nonlinear zero-sum games. In the proposed algorithm, the restriction of precise policy evaluation is removed: a bounded evaluation error, characterized by the Hamiltonian, is tolerated without sacrificing convergence guarantees. By exploiting the Fréchet differential, the practical iterative process of the value function with estimation error is converted into Newton's method with variable steps, whose sizes are inversely proportional to the evaluation errors. Accordingly, we construct a monotone scalar sequence that shares the same Newton iteration with the value sequence and bounds the error of the value function, enjoying an exponential convergence rate. Numerical results show convergence on affine systems and the potential to cope with general nonlinear plants.
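The abstract's central device, Newton's method with variable step sizes that shrink as the evaluation error grows, can be illustrated on a scalar problem. The sketch below is not the paper's algorithm (which iterates on value functions of a zero-sum game); it only mimics the mechanism with a hypothetical damped Newton iteration, where `steps(k)` plays the role of the error-dependent step and `err(k)` is a simulated, exponentially decaying evaluation error.

```python
# Illustrative sketch only (hypothetical names, not the paper's method):
# a damped Newton iteration x <- x - alpha_k * f(x) / f'(x), where the
# step alpha_k is inversely related to a simulated evaluation error.

def damped_newton(f, df, x0, steps, n_iter=50):
    """Run Newton's method with a per-iteration step size steps(k) in (0, 1]."""
    x = x0
    for k in range(n_iter):
        alpha = steps(k)  # smaller step when the simulated error is larger
        x = x - alpha * f(x) / df(x)
    return x

# Solve f(x) = x^2 - 2 (root sqrt(2)), starting from x0 = 2.
f = lambda x: x * x - 2.0
df = lambda x: 2.0 * x
err = lambda k: 0.5 ** k                # simulated evaluation error, decays exponentially
steps = lambda k: 1.0 / (1.0 + err(k))  # step grows toward 1 as the error vanishes

root = damped_newton(f, df, 2.0, steps)
```

As the simulated error decays, the step approaches the full Newton step and the iterate converges to the root; with a persistent error the step stays damped, which is the scalar analogue of the bounded value-function error discussed in the abstract.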