Keywords
Mathematical economics, Computer science, Monotone polygon, Nash equilibrium, Stochastic game, Game theory, Sequence (biology), Limit (mathematics), Mathematics, Geometry, Genetics, Biology, Mathematical analysis
Authors
Benoît Duvocelle,Panayotis Mertikopoulos,Mathias Staudigl,Dries Vermeulen
Identifier
DOI: 10.1287/moor.2022.1283
Abstract
We examine the long-run behavior of multiagent online learning in games that evolve over time. Specifically, we focus on a wide class of policies based on mirror descent, and we show that the induced sequence of play (a) converges to a Nash equilibrium in time-varying games that stabilize in the long run to a strictly monotone limit, and (b) stays asymptotically close to the evolving equilibrium of the sequence of stage games (assuming they are strongly monotone). Our results apply to both gradient- and payoff-based feedback, that is, when players only get to observe the payoffs of their chosen actions. Funding: This research was partially supported by the European Cooperation in Science and Technology COST Action [Grant CA16228] “European Network for Game Theory” (GAMENET). P. Mertikopoulos is grateful for financial support by the French National Research Agency (ANR) in the framework of the “Investissements d’avenir” program [Grant ANR-15-IDEX-02], the LabEx PERSYVAL [Grant ANR-11-LABX-0025-01], MIAI@Grenoble Alpes [Grant ANR-19-P3IA-0003], and the ALIAS [Grant ANR-19-CE48-0018-01].
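The mirror-descent policies described in the abstract can be illustrated with a minimal sketch; this is not the paper's algorithm or its convergence setting, but an assumed toy instance: entropic mirror descent (exponential weights) with decreasing step sizes, run with gradient feedback in a time-varying two-player zero-sum matrix game whose stage games stabilize to matching pennies. The payoff perturbation, step-size schedule, and horizon are all illustrative choices; in this zero-sum example it is the time-averaged play, not the last iterate, that settles near the (1/2, 1/2) equilibrium.

```python
import numpy as np

def md_step(x, grad, eta):
    # One entropic mirror-descent (exponential-weights) step on the simplex.
    y = x * np.exp(eta * grad)
    return y / y.sum()

T = 20000
# Limit stage game: matching pennies, unique equilibrium (1/2, 1/2) for both players.
A_limit = np.array([[1.0, -1.0], [-1.0, 1.0]])

x = np.array([0.9, 0.1])  # player 1 mixed strategy (maximizer)
y = np.array([0.2, 0.8])  # player 2 mixed strategy (minimizer)
avg_x = np.zeros(2)
avg_y = np.zeros(2)

for t in range(1, T + 1):
    # Time-varying payoff matrix that stabilizes to A_limit as t grows
    # (an assumed O(1/t) perturbation, chosen only for illustration).
    A_t = A_limit + (1.0 / t) * np.array([[0.5, 0.0], [0.0, -0.5]])
    eta = 1.0 / np.sqrt(t)  # decreasing step size
    gx = A_t @ y            # payoff gradient for player 1
    gy = -A_t.T @ x         # payoff gradient for player 2
    x = md_step(x, gx, eta)
    y = md_step(y, gy, eta)
    avg_x += x
    avg_y += y

avg_x /= T
avg_y /= T
print(avg_x, avg_y)  # empirical (time-averaged) play concentrates near (1/2, 1/2)
```

The standard no-regret argument applies here: exponential weights with step size 1/sqrt(t) has O(sqrt(T)) regret, so in a zero-sum game the averaged strategies form an approximate equilibrium with gap shrinking like 1/sqrt(T).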