Reinforcement learning
Scalability
Computer science
Class (philosophy)
Nash equilibrium
Best response
Artificial intelligence
Rank (graph theory)
Metric (unit)
Mathematical optimization
Theoretical computer science
Machine learning
Mathematics
Combinatorics
Operations management
Database
Economics
Authors
Rui Yan,Xiaoming Duan,Zongying Shi,Yisheng Zhong,Jason R. Marden,Francesco Bullo
Identifier
DOI:10.1109/tac.2021.3085171
Abstract
Multiagent policy evaluation and seeking are long-standing challenges in developing theories for multiagent reinforcement learning (MARL), due to multidimensional learning goals, nonstationary environments, and scalability issues in the joint policy space. This article introduces two metrics grounded in a game-theoretic solution concept called sink equilibrium, for the evaluation, ranking, and computation of policies in multiagent learning. We adopt strict best response dynamics (SBRDs) to model selfish behaviors at a meta-level for MARL. Our approach can deal with dynamical cyclical behaviors (unlike approaches based on Nash equilibria and Elo ratings), and is more compatible with single-agent reinforcement learning than $\alpha$-rank, which relies on weakly better responses. We first consider settings where the difference between the largest and second largest equilibrium metric has a known lower bound. With this knowledge, we propose a class of perturbed SBRD with the following property: only policies with maximum metric are observed with nonzero probability for a broad class of stochastic games with finite memory. We then consider settings where the lower bound for the difference is unknown. For this setting, we propose a class of perturbed SBRD such that the metrics of the policies observed with nonzero probability differ from the optimal by any given tolerance. The proposed perturbed SBRD addresses the scalability issue and opponent-induced nonstationarity by fixing the strategies of others for the learning agent, and uses empirical game-theoretic analysis to estimate payoffs for each strategy profile obtained due to the perturbation.
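The abstract centers on sink equilibria of the strict-best-response graph over joint strategy profiles. The sketch below is a minimal illustration of that concept only, not the authors' MARL pipeline: assuming a small two-player normal-form game given by payoff matrices, it builds the strict-best-response graph and extracts its sink strongly connected components. The matching-pennies example at the end shows how a best-response cycle forms a single sink equilibrium, the kind of cyclical behavior that Nash- or Elo-based evaluation misses.

```python
# Minimal sketch: sink equilibria of a strict-best-response graph for a small
# normal-form game. Payoff matrices, profile enumeration, and the Tarjan SCC
# routine are illustrative assumptions, not the paper's implementation.
from itertools import product

import numpy as np


def strict_best_response_graph(payoffs):
    """Directed graph over joint strategy profiles.

    payoffs: list of arrays, payoffs[i][a_1, ..., a_n] is player i's payoff.
    An edge x -> y exists iff y differs from x in exactly one player's strategy
    and that unilateral deviation is a strict best response (the deviating
    player's current action is not already a best response).
    """
    n_players, shape = len(payoffs), payoffs[0].shape
    profiles = list(product(*[range(k) for k in shape]))
    edges = {p: [] for p in profiles}
    for p in profiles:
        for i in range(n_players):
            alts = [p[:i] + (a,) + p[i + 1:] for a in range(shape[i])]
            vals = [payoffs[i][q] for q in alts]
            best = max(vals)
            if payoffs[i][p] < best:  # strict improvement is possible
                edges[p].extend(q for q, v in zip(alts, vals) if v == best)
    return profiles, edges


def sink_equilibria(profiles, edges):
    """Sink strongly connected components of the strict-best-response graph."""
    index, low, on_stack, stack, sccs, counter = {}, {}, set(), [], [], [0]

    def tarjan(v):  # recursive Tarjan's algorithm; fine for small games
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in edges[v]:
            if w not in index:
                tarjan(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:
            comp = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp.append(w)
                if w == v:
                    break
            sccs.append(comp)

    for v in profiles:
        if v not in index:
            tarjan(v)

    # a sink SCC has no edge leaving it
    return [comp for comp in sccs
            if all(w in set(comp) for v in comp for w in edges[v])]


if __name__ == "__main__":
    # Matching pennies: no pure Nash equilibrium, but the four profiles form a
    # best-response cycle, i.e. a single sink equilibrium.
    A = np.array([[1, -1], [-1, 1]])
    profiles, edges = strict_best_response_graph([A, -A])
    for sink in sink_equilibria(profiles, edges):
        print(sorted(sink))
```

On this zero-sum example the routine prints one component containing all four pure profiles, whereas a Nash-based ranking would report no pure equilibrium at all; the paper's metrics and perturbed SBRD are defined on top of such sink components for general stochastic games.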