Reinforcement learning
Computer science
Markov decision process
Incentive
Exploitation
Stateless protocol
Mechanism design
Artificial intelligence
Mathematical optimization
Markov process
State (computer science)
Computer security
Microeconomics
Algorithm
Economics
Mathematics
Statistics
Authors
Max Simchowitz, Aleksandrs Slivkins
Source
Journal: Operations Research
[Institute for Operations Research and the Management Sciences]
Date: 2023-08-18
Volume/Issue: 72 (3): 983-998
Cited by: 11
Identifier
DOI: 10.1287/opre.2022.0495
Abstract
How do you incentivize self-interested agents to explore when they prefer to exploit? We consider complex exploration problems, where each agent faces the same (but unknown) Markov decision process (MDP). In contrast with traditional formulations of reinforcement learning (RL), agents control the choice of policies, whereas an algorithm can only issue recommendations. However, the algorithm controls the flow of information, and can incentivize the agents to explore via information asymmetry. We design an algorithm which explores all reachable states in the MDP. We achieve provable guarantees similar to those for incentivizing exploration in static, stateless exploration problems studied previously. From the RL perspective, we design RL mechanisms, that is, RL algorithms that interact with self-interested agents and are compatible with their incentives. This is the first paper on RL mechanisms, that is, the first paper on any scenario that combines RL and incentives, to the best of our knowledge.
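The following is a minimal, illustrative Python sketch of the interaction protocol described in the abstract: a sequence of self-interested agents each face the same unknown MDP, the principal can only issue policy recommendations, and information asymmetry (the principal privately observes all past trajectories while agents see only their own recommendation) is what allows exploration of reachable states. The toy chain MDP, the agents' prior, the recommendation probability, and the myopic compliance rule are assumptions made for illustration; this is not the paper's algorithm or its incentive-compatibility construction.

```python
import random

class ChainMDP:
    """Toy chain MDP: states 0..n-1, action 1 moves one state right at a small
    cost; the last state pays a reward that agents do not know a priori."""
    def __init__(self, n=5, step_cost=0.05, goal_reward=1.0):
        self.n, self.step_cost, self.goal_reward = n, step_cost, goal_reward

    def rollout(self, policy):
        """Run a deterministic policy (dict state -> action); return the set of
        visited states and the total reward."""
        s, total, visited = 0, 0.0, {0}
        for _ in range(self.n):
            if policy.get(s, 0) == 1 and s < self.n - 1:
                s += 1
                total -= self.step_cost
                visited.add(s)
        if s == self.n - 1:
            total += self.goal_reward
        return visited, total

def agent_follows(recommended, safe, prior_goal_mean, step_cost, n):
    """Myopic agent: always accepts the safe default; accepts the exploratory
    recommendation only if, under its own prior (it has no access to the
    principal's private data), the expected value is at least that of the
    safe policy. This is a simplified stand-in for the incentive analysis."""
    if recommended is safe:
        return True
    expected = prior_goal_mean - step_cost * (n - 1)
    return expected >= 0.0

def run(num_agents=20, prior_goal_mean=0.3, explore_prob=0.2, seed=0):
    random.seed(seed)
    mdp = ChainMDP()
    safe = {}                                    # stay at state 0, value 0
    explore = {s: 1 for s in range(mdp.n)}       # walk to the end of the chain
    reached = set()                              # principal's private knowledge
    for _ in range(num_agents):
        # Illustrative recommendation rule: recommend the exploratory policy
        # to a random subset of agents, the safe policy to the rest.
        rec = explore if random.random() < explore_prob else safe
        if agent_follows(rec, safe, prior_goal_mean, mdp.step_cost, mdp.n):
            visited, _ = mdp.rollout(rec)
            reached |= visited                   # only the principal sees trajectories
    print(f"States reached in the principal's data: {sorted(reached)}")

if __name__ == "__main__":
    run()
```

In this sketch the principal's data grows to cover all reachable states only because compliant agents occasionally execute the exploratory policy; the paper's contribution, by contrast, is an RL mechanism with provable guarantees that such recommendations remain compatible with the agents' incentives.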