Masking (illustration)
Action (physics)
Reinforcement learning
Computer science
Set (abstract data type)
Process (computing)
Artificial intelligence
Algorithm
Machine learning
Art
Physics
Quantum mechanics
Visual arts
Programming language
Operating system
Authors
Shengyi Huang, Santiago Ontañón
Source
Journal: Proceedings of the ... International Florida Artificial Intelligence Research Society Conference
Date: 2022-05-04
Volume: 35
Citations: 238
Identifier
DOI: 10.32473/flairs.v35i.130584
Abstract
In recent years, Deep Reinforcement Learning (DRL) algorithms have achieved state-of-the-art performance in many challenging strategy games. Because these games have complicated rules, an action sampled from the full discrete action distribution predicted by the learned policy is likely to be invalid according to the game rules (e.g., walking into a wall). The usual approach to deal with this problem in policy gradient algorithms is to “mask out” invalid actions and just sample from the set of valid actions. The implications of this process, however, remain under-investigated. In this paper, we 1) show theoretical justification for such a practice, 2) empirically demonstrate its importance as the space of invalid actions grows, and 3) provide further insights by evaluating different action masking regimes, such as removing masking after an agent has been trained using masking.
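The "mask out" operation described in the abstract is commonly implemented by setting the logits of invalid actions to a large negative number before the softmax, so that sampling draws only from valid actions. The sketch below illustrates that idea in plain NumPy; the function name and the specific constant are illustrative choices, not taken from the paper.

```python
import numpy as np

def masked_softmax(logits, valid_mask):
    """Assign (numerically) zero probability to invalid actions by
    replacing their logits with a large negative value before softmax."""
    masked = np.where(valid_mask, logits, -1e8)
    z = masked - masked.max()   # shift for numerical stability
    p = np.exp(z)
    return p / p.sum()

# Four discrete actions; actions 1 and 3 are invalid under the game rules.
logits = np.array([1.0, 2.0, 0.5, 3.0])
valid = np.array([True, False, True, False])
probs = masked_softmax(logits, valid)
# probs[1] and probs[3] are ~0; sampling from probs never picks them.
```

Because the invalid actions receive essentially zero probability, the gradient of the log-probability with respect to their logits also vanishes, which is the kind of effect the paper analyzes theoretically.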