Reinforcement learning
Computer science
Set (abstract data type)
Temporal difference learning
Theory (learning stability)
Artificial intelligence
Function (biology)
Sampling (signal processing)
Machine learning
Mathematical optimization
Mathematics
Computer vision
Evolutionary biology
Biology
Filter (signal processing)
Programming language
Authors
Baturay Sağlam,Furkan B. Mutlu,Dogan C. Cicek,Süleyman S. Kozat
Abstract
A widely studied deep reinforcement learning (RL) technique known as Prioritized Experience Replay (PER) allows agents to learn from transitions sampled with non-uniform probability proportional to their temporal-difference (TD) error. Although PER has been shown to be one of the most crucial components for the overall performance of deep RL methods in discrete action domains, many empirical studies indicate that it considerably underperforms when applied to off-policy actor-critic algorithms. We theoretically show that actor networks cannot be effectively trained with transitions that have large TD errors. As a result, the approximate policy gradient computed under the Q-network diverges from the actual gradient computed under the optimal Q-function. Motivated by this, we introduce a novel experience replay sampling framework for actor-critic methods, which also accounts for stability issues and recent findings behind the poor empirical performance of PER. The introduced algorithm suggests a new branch of improvements to PER and schedules effective and efficient training for both the actor and critic networks. An extensive set of experiments verifies our theoretical findings, showing that our method outperforms competing approaches and achieves state-of-the-art results over the standard off-policy actor-critic algorithms.
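For context, the following is a minimal sketch of the classic proportional PER scheme the abstract refers to, in which transitions are drawn with probability proportional to their TD error. It is not the authors' proposed framework; the class name and the hyperparameters (alpha, beta, eps) are conventional PER names assumed here for illustration.

# Minimal illustrative sketch of proportional prioritized replay (not the
# authors' method). Priorities are |TD error| + eps, sampling probability is
# priority**alpha, and importance-sampling weights with exponent beta correct
# the bias introduced by non-uniform sampling.
import numpy as np

class ProportionalReplayBuffer:
    def __init__(self, capacity, alpha=0.6, eps=1e-6):
        self.capacity = capacity
        self.alpha = alpha                      # how strongly TD error shapes the sampling distribution
        self.eps = eps                          # keeps every priority strictly positive
        self.storage = []
        self.priorities = np.zeros(capacity, dtype=np.float64)
        self.pos = 0

    def add(self, transition):
        # New transitions receive the current maximum priority so they are replayed at least once.
        max_prio = self.priorities.max() if self.storage else 1.0
        if len(self.storage) < self.capacity:
            self.storage.append(transition)
        else:
            self.storage[self.pos] = transition
        self.priorities[self.pos] = max_prio
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        prios = self.priorities[:len(self.storage)]
        probs = prios ** self.alpha
        probs /= probs.sum()
        idx = np.random.choice(len(self.storage), batch_size, p=probs)
        # Importance-sampling weights, normalized by their maximum for stability.
        weights = (len(self.storage) * probs[idx]) ** (-beta)
        weights /= weights.max()
        batch = [self.storage[i] for i in idx]
        return batch, idx, weights

    def update_priorities(self, idx, td_errors):
        # After a critic update, priorities are refreshed with the new absolute TD errors.
        self.priorities[idx] = np.abs(td_errors) + self.eps

Note that this sketch samples one batch for both networks; the abstract's theoretical result implies that such high-TD-error batches are poorly suited to actor updates, which is why the proposed framework schedules training for the actor and critic networks differently.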