Authors
Zhi Wang, Han-Xiong Li, Chunlin Chen
Identifier
DOI: 10.1109/tnnls.2019.2927320
Abstract
In this paper, a systematic incremental learning method is presented for reinforcement learning in continuous spaces where the learning environment is dynamic. The goal is to incrementally adjust the policy previously learned in the original environment to a new one whenever the environment changes. To improve adaptability to the ever-changing environment, we propose a two-step solution incorporated into the incremental learning procedure: policy relaxation and importance weighting. First, the behavior policy is relaxed to a random one in the initial learning episodes to encourage proper exploration in the new environment. This alleviates the conflict between the new information and the existing knowledge, yielding better adaptation in the long term. Second, we observe that episodes receiving higher returns are more in line with the new environment and hence contain more new information. During parameter updating, we therefore assign higher importance weights to the learning episodes that contain more new information, encouraging the previous optimal policy to adapt faster to one that fits the new environment. Empirical studies on continuous control tasks with varying configurations verify that the proposed method adapts to various dynamic environments significantly faster than the baselines.
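The two steps described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the relaxation schedule, the softmax-over-returns weighting, and all function names and parameters (`relax_episodes`, `eps`, `temperature`) are illustrative assumptions.

```python
import math
import random

def relaxed_action(policy_action, n_actions, episode,
                   relax_episodes=10, eps=0.9):
    """Policy relaxation: in the first few episodes after an environment
    change, act (nearly) at random to encourage exploration.
    The schedule here is a hypothetical choice, not the paper's."""
    if episode < relax_episodes and random.random() < eps:
        return random.randrange(n_actions)
    return policy_action

def importance_weights(returns, temperature=1.0):
    """Importance weighting: episodes with higher returns are assumed to
    carry more information about the new environment, so they receive
    higher weights in the parameter update (softmax over returns,
    shifted by the max for numerical stability)."""
    m = max(returns)
    exps = [math.exp((r - m) / temperature) for r in returns]
    z = sum(exps)
    return [e / z for e in exps]
```

In a policy-gradient setting, each episode's gradient contribution would then be scaled by its weight before the parameter update, so high-return episodes dominate the adaptation.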