Authors
Yongxin Kang, Enmin Zhao, Yifan Zang, Kai Li, Junliang Xing
Source
Journal: Communications in Computer and Information Science
Date: 2023-01-01
Pages: 189-201
Identifier
DOI: 10.1007/978-981-99-1639-9_16
Abstract
Reinforcement learning in sparse reward environments is challenging and has recently received increasing attention, with dozens of new algorithms proposed every year. Despite promising results demonstrated in various sparse reward environments, this domain lacks a unified definition of a sparse reward environment and an experimentally fair way to compare existing algorithms. These issues significantly affect the in-depth analysis of the underlying problem and hinder further studies. This paper proposes a benchmark to unify the selection of environments and the comparison of algorithms. We first define sparsity to describe the proportion of rewarded states in the entire state space and select environments by this sparsity. Inspired by the sparsity concept, we categorize the existing algorithms into two classes. To provide a fair comparison of different algorithms, we propose a new metric along with a standard protocol for performance evaluation. Primary experimental evaluations of seven algorithms in ten environments provide a starter user guide for the proposed benchmark. We hope the proposed benchmark will promote the research of reinforcement learning algorithms in sparse reward environments. The source code of this work is published at https://github.com/simayuhe/ICONIP_Benchmark.git.
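The abstract defines sparsity as the proportion of rewarded states in the entire state space. As a rough illustration of that idea (not the paper's actual implementation, which lives in the linked repository), one can estimate sparsity empirically as the fraction of transitions under a random policy that carry a nonzero reward. The `ChainEnv` toy environment and `estimate_sparsity` helper below are hypothetical names introduced for this sketch:

```python
import random

class ChainEnv:
    """Toy N-state chain with a sparse reward: 1 only on reaching
    the final state, 0 everywhere else."""
    def __init__(self, n=20):
        self.n = n
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self.pos

    def sample_action(self):
        # Random policy: step left (-1) or right (+1).
        return random.choice([-1, 1])

    def step(self, action):
        self.pos = max(0, min(self.n - 1, self.pos + action))
        done = self.pos == self.n - 1
        reward = 1.0 if done else 0.0
        return self.pos, reward, done

def estimate_sparsity(env, num_steps=50_000, seed=0):
    """Monte Carlo estimate of sparsity: the fraction of visited
    transitions whose reward is nonzero."""
    random.seed(seed)
    rewarded = 0
    env.reset()
    for _ in range(num_steps):
        _, reward, done = env.step(env.sample_action())
        if reward != 0:
            rewarded += 1
        if done:
            env.reset()
    return rewarded / num_steps
```

On this 20-state chain, a random walk reaches the rewarded terminal state only rarely, so the estimated sparsity is well below 1%, which is the regime the benchmark targets.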