ERL-Re$^2$: Efficient Evolutionary Reinforcement Learning with Shared State Representation and Individual Policy Representation

Authors
Pengyi Li, Hongyao Tang, Jianye Hao, Yufeng Zheng, Xi'an Fu, Zhaopeng Meng
Source
Journal: Cornell University - arXiv
Identifier
DOI: 10.48550/arxiv.2210.17375
Abstract

Deep Reinforcement Learning (Deep RL) and Evolutionary Algorithms (EAs) are two major paradigms of policy optimization with distinct learning principles, i.e., gradient-based vs. gradient-free. An appealing research direction is integrating Deep RL and EA to devise new methods by fusing their complementary advantages. However, existing works on combining Deep RL and EA have two common drawbacks: 1) the RL agent and EA agents learn their policies individually, neglecting efficient sharing of useful common knowledge; 2) parameter-level policy optimization guarantees no semantic level of behavior evolution for the EA side. In this paper, we propose Evolutionary Reinforcement Learning with Two-scale State Representation and Policy Representation (ERL-Re$^2$), a novel solution to the aforementioned two drawbacks. The key idea of ERL-Re$^2$ is two-scale representation: all EA and RL policies share the same nonlinear state representation while maintaining individual linear policy representations. The state representation conveys expressive common features of the environment learned collectively by all the agents; the linear policy representation provides a favorable space for efficient policy optimization, where novel behavior-level crossover and mutation operations can be performed. Moreover, the linear policy representation allows convenient generalization of policy fitness with the help of the Policy-extended Value Function Approximator (PeVFA), further improving the sample efficiency of fitness estimation. Experiments on a range of continuous control tasks show that ERL-Re$^2$ consistently outperforms advanced baselines and achieves state-of-the-art performance. Our code is available at https://github.com/yeshenpy/ERL-Re2.
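To make the two-scale idea concrete, the following is a minimal sketch (not the authors' implementation): a shared nonlinear state encoder `shared_repr` that all agents would use, individual linear policy heads, and crossover/mutation applied directly to the linear parameters. The specific choice of swapping whole per-action-dimension rows during crossover is an illustrative assumption standing in for the paper's behavior-level operators; in ERL-Re$^2$ the shared encoder is trained jointly by all agents, whereas here it is frozen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, REPR_DIM, ACTION_DIM = 8, 16, 2

# Shared nonlinear state representation f(s): a fixed two-layer MLP.
# (In ERL-Re^2 this encoder is learned collectively; frozen here for brevity.)
W1 = rng.standard_normal((REPR_DIM, STATE_DIM)) * 0.3
W2 = rng.standard_normal((REPR_DIM, REPR_DIM)) * 0.3

def shared_repr(s):
    return np.tanh(W2 @ np.tanh(W1 @ s))

# Each agent keeps only an individual *linear* policy head: a = W z + b.
def make_policy():
    return {"W": rng.standard_normal((ACTION_DIM, REPR_DIM)) * 0.1,
            "b": np.zeros(ACTION_DIM)}

def act(policy, s):
    z = shared_repr(s)
    return policy["W"] @ z + policy["b"]

# Crossover sketch: the child inherits each action dimension's linear
# parameters wholesale from one parent, so its behavior on that dimension
# matches that parent exactly (a stand-in for behavior-level crossover).
def crossover(p1, p2):
    mask = rng.random(ACTION_DIM) < 0.5
    return {"W": np.where(mask[:, None], p1["W"], p2["W"]),
            "b": np.where(mask, p1["b"], p2["b"])}

# Mutation sketch: small Gaussian perturbation of the linear parameters.
def mutate(policy, scale=0.02):
    return {"W": policy["W"] + scale * rng.standard_normal(policy["W"].shape),
            "b": policy["b"] + scale * rng.standard_normal(policy["b"].shape)}

parents = [make_policy(), make_policy()]
child = mutate(crossover(*parents))
s = rng.standard_normal(STATE_DIM)
print(act(child, s).shape)  # (2,)
```

Because every policy is linear in the shared representation, evolutionary operators manipulate a small, well-structured parameter space rather than full network weights, which is what enables the behavior-level interpretation above.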
