Reinforcement learning
Artificial intelligence
Robotics
Computer science
Scope (computer science)
Kinematics
Machine learning
Transfer learning
Control engineering
Engineering
Classical mechanics
Physics
Programming language
Authors
Wei Zhu, Xian Guo, Dai Owaki, Kyo Kutsuzawa, Mitsuhiro Hayashibe
Identifier
DOI: 10.1109/TNNLS.2021.3112718
Abstract
The state-of-the-art reinforcement learning (RL) techniques have made innumerable advancements in robot control, especially in combination with deep neural networks (DNNs), known as deep reinforcement learning (DRL). In this article, instead of reviewing the theoretical studies on RL, which were almost fully completed several decades ago, we summarize some state-of-the-art techniques added to commonly used RL frameworks for robot control. We mainly review bioinspired robots (BIRs) because they can learn to locomote or produce natural behaviors similar to animals and humans. With the ultimate goal of practical applications in the real world, we further narrow our review scope to techniques that could aid in sim-to-real transfer. We categorized these techniques into four groups: 1) use of accurate simulators; 2) use of kinematic and dynamic models; 3) use of hierarchical and distributed controllers; and 4) use of demonstrations. The purposes of these four groups of techniques are to supply general and accurate environments for RL training, improve sampling efficiency, divide and conquer complex motion tasks and redundant robot structures, and acquire natural skills. We found that, by synthetically using these techniques, it is possible to deploy RL on physical BIRs in practice.
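The abstract's first technique group, training in accurate (and deliberately varied) simulators so that the learned policy survives the sim-to-real gap, is often realized as a thin domain-randomization wrapper around the training environment. The sketch below is a minimal, hypothetical Python/NumPy illustration under that assumption; the PointMassEnv toy dynamics, its parameter ranges, and the trivial linear policy are placeholders invented for illustration and are not taken from the reviewed paper.

```python
import numpy as np

class PointMassEnv:
    """Toy 1-D point mass; a stand-in for a full robot simulator."""
    def __init__(self, mass=1.0, friction=0.1, dt=0.05):
        self.mass, self.friction, self.dt = mass, friction, dt
        self.reset()

    def reset(self):
        self.pos = np.random.uniform(-1.0, 1.0)
        self.vel = 0.0
        return np.array([self.pos, self.vel])

    def step(self, force):
        accel = (force - self.friction * self.vel) / self.mass
        self.vel += accel * self.dt
        self.pos += self.vel * self.dt
        reward = -abs(self.pos)            # drive the mass toward the origin
        return np.array([self.pos, self.vel]), reward

class DomainRandomizationWrapper:
    """Resample dynamics parameters at every reset so the policy never
    overfits to a single (inevitably inaccurate) simulator instance."""
    def __init__(self, env, mass_range=(0.5, 2.0), friction_range=(0.0, 0.3)):
        self.env = env
        self.mass_range = mass_range
        self.friction_range = friction_range

    def reset(self):
        self.env.mass = np.random.uniform(*self.mass_range)
        self.env.friction = np.random.uniform(*self.friction_range)
        return self.env.reset()

    def step(self, action):
        return self.env.step(action)

if __name__ == "__main__":
    env = DomainRandomizationWrapper(PointMassEnv())
    gain = 2.0                             # trivial fixed "policy" for illustration
    for episode in range(3):
        obs = env.reset()
        total = 0.0
        for _ in range(100):
            action = -gain * obs[0] - 0.5 * obs[1]
            obs, reward = env.step(action)
            total += reward
        print(f"episode {episode}: return {total:.2f} "
              f"(mass={env.env.mass:.2f}, friction={env.env.friction:.2f})")
```

In an actual RL pipeline, the fixed gain above would be replaced by a trainable (e.g., DNN) policy optimized across the randomized episodes; the wrapper pattern itself is what lets one simulator expose a distribution of plausible real-world dynamics.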