Concepts
Central pattern generator
Feedforward
Humanoid robot
Computer science
Imitation
Artificial neural network
Sensory system
Adaptability
Gait
Artificial intelligence
Robot
Rhythm
Control engineering
Engineering
Neuroscience
Psychology
Physical medicine and rehabilitation
Biology
Philosophy
Ecology
Medicine
Aesthetics
Authors
Guanda Li, Auke Jan Ijspeert, Mitsuhiro Hayashibe
Source
Journal: IEEE Robotics and Automation Letters
Date: 2024-04-15
Volume/Issue: 9 (6): 5190-5197
Citations: 16
Identifier
DOI: 10.1109/lra.2024.3388842
Abstract
Humans have many redundancies in their bodies and can make effective use of them to adapt to changes in the environment while walking. They can also vary their walking speed over a wide range. Human-like walking in simulation or on robots can be achieved through imitation learning; however, the resulting walking speed is typically limited to a range similar to that of the examples used for imitation. Achieving efficient and adaptable locomotion controllers across the full range from walking to running is quite challenging. We propose a novel approach named adaptive imitated central pattern generators (AI-CPG) that combines central pattern generators (CPGs) and deep reinforcement learning (DRL) to enhance humanoid locomotion. Our method trains a CPG-like controller through imitation learning to generate rhythmic feedforward activity patterns. DRL is not used for CPG parameter tuning; instead, it forms a reflex neural network that adjusts the feedforward patterns based on sensory feedback, enabling stable body balancing and adaptation to changes in the environment or target velocity. Experiments with a 28-degree-of-freedom humanoid in a simulated environment demonstrated that our approach outperformed existing methods in adaptability, balancing ability, and energy efficiency, even on uneven surfaces. This study contributes to the development of versatile humanoid locomotion in diverse environments.
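As a rough illustration of the control split described in the abstract, the Python sketch below pairs a feedforward, CPG-like rhythm generator with a sensory-feedback reflex network whose correction is added to the rhythmic pattern. This is a minimal sketch based only on the abstract, not the authors' implementation: the network sizes, the phase-oscillator input encoding, the observation dimension, and the names (FeedforwardCPG, ReflexNetwork, control_step) are all illustrative assumptions.

    # Minimal sketch (not the authors' code): feedforward CPG-like rhythm
    # plus a feedback "reflex" correction, mirroring the AI-CPG split
    # described in the abstract. All sizes and names are assumptions.
    import math
    import torch
    import torch.nn as nn

    class FeedforwardCPG(nn.Module):
        """Maps an oscillator phase and a target speed to rhythmic joint
        activity; in the paper this part is trained by imitation learning."""
        def __init__(self, n_joints: int = 28, hidden: int = 64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(3, hidden),   # inputs: sin(phase), cos(phase), speed
                nn.Tanh(),
                nn.Linear(hidden, n_joints),
            )

        def forward(self, phase: torch.Tensor, speed: torch.Tensor) -> torch.Tensor:
            x = torch.cat([torch.sin(phase), torch.cos(phase), speed], dim=-1)
            return self.net(x)

    class ReflexNetwork(nn.Module):
        """Maps sensory feedback to additive corrections of the feedforward
        pattern; in the paper this part is trained with DRL."""
        def __init__(self, obs_dim: int = 60, n_joints: int = 28, hidden: int = 128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(obs_dim, hidden),
                nn.Tanh(),
                nn.Linear(hidden, n_joints),
            )

        def forward(self, obs: torch.Tensor) -> torch.Tensor:
            return self.net(obs)

    def control_step(cpg, reflex, phase, speed, obs, dt=0.01, base_freq=1.4):
        """One control tick: rhythmic feedforward output plus feedback correction."""
        feedforward = cpg(phase, speed)
        correction = reflex(obs)
        action = feedforward + correction                     # joint targets
        phase = (phase + 2 * math.pi * base_freq * dt) % (2 * math.pi)
        return action, phase

    if __name__ == "__main__":
        cpg, reflex = FeedforwardCPG(), ReflexNetwork()
        phase = torch.zeros(1, 1)
        speed = torch.full((1, 1), 1.2)      # target walking speed [m/s]
        obs = torch.zeros(1, 60)             # placeholder sensory reading
        action, phase = control_step(cpg, reflex, phase, speed, obs)
        print(action.shape)                  # torch.Size([1, 28])

In this reading of the abstract, imitation learning fits only the feedforward module to reference motion, while DRL shapes the reflex module so that the combined controller stays balanced under disturbances and speed changes; how the two are actually coupled and trained is detailed in the paper itself.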