Keywords
Reinforcement learning, Task (project management), Computer science, Action (physics), Robot, Artificial intelligence, Robot learning, Human-computer interaction, Semantics (computer science), Parameterized complexity, Machine learning, Mobile robot, Programming language, Engineering, Algorithm, Physics, Systems engineering, Quantum mechanics
Authors
Hao Wang, Hao Zhang, Lin Li, Zhen Kan, Yongduan Song
Identifier
DOI: 10.1109/tcyb.2023.3298195
Abstract
It is an interesting open problem to enable robots to efficiently and effectively learn long-horizon manipulation skills. Motivated to augment robot learning via more effective exploration, this work develops task-driven reinforcement learning with action primitives (TRAPs), a new manipulation skill learning framework that augments standard reinforcement learning algorithms with formal methods and a parameterized action space (PAS). In particular, TRAPs uses linear temporal logic (LTL) to specify complex manipulation skills. LTL progression, a semantics-preserving rewriting operation, is then used to decompose the training task at an abstract level, inform the robot of its current task progress, and guide it via reward functions. The PAS, a predefined library of heterogeneous action primitives, further improves the efficiency of robot exploration. We highlight that TRAPs augments the learning of manipulation skills in both learning efficiency and effectiveness (i.e., satisfying task constraints). Extensive empirical studies demonstrate that TRAPs outperforms most existing methods.
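The central mechanism described in the abstract, progressing an LTL task formula after each environment step and turning the result into a reward signal, can be illustrated with a small sketch. The Python code below is a minimal, hypothetical implementation for a fragment of LTL over atomic propositions; the tuple-based formula encoding, the proposition names (`grasped`, `placed`), and the shaping constants are illustrative assumptions and do not reproduce the paper's actual implementation.

```python
# A minimal sketch of LTL progression used as a task monitor and reward signal.
# Formulas are nested tuples over atomic propositions (an assumed encoding).

def _and(a, b):
    """Conjunction with on-the-fly simplification."""
    if a is False or b is False:
        return False
    if a is True:
        return b
    if b is True:
        return a
    return ("and", a, b)

def _or(a, b):
    """Disjunction with on-the-fly simplification."""
    if a is True or b is True:
        return True
    if a is False:
        return b
    if b is False:
        return a
    return ("or", a, b)

def progress(phi, labels):
    """Semantics-preserving rewrite of phi against the propositions true this step."""
    if phi in (True, False):
        return phi
    op = phi[0]
    if op == "prop":                 # atomic proposition, e.g. ("prop", "grasped")
        return phi[1] in labels
    if op == "and":
        return _and(progress(phi[1], labels), progress(phi[2], labels))
    if op == "or":
        return _or(progress(phi[1], labels), progress(phi[2], labels))
    if op == "next":                 # X phi: obligation moves to the next step
        return phi[1]
    if op == "until":                # phi1 U phi2
        return _or(progress(phi[2], labels),
                   _and(progress(phi[1], labels), phi))
    if op == "eventually":           # F phi
        return _or(progress(phi[1], labels), phi)
    if op == "always":               # G phi
        return _and(progress(phi[1], labels), phi)
    raise ValueError(f"unknown operator: {op!r}")

def ltl_step(phi, labels):
    """Progress the task formula once and emit a sparse reward for the RL agent."""
    new_phi = progress(phi, labels)
    if new_phi is True:
        return new_phi, 1.0          # task specification satisfied
    if new_phi is False:
        return new_phi, -1.0         # task specification violated
    if new_phi != phi:
        return new_phi, 0.1          # sub-goal progress (hypothetical shaping bonus)
    return new_phi, 0.0

if __name__ == "__main__":
    # "Eventually grasp the object, and after that eventually place it."
    task = ("eventually", ("and", ("prop", "grasped"),
                           ("eventually", ("prop", "placed"))))
    phi, r = ltl_step(task, {"grasped"})   # grasp sub-goal met -> formula rewritten
    print(r)                               # 0.1
    phi, r = ltl_step(phi, {"placed"})     # remaining obligation met -> True
    print(r)                               # 1.0
```

In the full framework, the agent acting against this reward would additionally select from a predefined library of heterogeneous, parameterized action primitives (the PAS) rather than low-level controls, which is what the abstract credits with improving exploration efficiency.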