Pose
Artificial intelligence
3D pose estimation
Computer science
Point cloud
Articulated human pose estimation
Computer vision
Reinforcement learning
Robustness (evolution)
Object (grammar)
Robotics
Kinematics
Cognitive neuroscience of visual object recognition
Machine learning
Robot
Biochemistry
Chemistry
Physics
Classical mechanics
Gene
Authors
Liu Liu, Jianming Du, Hao Wu, Xun Yang, Zhenguang Liu, Richang Hong, Meng Wang
Identifier
DOI: 10.1145/3581783.3611852
Abstract
Human life is populated with articulated objects. Current category-level articulated object 9D pose estimation (ArtOPE) methods typically face the challenges of requiring a shared object representation, kinematics-agnostic pose modeling, and self-occlusion. In this paper, we propose a novel framework, Articulated object 9D Pose Estimation via Reinforcement Learning (ArtPERL), which formulates category-level ArtOPE as a reinforcement learning problem. Given a point cloud or RGB-D image as input, ArtPERL first retrieves a part-sensitive articulated object as the reference point cloud, and then introduces a joint-centric pose modeling strategy that estimates the 9D pose by fitting joint states via reinforced agent training. Finally, we further propose a pose optimization that refines the predicted 9D pose under kinematic constraints. We evaluate ArtPERL on datasets ranging from synthetic point clouds to real-world multi-hinged objects. Experiments demonstrate the superior performance and robustness of ArtPERL. Our work provides a new perspective on category-level articulated object 9D pose estimation and has potential applications in many fields, including robotics, augmented reality, and autonomous driving.
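The joint-centric formulation described in the abstract can be pictured with a minimal sketch. The code below is not the authors' implementation: JointFitEnv, chamfer, rotate_about_z, the single z-axis revolute joint, and the greedy stand-in for a trained policy are all illustrative assumptions. It only shows the general idea of treating joint-state fitting as an episodic process in which an agent adjusts a joint state and is rewarded for reducing the misalignment between an articulated reference point cloud and the observed point cloud; ArtPERL's actual observation space, agent, reward design, and kinematics-aware 9D pose optimization are defined in the paper.

```python
# Minimal sketch (assumptions noted above), not the authors' code.
import numpy as np

def chamfer(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Chamfer distance between two (N, 3) point sets."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return float(d.min(axis=1).mean() + d.min(axis=0).mean())

def rotate_about_z(points: np.ndarray, angle: float) -> np.ndarray:
    """Articulate the reference part about a z-axis revolute joint at the origin."""
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return points @ R.T

class JointFitEnv:
    """Toy single-joint environment: state = current angle estimate, action = increment."""
    def __init__(self, reference: np.ndarray, observed: np.ndarray):
        self.reference, self.observed = reference, observed
        self.angle = 0.0

    def step(self, delta: float):
        prev_err = chamfer(rotate_about_z(self.reference, self.angle), self.observed)
        self.angle += delta
        err = chamfer(rotate_about_z(self.reference, self.angle), self.observed)
        # Reward the agent for reducing the alignment error of the articulated part.
        return self.angle, prev_err - err, err < 1e-3

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.normal(size=(256, 3))
    observed = rotate_about_z(reference, 0.7)   # unknown joint state to recover (0.7 rad)
    env = JointFitEnv(reference, observed)
    angle = 0.0
    for _ in range(40):
        # Greedy stand-in for a trained policy: pick the increment with the lowest error.
        candidates = [-0.1, -0.02, 0.02, 0.1]
        errors = [chamfer(rotate_about_z(reference, env.angle + d), observed)
                  for d in candidates]
        angle, reward, done = env.step(candidates[int(np.argmin(errors))])
        if done:
            break
    print(f"estimated joint angle: {angle:.3f} rad (ground truth 0.7)")
```

In this toy setting the greedy loop recovers the joint angle; in the paper's setting the policy is learned, the object has multiple joints, and the fitted joint states feed a subsequent 9D pose refinement.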