Authors
Peiyuan Zhi,Peiyang Li,Jianqin Yin,Baoxiong Jia,Siyuan Huang
Source
Journal: Cornell University - arXiv
Date: 2025-05-27
Identifier
DOI:10.48550/arxiv.2505.20829
Abstract
Robotic loco-manipulation tasks often involve contact-rich interactions with the environment, requiring the joint modeling of contact force and robot position. However, recent visuomotor policies often focus solely on learning position or force control, overlooking their co-learning. In this work, we propose the first unified policy for legged robots that jointly models force and position control learned without reliance on force sensors. By simulating diverse combinations of position and force commands alongside external disturbance forces, we use reinforcement learning to learn a policy that estimates forces from historical robot states and compensates for them through position and velocity adjustments. This policy enables a wide range of manipulation behaviors under varying force and position inputs, including position tracking, force application, force tracking, and compliant interactions. Furthermore, we demonstrate that the learned policy enhances trajectory-based imitation learning pipelines by incorporating essential contact information through its force estimation module, achieving approximately 39.5% higher success rates across four challenging contact-rich manipulation tasks compared to position-control policies. Extensive experiments on both a quadrupedal manipulator and a humanoid robot validate the versatility and robustness of the proposed policy across diverse scenarios.
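The abstract describes compensating estimated external forces through position and velocity adjustments. A minimal illustrative sketch of that idea (not the paper's actual learned policy — the function names, the stiffness-based force proxy, and all parameter values are assumptions) is an admittance-style update, where an external-force estimate inferred from recent tracking error shifts the position command toward the desired force:

```python
# Illustrative admittance-style sketch; NOT the paper's implementation.
# estimate_force: hypothetical proxy that infers external force from the
# average position-tracking error over a short history, scaled by an
# assumed contact stiffness.
def estimate_force(pos_history, cmd_history, stiffness=200.0):
    errors = [p - c for p, c in zip(pos_history, cmd_history)]
    return stiffness * sum(errors) / len(errors)

# admittance_update: shift the position command so the applied force
# moves toward the commanded force f_cmd (compliance gain is assumed).
def admittance_update(pos_cmd, f_est, f_cmd, compliance=0.005):
    return pos_cmd + compliance * (f_cmd - f_est)
```

Under this sketch, a zero force command with a positive force estimate pulls the position command back, yielding compliant behavior; a nonzero force command drives the robot to press into the contact.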