Visual servoing
Robotics
Artificial intelligence
Mobile robot
Object (grammar)
Computer science
Computer vision
Human-computer interaction
Robot control
Field (mathematics)
Control engineering
Engineering
Mathematics
Pure mathematics
Authors
Sichao Liu,Jianjing Zhang,Lihui Wang,Robert X. Gao
Source
Journal: CIRP Annals
[Elsevier]
Date: 2024-01-01
Volume/Issue: 73 (1): 13-16
Citations: 32
Identifier
DOI: 10.1016/j.cirp.2024.03.004
Abstract
Autonomous robots that understand human instructions can significantly enhance efficiency in human-robot assembly operations where robotic support is needed to handle unknown objects and/or provide on-demand assistance. This paper introduces a vision AI-based method for human-robot collaborative (HRC) assembly, enabled by a large language model (LLM). Upon 3D object reconstruction and pose establishment through neural object field modelling, a visual servoing-based mobile robotic system performs object manipulation and provides navigation guidance to the mobile robot. The LLM provides text-based logic reasoning and high-level control command generation for natural human-robot interactions. The effectiveness of the presented method is experimentally demonstrated.
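To illustrate the visual-servoing component mentioned in the abstract, the sketch below implements classical image-based visual servoing (IBVS), where the camera velocity command is computed as v = -λ L⁺ e from the error between current and desired image features. This is a minimal generic example, not the paper's implementation; the feature coordinates, depths, and gain are hypothetical values chosen only for demonstration.

```python
# Minimal IBVS sketch (generic illustration, not the paper's method).
# Assumes normalised point features with known (estimated) depths.
import numpy as np

def interaction_matrix(x: float, y: float, Z: float) -> np.ndarray:
    """Standard 2x6 interaction (image Jacobian) matrix for one point feature."""
    return np.array([
        [-1.0 / Z, 0.0,      x / Z,  x * y,          -(1.0 + x * x),  y],
        [0.0,      -1.0 / Z, y / Z,  1.0 + y * y,    -x * y,         -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5) -> np.ndarray:
    """Compute the 6-DoF camera twist v = -gain * pinv(L) @ e."""
    e = (np.asarray(features) - np.asarray(desired)).reshape(-1)   # stacked feature error
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])        # stacked Jacobians
    return -gain * np.linalg.pinv(L) @ e                           # (vx, vy, vz, wx, wy, wz)

if __name__ == "__main__":
    # Hypothetical current/desired feature positions and depth estimates (metres).
    current = [(0.10, 0.05), (-0.12, 0.06), (0.11, -0.08), (-0.09, -0.07)]
    target  = [(0.08, 0.04), (-0.10, 0.05), (0.09, -0.06), (-0.08, -0.05)]
    depth   = [1.2, 1.1, 1.3, 1.25]
    print(ibvs_velocity(current, target, depth))
```

In a full pipeline of the kind the abstract describes, the desired features would come from the reconstructed object pose, and a high-level command (e.g. which object to grasp or where to navigate) would be supplied by the LLM-based reasoning layer rather than hard-coded as above.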