Visual servoing
Artificial intelligence
Robotics
Robustness
Computer vision
Computer science
Authors
Ibrahim Abdulhafiz, Ali Nazari, Taha Abbasi-Hashemi, Amir Jalali, Kourosh Zareinia, Sajad Saeedi, Farrokh Janabi-Sharifi
Identifier
DOI: 10.1109/CASE49997.2022.9926723
Abstract
Vision-based control offers significant potential for end-point positioning of continuum robots under physical sensing limitations. Traditional visual servoing requires feature extraction and tracking, followed by full or partial pose estimation, which limits the controller's efficiency. We hypothesize that employing deep learning models and implementing direct visual servoing can effectively resolve this issue by eliminating those intermediate steps, enabling control of a continuum robot without requiring an exact system model. This paper presents the control of a single-section tendon-driven continuum robot using a modified VGG-16 deep learning network and an eye-in-hand direct visual servoing approach. The proposed algorithm is first developed in the Blender simulation environment using only one input image of the target and is then implemented on a real robot. The convergence and accuracy of the results in normal, shadowed, and occluded scenes demonstrate the effectiveness and robustness of the proposed controller.
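The core idea in the abstract is to replace the classical feature-extraction and pose-estimation pipeline with a network that maps images directly to actuation commands. The following is a minimal sketch of what such a controller could look like, assuming a PyTorch/torchvision VGG-16 backbone; the two-stream input (current eye-in-hand frame stacked with the stored target image), the three-tendon output, and all layer sizes are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch (not the paper's code) of a direct visual-servoing controller
# built on a modified VGG-16. Assumes PyTorch and torchvision; the
# 6-channel input and 3-DOF tendon output are hypothetical choices.
import torch
import torch.nn as nn
from torchvision.models import vgg16


class DirectVisualServoNet(nn.Module):
    def __init__(self, n_actuators: int = 3):
        super().__init__()
        backbone = vgg16(weights=None).features  # VGG-16 conv stack
        # Modify the first conv layer to accept 6 channels: the current
        # camera image stacked with the single target image.
        first = nn.Conv2d(6, 64, kernel_size=3, padding=1)
        self.features = nn.Sequential(first, *list(backbone.children())[1:])
        self.pool = nn.AdaptiveAvgPool2d((7, 7))
        # A regression head replaces VGG's classifier: it maps image
        # features directly to actuation commands, with no explicit
        # feature tracking or pose estimation in between.
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(512 * 7 * 7, 1024),
            nn.ReLU(inplace=True),
            nn.Linear(1024, n_actuators),
        )

    def forward(self, current: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        x = torch.cat([current, target], dim=1)  # (B, 6, H, W)
        return self.head(self.pool(self.features(x)))


# Closed-loop use: the stored target image stays fixed while each new
# eye-in-hand frame is fed through the network, and the predicted
# tendon commands are applied until the image error converges.
net = DirectVisualServoNet()
current = torch.rand(1, 3, 224, 224)
target = torch.rand(1, 3, 224, 224)
tendon_command = net(current, target)  # shape (1, 3)
```

Because the controller consumes raw images end to end, degradations such as shadows or partial occlusion perturb the network's input rather than breaking a feature tracker outright, which is consistent with the robustness the abstract reports in shadowed and occluded scenes.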