Artificial intelligence
Computer vision
Computer science
Process (computing)
Soft robotics
Digitization
Set (abstract data type)
Robot
Robotics
Human–computer interaction
Operating system
Programming language
Authors
Ning Guo, Xudong Han, Shuqiao Zhong, Zhiyuan Zhou, Jian Lin, Fang Wan, Chaoyang Song
Identifiers
DOI:10.1002/aisy.202400022
Abstract
Incorporating authentic tactile interactions into virtual environments presents a notable challenge for the emerging development of soft robotic metamaterials. In this study, a vision‐based approach is introduced to learn proprioceptive interactions by simultaneously reconstructing the shape and touch of a soft robotic metamaterial (SRM) during physical engagement. The SRM design is optimized to the size of a finger with enhanced adaptability in 3D interactions and incorporates a see‐through viewing field inside, which can be visually captured by a miniature camera underneath to provide a rich set of image features for touch digitization. Employing constrained geometric optimization, the proprioceptive process with aggregated multi‐handles is modeled. This approach facilitates real‐time, precise, and realistic estimation of the finger's mesh deformation within a virtual environment. A data‐driven learning model is also proposed to estimate touch positions, achieving reliable results with R² scores of 0.9681, 0.9415, and 0.9541 along the x, y, and z axes. Furthermore, the robust performance of the proposed methods is demonstrated in touch‐based human–cybernetic interfaces and human–robot collaborative grasping. This study opens the door to future applications in touch‐based digital twin interactions through vision‐based soft proprioception.
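The abstract describes a data-driven model that regresses the 3D touch position from image features captured by the camera inside the SRM finger and reports per-axis R² scores. The sketch below is an illustrative assumption rather than the paper's pipeline: it shows how such a feature-to-position regressor and its per-axis R² evaluation could be set up with a generic multilayer perceptron; the synthetic dataset, feature dimension, and network size are placeholders.

```python
# Minimal sketch (not the authors' implementation): regress touch position (x, y, z)
# from in-finger camera image features and report per-axis R^2 scores.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

def make_synthetic_dataset(n=2000, feat_dim=128, seed=0):
    """Stand-in for (image-feature, touch-position) pairs collected on the SRM finger."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, feat_dim))          # image features from the internal camera
    W = rng.normal(size=(feat_dim, 3)) * 0.1    # unknown linear mapping to contact coordinates
    y = X @ W + 0.05 * rng.normal(size=(n, 3))  # noisy touch position (x, y, z)
    return X, y

X, y = make_synthetic_dataset()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Small multilayer perceptron as the touch-localization model (architecture is an assumption).
model = MLPRegressor(hidden_layer_sizes=(256, 128), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# Per-axis R^2, analogous to the 0.9681 / 0.9415 / 0.9541 scores quoted for x, y, z.
pred = model.predict(X_test)
for axis, name in enumerate("xyz"):
    print(f"R^2 ({name}): {r2_score(y_test[:, axis], pred[:, axis]):.4f}")
```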