Authors
Wei Tian, Pai Zheng, Shufei Li, Lihui Wang
Identifier
DOI:10.1002/aisy.202300359
Abstract
Human–robot interaction (HRI) has gained prominence in recent years, and multimodal communication and control strategies are needed to guarantee a safe, efficient, and intelligent HRI experience. Despite considerable attention to multimodal HRI, comprehensive surveys that delineate the individual modalities and analyze their combinations in depth remain scarce, limiting holistic understanding and future advancement. This article bridges that gap through a thorough exploration of multimodal HRI, concentrating on four principal modalities: vision, auditory and language, haptics, and physiological sensing. The review covers algorithmic analysis, interface devices, and application scenarios for each. Distinctively, it connects multimodal HRI with cognitive science, examining the three dimensions of perception, cognition, and action to demystify the algorithms underlying multimodal HRI. Finally, it highlights open empirical challenges and outlines future directions for multimodal HRI in human-centric smart manufacturing.
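The perception–cognition–action framing and the four modalities named in the abstract can be illustrated with a minimal sketch. This is not code from the paper: the modality names, intents, weights, and command mapping below are all illustrative assumptions, showing one common pattern (late fusion by confidence-weighted voting) for combining multimodal inputs into a single robot action.

```python
# Illustrative sketch (not from the paper): a perception -> cognition -> action
# loop that fuses four hypothetical modality readings into one robot command.
from dataclasses import dataclass

@dataclass
class ModalityReading:
    name: str          # perception channel, e.g. "vision", "audio", "haptics"
    intent: str        # intent hypothesis decoded from this channel
    confidence: float  # decoder confidence in [0, 1]

def fuse_intents(readings: list[ModalityReading]) -> str:
    """Cognition step: late fusion by confidence-weighted voting."""
    scores: dict[str, float] = {}
    for r in readings:
        scores[r.intent] = scores.get(r.intent, 0.0) + r.confidence
    return max(scores, key=scores.get)

def act(intent: str) -> str:
    """Action step: map the fused intent to a (hypothetical) robot command."""
    commands = {"handover": "open_gripper", "stop": "halt_motion"}
    return commands.get(intent, "hold_position")

# Example: haptic and physiological channels override vision and audio.
readings = [
    ModalityReading("vision", "handover", 0.7),
    ModalityReading("audio", "handover", 0.6),
    ModalityReading("haptics", "stop", 0.9),
    ModalityReading("physiology", "stop", 0.5),
]
fused = fuse_intents(readings)  # "stop" wins: 0.9 + 0.5 > 0.7 + 0.6
print(fused, act(fused))        # prints: stop halt_motion
```

Late fusion is only one option; the survey's scope also covers modality-specific algorithms and interface devices that would replace these placeholder decoders in a real system.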