Keywords
Computer science
Human–computer interaction
Kinematics
Context
Interface
Artificial intelligence
Decoding methods
Topology (circuits)
Sensory system
Proof of concept
Pattern
Crosstalk
Motion (physics)
Robot
Affordance
Object
Modality (human–computer interaction)
Formalism
Multi-touch
Signal
Differentiable function
Computation
Rotation formalisms in three dimensions
Cognitive neuroscience of visual object recognition
Stimulus modality
Perspective (graphics)
Authors
Shifan Yu, Zhenzhou Ji, Lei Liu, Zijian Huang, Yan-Hao Luo, Huasen Wang, Ruize Wangyuan, Ziquan Guo, Zhong Chen, Qingliang Liao, Yuanjin Zheng, Xinqin Liao
Identifier
DOI: 10.1038/s41467-025-65624-z
Abstract
Proprioception and touch serve as complementary sensory modalities to coordinate hand kinematics and recognize users' intent for precise interactions. However, current motion-tracking electronics remain bulky and insufficiently precise. Accurately decoding both modalities is also challenging owing to the mechanical crosstalk between endogenous and exogenous deformations. Here, we report a hyperconformal dual-modal (HDM) metaskin for interactive hand motion interpretation. The metaskin integrates a strongly coupled hydrophilic interface with a two-step transfer strategy to minimize interfacial mechanical losses. The 10-μm-scale hyperconformal film is highly sensitive to intricate skin stretches while minimizing signal distortion. It accurately tracks skin stretches as well as touch locations and translates them into polar signals that are individually salient. This approach enables a differentiable signaling topology within a single data channel without adding structural complexity to the metaskin. When combined with temporal differential calculations and a time-series machine-learning network, the metaskin extracts interactive context and action cues from the low-dimensional data. This capability is further exemplified through demonstrations of contextual navigation, typing and control integration, and multi-scenario object interaction. We demonstrate this fundamental approach in advanced skin-integrated electronics, highlighting its potential for instinctive interaction paradigms and paving the way for augmented somatosensation recognition.
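The abstract describes the decoding pipeline only at a high level: temporal differencing of a single-channel signal, followed by a time-series network that classifies the resulting low-dimensional data. The sketch below illustrates that general idea, not the authors' implementation; the window length, GRU size, class count, and all function names are illustrative assumptions.

```python
# Minimal sketch of single-channel decoding: first-order temporal
# differencing followed by a small time-series network. All sizes and
# hyperparameters here are assumptions, not the paper's configuration.
import torch
import torch.nn as nn


def temporal_difference(x: torch.Tensor) -> torch.Tensor:
    """First-order temporal difference of a (batch, time) signal.

    Differencing suppresses slow baseline drift so that transient
    stretch/touch events stand out within the single data channel.
    """
    return x[:, 1:] - x[:, :-1]


class TimeSeriesDecoder(nn.Module):
    """GRU classifier mapping a differenced 1-D window to an action class."""

    def __init__(self, hidden: int = 32, n_classes: int = 5):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time) raw single-channel samples
        dx = temporal_difference(x).unsqueeze(-1)  # (batch, time-1, 1)
        _, h_n = self.gru(dx)                      # h_n: (1, batch, hidden)
        return self.head(h_n.squeeze(0))           # (batch, n_classes) logits


if __name__ == "__main__":
    decoder = TimeSeriesDecoder()
    windows = torch.randn(8, 200)   # 8 hypothetical windows of 200 samples
    logits = decoder(windows)
    print(logits.shape)             # torch.Size([8, 5])
```

In a pipeline like this, the differencing step plays the role the abstract assigns to "temporal differential calculations", while the recurrent head stands in for the unspecified time-series machine-learning network.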