Abstract

Proprioception and touch serve as complementary sensory modalities that coordinate hand kinematics and convey users' intent for precise interactions. However, current motion-tracking electronics remain bulky and insufficiently precise. Accurately decoding both modalities is also challenging owing to mechanical crosstalk between endogenous and exogenous deformations. Here, we report a hyperconformal dual-modal (HDM) metaskin for interactive hand motion interpretation. The metaskin combines a strongly coupled hydrophilic interface with a two-step transfer strategy to minimize interfacial mechanical losses. The resulting 10-μm-scale hyperconformal film is highly sensitive to intricate skin stretches while minimizing signal distortion. It accurately tracks skin stretches as well as touch locations and translates them into polar signals that are individually salient. This approach enables a differentiable signaling topology within a single data channel without adding structural complexity to the metaskin. When combined with temporal differential calculations and a time-series machine learning network, the metaskin extracts interactive context and action cues from the low-dimensional data. This capability is further exemplified through demonstrations of contextual navigation, typing and control integration, and multi-scenario object interaction. We demonstrate this fundamental approach in advanced skin-integrated electronics, highlighting its potential for instinctive interaction paradigms and paving the way for augmented somatosensation recognition.
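To make the decoding pipeline described above concrete, the following is a minimal sketch, assuming a single-channel metaskin signal, of temporal differencing followed by a small time-series network (here a GRU classifier in PyTorch). The window size, class count, and architecture are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (assumption, not the authors' code): decode interaction
# cues from a single-channel metaskin signal via first-order temporal
# differencing plus a small time-series network.
import torch
import torch.nn as nn

WINDOW = 128        # samples per decoding window (assumed)
NUM_CLASSES = 5     # number of distinct stretch/touch action cues (assumed)

def temporal_difference(x: torch.Tensor) -> torch.Tensor:
    """First-order temporal difference of a (batch, time) signal.

    Differencing suppresses slow baseline drift so that transient,
    polarity-carrying stretch and touch events dominate the input
    seen by the recurrent network.
    """
    return x[:, 1:] - x[:, :-1]

class CueClassifier(nn.Module):
    """GRU over the differenced single-channel stream -> action-cue logits."""

    def __init__(self, hidden: int = 32, num_classes: int = NUM_CLASSES):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time) raw channel -> differenced -> (batch, time-1, 1)
        d = temporal_difference(x).unsqueeze(-1)
        _, h = self.gru(d)            # h: (1, batch, hidden) for one layer
        return self.head(h.squeeze(0))

if __name__ == "__main__":
    model = CueClassifier()
    stream = torch.randn(4, WINDOW)   # stand-in for recorded signal windows
    logits = model(stream)
    print(logits.shape)               # torch.Size([4, 5])
```

First-order differencing here stands in for the temporal differential calculations mentioned in the abstract; any recurrent or convolutional time-series model could replace the GRU.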