Computer science
Artificial intelligence
RGB color model
Graphics
Pattern recognition (psychology)
Motion capture
Theoretical computer science
Algorithm
Motion (physics)
Authors
Yisheng Zhu,Hui Shuai,Guangcan Liu,Qingshan Liu
Identifier
DOI:10.1109/tip.2022.3230249
Abstract
The ability to capture joint connections in complicated motion is essential for skeleton-based action recognition. However, earlier approaches may not be able to fully explore this connection in either the spatial or temporal dimension due to fixed or single-level topological structures and insufficient temporal modeling. In this paper, we propose a novel multilevel spatial-temporal excited graph network (ML-STGNet) to address the above problems. In the spatial configuration, we decouple the learning of the human skeleton into general and individual graphs by designing a multilevel graph convolution (ML-GCN) network and a spatial data-driven excitation (SDE) module, respectively. ML-GCN leverages joint-level, part-level, and body-level graphs to comprehensively model the hierarchical relations of a human body. Based on this, SDE is further introduced to handle the diverse joint relations of different samples in a data-dependent way. This decoupling approach not only increases the flexibility of the model for graph construction but also enables the generality to adapt to various data samples. In the temporal configuration, we apply the concept of temporal difference to the human skeleton and design an efficient temporal motion excitation (TME) module to highlight the motion-sensitive features. Furthermore, a simplified multiscale temporal convolution (MS-TCN) network is introduced to enrich the expression ability of temporal features. Extensive experiments on the four popular datasets NTU-RGB+D, NTU-RGB+D 120, Kinetics Skeleton 400, and Toyota Smarthome demonstrate that ML-STGNet gains considerable improvements over the existing state of the art.
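The temporal motion excitation (TME) idea in the abstract, applying a temporal difference to skeleton features and using it to highlight motion-sensitive channels, can be illustrated with a minimal PyTorch-style sketch. This is not the authors' exact TME module: the class name, channel-reduction ratio, pooling, and gating choices below are assumptions made for illustration; only the tensor layout (batch, channels, frames, joints) and the frame-to-frame difference follow the description.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TemporalMotionExcitation(nn.Module):
    """Hypothetical sketch of a temporal-difference excitation gate.

    Input features have shape (N, C, T, V): batch, channels, frames,
    joints. The reduction ratio `r` and layer choices are illustrative,
    not the paper's exact TME design.
    """

    def __init__(self, channels, r=4):
        super().__init__()
        hidden = channels // r
        self.reduce = nn.Conv2d(channels, hidden, kernel_size=1)
        self.expand = nn.Conv2d(hidden, channels, kernel_size=1)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        # x: (N, C, T, V) skeleton feature map
        h = self.reduce(x)
        # Temporal difference: feature at frame t+1 minus frame t,
        # zero-padded at the last frame to keep the length T.
        diff = h[:, :, 1:, :] - h[:, :, :-1, :]
        diff = F.pad(diff, (0, 0, 0, 1))
        # Pool over joints, expand back to C channels, squash to a gate.
        gate = self.sigmoid(self.expand(diff.mean(dim=-1, keepdim=True)))
        # Excite motion-sensitive channels; the residual keeps static cues.
        return x + x * gate


# Usage: a random batch of 2 clips, 64 channels, 16 frames, 25 joints.
feat = torch.randn(2, 64, 16, 25)
out = TemporalMotionExcitation(64)(feat)
print(out.shape)  # torch.Size([2, 64, 16, 25])
```

The gating-plus-residual form mirrors a common excitation pattern: channels whose temporal difference is large are amplified, while the residual path preserves the original features for static poses.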