Computer science
Artificial intelligence
Robustness (evolution)
Spatial analysis
Action recognition
Scalability
Pattern
Human behavior
Computer vision
Machine learning
Mathematics
Social science
Biochemistry
Chemistry
Statistics
Database
Sociology
Gene
Class (philosophy)
Authors
Fangtai Guo, Tianlei Jin, Shiqiang Zhu, Xiangming Xi, Wen Wang, Qiwei Meng, Wei Song, Jiakai Zhu
Source
Journal: IEEE Transactions on Image Processing
[Institute of Electrical and Electronics Engineers]
Date: 2023-01-01
Volume 32, pp. 4989-5003
Citations: 2
Identifier
DOI: 10.1109/tip.2023.3308750
Abstract
Human Action Recognition serves as a driving engine for many human-computer interaction applications. Most current research focuses on improving model generalization by integrating multiple homogeneous modalities, including RGB images, human poses, and optical flow. Furthermore, contextual interactions and out-of-context sign languages have been shown to depend on scene category and on the human per se. Attempts to integrate appearance features and human poses have shown positive results. However, owing to spatial errors and temporal ambiguities in human poses, existing methods suffer from poor scalability, limited robustness, and sub-optimal models. In this paper, inspired by the assumption that different modalities may maintain temporal consistency and spatial complementarity, we present a novel Bi-directional Co-temporal and Cross-spatial Attention Fusion Model (B2C-AFM). Our model is characterized by an asynchronous fusion strategy for multi-modal features along the temporal and spatial dimensions. In addition, novel explicit motion-oriented pose representations called Limb Flow Fields (Lff) are explored to alleviate the temporal ambiguity of human poses. Experiments on publicly available datasets validate our contributions, and extensive ablation studies show that B2C-AFM achieves robust performance across seen and unseen human actions. The code is publicly available.
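As a rough, non-authoritative illustration of the cross-modal attention fusion idea named in the abstract, the following minimal PyTorch sketch lets an RGB feature stream and a pose feature stream attend to each other along the temporal axis and then averages the two enriched streams. All module names, tensor shapes, and the averaging step are assumptions made for illustration; this is not the authors' B2C-AFM implementation, which is linked from the paper itself.

```python
import torch
import torch.nn as nn


class CrossModalAttentionFusion(nn.Module):
    """Minimal sketch of bi-directional cross-attention fusion between
    two modality streams (e.g. RGB appearance and pose features).

    Illustrative only: this is NOT the paper's B2C-AFM architecture.
    """

    def __init__(self, dim: int = 256, num_heads: int = 4):
        super().__init__()
        # RGB queries attend to pose keys/values, and vice versa.
        self.rgb_to_pose = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.pose_to_rgb = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_rgb = nn.LayerNorm(dim)
        self.norm_pose = nn.LayerNorm(dim)

    def forward(self, rgb: torch.Tensor, pose: torch.Tensor) -> torch.Tensor:
        # rgb, pose: (batch, time, dim) per-frame features from each modality.
        rgb_ctx, _ = self.rgb_to_pose(rgb, pose, pose)    # RGB enriched by pose
        pose_ctx, _ = self.pose_to_rgb(pose, rgb, rgb)    # pose enriched by RGB
        rgb = self.norm_rgb(rgb + rgb_ctx)                # residual + norm
        pose = self.norm_pose(pose + pose_ctx)
        # A real model would likely use a learned fusion head here;
        # simple averaging is a placeholder assumption.
        return (rgb + pose) / 2


if __name__ == "__main__":
    fusion = CrossModalAttentionFusion()
    rgb_feats = torch.randn(2, 16, 256)    # 2 clips, 16 frames, 256-d features
    pose_feats = torch.randn(2, 16, 256)
    fused = fusion(rgb_feats, pose_feats)
    print(fused.shape)  # torch.Size([2, 16, 256])
```

The paper describes its fusion as bi-directional, co-temporal, cross-spatial, and asynchronous across dimensions; the sketch above captures only the bi-directional cross-attention aspect under the stated assumptions.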