Computer science
Modality (human–computer interaction)
RGB color model
Artificial intelligence
Context (archaeology)
Pattern
Leverage (statistics)
Action recognition
Pattern recognition (psychology)
Speech recognition
Class (philosophy)
Paleontology
Social science
Sociology
Biology
Authors
Sumin Lee, Sangmin Woo, Yeonju Park, Muhammad Adi Nugroho, Changick Kim
Identifier
DOI: 10.1109/wacv56688.2023.00331
Abstract
In multi-modal action recognition, it is important to consider not only the complementary nature of different modalities but also global action content. In this paper, we propose a novel network, named Modality Mixer (M-Mixer) network, to leverage complementary information across modalities and temporal context of an action for multi-modal action recognition. We also introduce a simple yet effective recurrent unit, called Multi-modal Contextualization Unit (MCU), which is a core component of M-Mixer. Our MCU temporally encodes a sequence of one modality (e.g., RGB) with action content features of other modalities (e.g., depth, IR). This process encourages M-Mixer to exploit global action content and also to supplement complementary information of other modalities. As a result, our proposed method outperforms state-of-the-art methods on NTU RGB+D 60, NTU RGB+D 120, and NW-UCLA datasets. Moreover, we demonstrate the effectiveness of M-Mixer by conducting comprehensive ablation studies.
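The abstract describes the MCU as a recurrent unit that temporally encodes one modality's frame sequence while conditioning each step on pooled "action content" features from the other modalities. The paper's actual architecture is not given here, so the following is only a minimal NumPy sketch of that idea under assumed shapes and a simple additive gating form; the feature dimensions, pooling, and weight structure are all illustrative assumptions, not the authors' design.

```python
import numpy as np

rng = np.random.default_rng(0)

def mcu_step(h, x_t, c, W, U, V):
    # One hypothetical recurrent step: mix the current-modality input x_t,
    # the previous hidden state h, and the cross-modal action content c.
    # (Assumed form; the paper's MCU is not specified in the abstract.)
    return np.tanh(W @ x_t + U @ h + V @ c)

T, d = 4, 8                         # sequence length, feature dim (assumed)
rgb = rng.normal(size=(T, d))       # per-frame RGB features (placeholder)
depth_ir = rng.normal(size=(T, d))  # features from other modalities (placeholder)
content = depth_ir.mean(axis=0)     # pooled global "action content" vector

W, U, V = (rng.normal(scale=0.1, size=(d, d)) for _ in range(3))
h = np.zeros(d)
for t in range(T):                  # temporally encode RGB, conditioned on content
    h = mcu_step(h, rgb[t], content, W, U, V)
```

The final state `h` then summarizes the RGB sequence with complementary cross-modal context injected at every step, which is the role the abstract attributes to the MCU.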