Authors
Hamid Reza Vaezi Joze,Amirreza Shaban,Michael L. Iuzzolino,Kazuhito Koishida
Identifier
DOI:10.1109/cvpr42600.2020.01330
Abstract
In late fusion, each modality is processed in a separate unimodal Convolutional Neural Network (CNN) stream and the scores of each modality are fused at the end. Due to its simplicity, late fusion is still the predominant approach in many state-of-the-art multimodal applications. In this paper, we present a simple neural network module for leveraging the knowledge from multiple modalities in convolutional neural networks. The proposed unit, named Multimodal Transfer Module (MMTM), can be added at different levels of the feature hierarchy, enabling slow modality fusion. Using squeeze and excitation operations, MMTM utilizes the knowledge of multiple modalities to recalibrate the channel-wise features in each CNN stream. Unlike other intermediate fusion methods, the proposed module can be used for feature modality fusion in convolution layers with different spatial dimensions. Another advantage of the proposed method is that it can be added between unimodal branches with minimal changes to their network architectures, allowing each branch to be initialized with existing pretrained weights. Experimental results show that our framework improves the recognition accuracy of well-known multimodal networks. We demonstrate state-of-the-art or competitive performance on four datasets that span the task domains of dynamic hand gesture recognition, speech enhancement, and action recognition with RGB and body joints.
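The abstract describes the MMTM mechanism at a high level: squeeze each stream's features to a channel descriptor, combine the descriptors into a joint representation, and use per-modality excitation gates to recalibrate channels in each stream. The sketch below illustrates that flow with plain NumPy; the shapes, layer sizes, and random weights are illustrative assumptions, not the paper's configuration, and the ReLU/sigmoid choices are the usual squeeze-and-excitation conventions rather than a faithful reimplementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def mmtm(feat_a, feat_b, weights):
    """Sketch of one Multimodal Transfer Module (MMTM) step.

    feat_a: (N, Ca, Ha, Wa) -- e.g. an RGB stream's feature map
    feat_b: (N, Cb, Tb)     -- e.g. a skeleton/audio stream's features
    The two streams may have different spatial/temporal dimensions;
    only channel statistics interact, which is why MMTM can fuse
    layers of mismatched spatial size.
    """
    W_joint, W_a, W_b = weights
    # Squeeze: global average pooling over all non-channel axes.
    sq_a = feat_a.mean(axis=(2, 3))            # (N, Ca)
    sq_b = feat_b.mean(axis=2)                 # (N, Cb)
    # Joint representation from the concatenated squeezed descriptors.
    joint = np.maximum(np.concatenate([sq_a, sq_b], axis=1) @ W_joint, 0.0)
    # Excitation: per-modality channel gates via a sigmoid (assumed form).
    gate_a = 1.0 / (1.0 + np.exp(-(joint @ W_a)))   # (N, Ca)
    gate_b = 1.0 / (1.0 + np.exp(-(joint @ W_b)))   # (N, Cb)
    # Recalibrate: scale each channel, broadcasting over spatial dims.
    return feat_a * gate_a[:, :, None, None], feat_b * gate_b[:, :, None]

# Illustrative sizes (not from the paper).
Ca, Cb, Cz = 8, 6, 4
weights = (rng.normal(size=(Ca + Cb, Cz)),
           rng.normal(size=(Cz, Ca)),
           rng.normal(size=(Cz, Cb)))
a = rng.normal(size=(2, Ca, 5, 5))
b = rng.normal(size=(2, Cb, 7))
out_a, out_b = mmtm(a, b, weights)
print(out_a.shape, out_b.shape)  # shapes are preserved per stream
```

Because the recalibrated outputs keep each stream's original shape, such a module can be dropped between existing unimodal branches without altering the layers before or after it, which is consistent with the abstract's claim about reusing pretrained weights.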