Modality (human-computer interaction)
Bottleneck
Pattern
Computer science
Fusion
Artificial intelligence
Coding (set theory)
Perception
Machine learning
Set (abstract data type)
Psychology
Philosophy
Sociology
Embedded system
Neuroscience
Programming language
Linguistics
Social science
Authors
Arsha Nagrani, Shan Yang, Anurag Arnab, Aren Jansen, Cordelia Schmid, Chen Sun
Source
Journal: Cornell University - arXiv
Date: 2021-01-01
Citations: 212
Identifier
DOI: 10.48550/arxiv.2107.00135
Abstract
Humans perceive the world by concurrently processing and fusing high-dimensional inputs from multiple modalities such as vision and audio. Machine perception models, in stark contrast, are typically modality-specific and optimised for unimodal benchmarks, and hence late-stage fusion of final representations or predictions from each modality ('late fusion') is still a dominant paradigm for multimodal video classification. Instead, we introduce a novel transformer-based architecture that uses 'fusion bottlenecks' for modality fusion at multiple layers. Compared to traditional pairwise self-attention, our model forces information between different modalities to pass through a small number of bottleneck latents, requiring the model to collate and condense the most relevant information in each modality and only share what is necessary. We find that such a strategy improves fusion performance, at the same time reducing computational cost. We conduct thorough ablation studies, and achieve state-of-the-art results on multiple audio-visual classification benchmarks including Audioset, Epic-Kitchens and VGGSound. All code and models will be released.
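The 'fusion bottleneck' idea in the abstract can be illustrated with a short sketch. Below is a minimal, illustrative PyTorch sketch (not the authors' released implementation): each modality's tokens are concatenated with a small set of learned bottleneck latents and passed through that modality's own transformer layer, so cross-modal information can only flow through the bottlenecks. The class name, layer sizes, and the sequential video-then-audio update order are assumptions made for illustration.

import torch
import torch.nn as nn

class BottleneckFusionLayer(nn.Module):
    """One fusion layer: modalities exchange information only via a few latents.

    Illustrative sketch of the bottleneck-fusion idea, not the paper's exact layer.
    """
    def __init__(self, dim=256, num_heads=8, num_bottlenecks=4):
        super().__init__()
        # Learned bottleneck latents, shared across the batch.
        self.bottleneck = nn.Parameter(0.02 * torch.randn(1, num_bottlenecks, dim))
        # One standard transformer encoder layer per modality (illustrative choice).
        self.video_layer = nn.TransformerEncoderLayer(dim, num_heads, batch_first=True)
        self.audio_layer = nn.TransformerEncoderLayer(dim, num_heads, batch_first=True)

    def forward(self, video_tokens, audio_tokens):
        b, nv = video_tokens.shape[0], video_tokens.shape[1]
        na = audio_tokens.shape[1]
        z = self.bottleneck.expand(b, -1, -1)
        # Video tokens attend to themselves and to the bottlenecks only.
        v = self.video_layer(torch.cat([video_tokens, z], dim=1))
        video_out, z = v[:, :nv], v[:, nv:]
        # Audio tokens see the (video-updated) bottlenecks, never the raw video tokens.
        a = self.audio_layer(torch.cat([audio_tokens, z], dim=1))
        audio_out, z = a[:, :na], a[:, na:]
        return video_out, audio_out, z

# Toy usage: 196 video patch tokens and 64 audio spectrogram tokens per clip.
layer = BottleneckFusionLayer()
video = torch.randn(2, 196, 256)
audio = torch.randn(2, 64, 256)
v_out, a_out, z_out = layer(video, audio)
print(v_out.shape, a_out.shape, z_out.shape)  # (2, 196, 256) (2, 64, 256) (2, 4, 256)

Because cross-modal exchange is restricted to a handful of bottleneck tokens, each modality's attention cost stays close to its unimodal self-attention cost instead of growing with the product of the two token counts, which is the efficiency benefit the abstract refers to.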