Computer science
Modality (human-computer interaction)
Sentiment analysis
Transformer
Artificial intelligence
Natural language processing
Speech recognition
Authors
Kyeonghun Kim, Sanghyun Park
Identifier
DOI: 10.1016/j.inffus.2022.11.022
Abstract
Multimodal sentiment analysis predicts sentiment from multiple modalities such as text, vision, and speech. Because each modality has unique characteristics, various methods have been developed to fuse their features. However, traditional fusion methods lose some intra-modality and inter-modality information, so the overall characteristics of the modalities are not preserved. To solve this problem, we introduce a single-stream transformer, All-modalities-in-One BERT (AOBERT). The model is pre-trained on two tasks simultaneously: Multimodal Masked Language Modeling (MMLM) and Alignment Prediction (AP). These two pre-training tasks allow the model to capture the dependencies and relationships between modalities. AOBERT achieved state-of-the-art results on the CMU-MOSI, CMU-MOSEI, and UR-FUNNY datasets. Furthermore, ablation studies validating combinations of modalities, the effects of MMLM and AP, and fusion methods confirmed the effectiveness of the proposed model.
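The abstract describes a single-stream design: all three modalities are placed in one token sequence so a single transformer's self-attention can model intra- and inter-modality relations, with two pre-training heads (MMLM and AP) optimized jointly. Below is a minimal PyTorch sketch of that idea; the module names, feature dimensions, masking scheme, and the use of a leading [CLS]-style token for AP are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SingleStreamMultimodalEncoder(nn.Module):
    """Illustrative single-stream encoder: text, vision, and speech tokens
    share one transformer stack, so no separate cross-modal fusion module
    is needed (assumed reading of the AOBERT description)."""

    def __init__(self, vocab_size=30522, vision_dim=35, speech_dim=74,
                 d_model=768, n_heads=12, n_layers=6):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)   # text token embeddings
        self.vision_proj = nn.Linear(vision_dim, d_model)    # per-frame visual features
        self.speech_proj = nn.Linear(speech_dim, d_model)    # per-frame acoustic features
        self.modality_emb = nn.Embedding(3, d_model)         # 0=text, 1=vision, 2=speech
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.mmlm_head = nn.Linear(d_model, vocab_size)      # Multimodal Masked LM head
        self.ap_head = nn.Linear(d_model, 2)                 # Alignment Prediction head

    def forward(self, text_ids, vision_feats, speech_feats):
        t = self.token_emb(text_ids)
        v = self.vision_proj(vision_feats)
        s = self.speech_proj(speech_feats)
        x = torch.cat([t, v, s], dim=1)                      # one fused sequence
        dev = text_ids.device
        mod_ids = torch.cat([
            torch.zeros(t.shape[:2], dtype=torch.long, device=dev),
            torch.ones(v.shape[:2], dtype=torch.long, device=dev),
            torch.full(s.shape[:2], 2, dtype=torch.long, device=dev),
        ], dim=1)
        h = self.encoder(x + self.modality_emb(mod_ids))
        mmlm_logits = self.mmlm_head(h[:, :text_ids.size(1)])  # recover masked text tokens
        ap_logits = self.ap_head(h[:, 0])  # assumes a leading [CLS]-style token
        return mmlm_logits, ap_logits

# Joint pre-training: both objectives are optimized simultaneously.
# masked_labels uses -100 at unmasked positions; align_labels marks whether
# the vision/speech streams belong to the same utterance as the text.
def pretraining_loss(model, text_ids, vision_feats, speech_feats,
                     masked_labels, align_labels):
    mmlm_logits, ap_logits = model(text_ids, vision_feats, speech_feats)
    mmlm_loss = F.cross_entropy(mmlm_logits.transpose(1, 2),
                                masked_labels, ignore_index=-100)
    ap_loss = F.cross_entropy(ap_logits, align_labels)
    return mmlm_loss + ap_loss
```

Note the design choice this sketch highlights: because every modality lives in the same sequence, attention operates on all tokens at once, which matches the abstract's motivation for avoiding the information loss of separate fusion stages.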