Keywords
Pattern, Computer science, Artificial intelligence, Robustness (evolution), Big Five personality traits, Modality (human-computer interaction), Modal verb, Personality psychology, Personality, Feature extraction, Pattern recognition (psychology), Feature (linguistics), Benchmark (surveying), Machine learning, Psychology, Social psychology, Philosophy, Sociology, Gene, Chemistry, Biochemistry, Linguistics, Polymer chemistry, Geography, Social science, Geodesy
Authors
Yusong Wang, Dongyuan Li, Kotaro Funakoshi, Manabu Okumura
Identifier
DOI:10.1145/3591106.3592243
Abstract
Multi-modal personality traits recognition aims to recognize personality traits precisely by utilizing information from different modalities, and it has received increasing attention for its potential applications in human-computer interaction. Existing methods largely fail to extract distinguishable features, remove noise, and align features across modalities, which substantially degrades the accuracy of personality traits recognition. To address these issues, we propose an emotion-guided multi-modal fusion and contrastive learning framework for personality traits recognition. Specifically, we first use supervised contrastive learning to extract deeper, more distinguishable features from each modality. Then, given the close correlation between emotions and personality, we use an emotion-guided multi-modal fusion mechanism to guide the feature fusion, which eliminates noise and aligns the features from different modalities. Finally, we use an auto-fusion structure to strengthen the interaction between modalities and extract the features essential for final personality traits recognition. Extensive experiments on two benchmark datasets show that our method achieves state-of-the-art performance and robustness.
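The abstract's first step, supervised contrastive learning, pulls together embeddings of samples that share a label and pushes apart the rest. The sketch below is a generic NumPy illustration of that loss (in the style of Khosla et al., 2020), not the paper's actual implementation; the function name, temperature value, and toy inputs are assumptions for illustration only.

```python
import numpy as np

def supervised_contrastive_loss(features, labels, temperature=0.1):
    """Illustrative supervised contrastive loss on L2-normalized features.

    features: (N, D) array of embeddings; labels: (N,) integer array.
    Anchors are attracted to same-label samples and repelled from others.
    (Generic sketch; not the authors' code.)
    """
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = feats @ feats.T / temperature            # pairwise scaled cosine similarity
    n = len(labels)
    not_self = ~np.eye(n, dtype=bool)              # exclude each anchor from its own row
    sim_max = sim.max(axis=1, keepdims=True)       # subtract row max for numerical stability
    exp_sim = np.exp(sim - sim_max) * not_self
    log_prob = (sim - sim_max) - np.log(exp_sim.sum(axis=1, keepdims=True))
    pos_mask = (labels[:, None] == labels[None, :]) & not_self
    pos_counts = pos_mask.sum(axis=1)
    # average log-probability over each anchor's positives, then negate
    mean_log_prob_pos = (log_prob * pos_mask).sum(axis=1) / np.maximum(pos_counts, 1)
    return -(mean_log_prob_pos[pos_counts > 0]).mean()

# Toy check: embeddings clustered by label yield a much lower loss
# than embeddings whose positives point in different directions.
labels = np.array([0, 0, 1, 1])
clustered = np.array([[1.0, 0.0], [1.0, 0.01], [0.0, 1.0], [0.01, 1.0]])
scrambled = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.01], [0.01, 1.0]])
loss_good = supervised_contrastive_loss(clustered, labels)
loss_bad = supervised_contrastive_loss(scrambled, labels)
```

In the framework described above, a loss of this form would be applied per modality to sharpen class-discriminative structure before the emotion-guided fusion stage.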