Computer science
Spectrogram
Speech recognition
Feature extraction
Convolutional neural network
Artificial intelligence
Deep learning
Pattern
Emotion recognition
Feature (linguistics)
Audiovisual
Pattern recognition (psychology)
Multimedia
Sociology
Philosophy
Linguistics
Social science
Authors
Tahir Hussain,W. Wang,Nidhal Bouaynaya,Hassan M. Fathallah‐Shaykh,Lyudmila Mihaylova
Identifier
DOI: 10.23919/fusion49751.2022.9841342
Abstract
Human emotions can be expressed in data with multiple modalities, e.g. video, audio and text. An automated system for emotion recognition needs to address a number of challenging issues, including feature extraction and dealing with variations and noise in the data. Deep learning has been used extensively in recent years, offering excellent performance in emotion recognition. This work presents a new method based on audio and visual modalities, where visual cues facilitate the detection of speech and non-speech frames as well as the emotional state of the speaker. Different from previous works, we propose the use of novel speech features, e.g. the Wavegram, which is extracted with a one-dimensional Convolutional Neural Network (CNN) learned directly from time-domain waveforms, and Wavegram-Logmel features, which combine the Wavegram with the log mel spectrogram. The system is then trained in an end-to-end fashion on the SAVEE database, also taking advantage of the correlations among the streams. It is shown that the proposed approach outperforms traditional and state-of-the-art deep learning approaches built separately on handcrafted auditory and visual features for the prediction of spontaneous and natural emotions.
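The abstract describes two learned audio front ends: a Wavegram extracted by a 1-D CNN operating directly on the raw waveform, and a Wavegram-Logmel feature that combines it with a log mel spectrogram. The sketch below (PyTorch/torchaudio) is a minimal illustration of that general idea; the class names, layer sizes and strides are hypothetical assumptions, not the authors' published architecture.

```python
# Minimal Wavegram-Logmel sketch. All layer sizes, strides and class names
# are hypothetical illustrations, not the paper's exact architecture.
import torch
import torch.nn as nn
import torchaudio

class WavegramExtractor(nn.Module):
    """1-D CNN that maps a raw waveform to a learned time-frequency map."""
    def __init__(self, freq_bins=64):
        super().__init__()
        # Strided 1-D convolutions progressively downsample the waveform;
        # the output channels play the role of learned "frequency" bins.
        self.conv = nn.Sequential(
            nn.Conv1d(1, 64, kernel_size=11, stride=5, padding=5),
            nn.BatchNorm1d(64), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=3, stride=8, padding=1),
            nn.BatchNorm1d(128), nn.ReLU(),
            nn.Conv1d(128, freq_bins, kernel_size=3, stride=8, padding=1),
            nn.BatchNorm1d(freq_bins), nn.ReLU(),
        )

    def forward(self, wav):                     # wav: (batch, samples)
        return self.conv(wav.unsqueeze(1))      # -> (batch, freq_bins, frames)

class WavegramLogmel(nn.Module):
    """Stacks the learned Wavegram with a log mel spectrogram as 2 channels."""
    def __init__(self, sample_rate=16000, n_mels=64):
        super().__init__()
        self.wavegram = WavegramExtractor(freq_bins=n_mels)
        # hop_length matches the CNN's total stride (5 * 8 * 8 = 320 samples)
        # so the two feature maps have (almost) the same frame rate.
        self.melspec = torchaudio.transforms.MelSpectrogram(
            sample_rate=sample_rate, n_fft=1024, hop_length=320, n_mels=n_mels)
        self.to_db = torchaudio.transforms.AmplitudeToDB()

    def forward(self, wav):                     # wav: (batch, samples)
        wg = self.wavegram(wav)                 # (batch, n_mels, T1)
        lm = self.to_db(self.melspec(wav))      # (batch, n_mels, T2)
        T = min(wg.shape[-1], lm.shape[-1])     # trim the one-frame mismatch
        return torch.stack([wg[..., :T], lm[..., :T]], dim=1)  # (batch, 2, n_mels, T)

wav = torch.randn(2, 16000)                     # two 1-second clips at 16 kHz
print(WavegramLogmel()(wav).shape)              # torch.Size([2, 2, 64, 50])
```

The resulting two-channel "image" can then be fed to a 2-D CNN backbone; the end-to-end training on SAVEE and the audio-visual stream fusion described in the abstract are outside the scope of this sketch.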