Computer science
Speech recognition
Spectrogram
Artificial intelligence
Transformation (genetics)
Set (abstract data type)
Block (permutation group theory)
Feature (linguistics)
Convolution (computer science)
Representation (politics)
Signal (programming language)
Pattern recognition (psychology)
Mathematics
Artificial neural network
Law
Philosophy
Programming language
Chemistry
Geometry
Gene
Politics
Biochemistry
Linguistics
Political science
Authors
Mingyue Niu, Jianhua Tao, Yongwei Li, Yong Qin, Ya Li
Identifier
DOI: 10.1109/taffc.2023.3272553
Abstract
Physiological reports have confirmed that there are differences in speech signals between depressed and healthy individuals. Therefore, as an application of affective computing, automatic depression-level prediction from speech signals has attracted researchers' attention; existing methods often estimate an individual's depression severity from the Fourier or Mel spectrograms of the speech signal. However, some studies on speech emotion recognition suggest that directly modeling the raw speech signal is more helpful for extracting emotion-related information. Inspired by this observation, we develop WavDepressionNet, which models raw speech signals to improve prediction accuracy. In our method, a representation block is proposed to learn a set of basis vectors that construct an optimal transformation space and produce the transformation result of the speech signal (named the Depression Feature Map, DFM), facilitating the perception of depression cues. We further propose an assessment block, which not only uses a designed spatiotemporal self-calibration mechanism to calibrate the DFM and highlight useful elements, but also aggregates the calibrated DFM across various temporal ranges with dilated convolutions. Experimental results on the AVEC 2013 and AVEC 2014 depression databases demonstrate the effectiveness of our approach over previous works.
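To make the described pipeline concrete, the sketch below is a minimal, hypothetical PyTorch illustration (not the authors' released code): a "representation block" that projects the raw waveform onto a learned set of basis vectors via a strided 1-D convolution to produce a DFM-like feature map, and an "assessment block" that applies a simple sigmoid gate as a stand-in for the spatiotemporal self-calibration and then aggregates the calibrated map over several temporal ranges with dilated convolutions. All layer sizes, window lengths, and dilation rates are assumptions for illustration only.

```python
# Hypothetical sketch of a raw-waveform depression-score regressor in the
# spirit of the abstract; not the paper's actual architecture or parameters.
import torch
import torch.nn as nn

class RepresentationBlock(nn.Module):
    """Project raw waveform frames onto a learned set of basis vectors
    (strided 1-D convolution), producing a DFM-like feature map."""
    def __init__(self, n_basis=64, win=400, hop=160):
        super().__init__()
        self.proj = nn.Conv1d(1, n_basis, kernel_size=win, stride=hop)
        self.norm = nn.BatchNorm1d(n_basis)

    def forward(self, wav):                  # wav: (batch, 1, samples)
        return torch.relu(self.norm(self.proj(wav)))   # (batch, n_basis, frames)

class AssessmentBlock(nn.Module):
    """Calibrate the feature map with a sigmoid gate (a simple stand-in for
    the self-calibration mechanism), then aggregate it across temporal ranges
    with dilated convolutions and regress a single severity score."""
    def __init__(self, channels=64, dilations=(1, 2, 4)):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv1d(channels, channels, 1), nn.Sigmoid())
        self.branches = nn.ModuleList(
            nn.Conv1d(channels, channels, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        )
        self.head = nn.Linear(channels, 1)

    def forward(self, dfm):                  # dfm: (batch, channels, frames)
        calibrated = dfm * self.gate(dfm)    # element-wise re-weighting
        pooled = sum(torch.relu(b(calibrated)) for b in self.branches)
        return self.head(pooled.mean(dim=-1))   # (batch, 1) severity score

model = nn.Sequential(RepresentationBlock(), AssessmentBlock())
score = model(torch.randn(2, 1, 16000))      # two 1-second clips at 16 kHz
print(score.shape)                           # torch.Size([2, 1])
```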