Arousal
Active listening
Emotion recognition
Electroencephalography (EEG)
Musical
Speech recognition
Modality (human-computer interaction)
Computer science
Valence (chemistry)
Emotional valence
Psychology
Multimodality
Cognitive psychology
Artificial intelligence
Communication
Cognition
Social psychology
Art
Visual arts
World Wide Web
Neuroscience
Physics
Psychiatry
Quantum mechanics
Authors
Nattapong Thammasan, Ken-ichi Fukui, Masayuki Numao
Source
Journal: Cornell University - arXiv
Date: 2016-01-01
Citations: 3
Identifiers
DOI: 10.48550/arxiv.1611.10120
Abstract
Emotion estimation during music listening faces the challenge of capturing the variation of listeners' emotions over time. Recent years have witnessed attempts to exploit multimodality, fusing information from musical content with physiological signals captured from listeners, to improve the performance of emotion recognition. In this paper, we present a study of decision-level fusion of electroencephalogram (EEG) signals, which capture brainwaves at high temporal resolution, with musical features to recognize time-varying binary classes of arousal and valence. Our empirical results show that the fusion outperforms emotion recognition using the EEG modality alone, which suffers from inter-subject variability, suggesting the promise of multimodal fusion for improving the accuracy of music-emotion recognition.
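As a rough illustration of the decision-level fusion the abstract describes (a minimal sketch, not the authors' implementation), the snippet below trains one classifier per modality on synthetic stand-ins for EEG and musical features and combines their class posteriors with a weighted average. The feature dimensions, the SVM classifier, and the fusion weight w are all assumptions chosen for demonstration.

# Minimal sketch of decision-level fusion (not the paper's code): two
# classifiers are trained separately on EEG features and musical features,
# and their predicted class probabilities are averaged before deciding.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical per-window features: 200 windows of EEG features (e.g.,
# band-power values) and musical features (e.g., spectral statistics),
# with binary arousal labels (0 = low, 1 = high).
X_eeg = rng.normal(size=(200, 32))
X_music = rng.normal(size=(200, 20))
y = rng.integers(0, 2, size=200)

train, test = slice(0, 150), slice(150, 200)

# Train one classifier per modality.
clf_eeg = SVC(probability=True).fit(X_eeg[train], y[train])
clf_music = SVC(probability=True).fit(X_music[train], y[train])

# Decision-level fusion: weight each modality's posterior; w is a
# tunable hyperparameter (0.5 gives a simple average).
w = 0.5
p_fused = (w * clf_eeg.predict_proba(X_eeg[test])
           + (1 - w) * clf_music.predict_proba(X_music[test]))
y_pred = p_fused.argmax(axis=1)

accuracy = (y_pred == y[test]).mean()
print(f"fused accuracy on random data: {accuracy:.2f}")

Because the modalities are fused only at the decision stage, each classifier can use a feature representation and training pipeline suited to its own signal, which is one reason this scheme is attractive when one modality (here, EEG) is noisy across subjects.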