Perspective (graphics)
Electroencephalography (EEG)
Computer science
Artificial intelligence
Speech recognition
Emotion recognition
Pattern recognition (psychology)
Psychology
Neuroscience
Authors
Huan Liu,Tianyu Lou,Yuzhe Zhang,Yixiao Wu,Yang Xiao,Christian S. Jensen,Dalin Zhang
Identifier
DOI:10.1109/tim.2024.3369130
Abstract
Emotion, a fundamental trait of human beings, plays a pivotal role in shaping aspects of our lives, including our cognitive and perceptual abilities. Hence, emotion recognition is also central to human communication, decision-making, learning, and other activities. Emotion recognition from electroencephalography (EEG) signals has garnered substantial attention due to advantages such as noninvasiveness, high speed, and high temporal resolution; driven also by the complementarity between EEG and other physiological signals in revealing emotions, recent years have seen a surge in proposals for EEG-based multimodal emotion recognition (EMER). In short, EEG-based emotion recognition is a promising technology in medical measurement and health monitoring. While reviews exist that explore emotion recognition from multimodal physiological signals, they focus mostly on general combinations of modalities and do not emphasize studies that center on EEG as the fundamental modality. Furthermore, existing reviews take a methodology-agnostic perspective, concentrating primarily on the biomedical basis or experimental paradigms, and thereby give little attention to the methodological characteristics unique to this field. To address these gaps, we present a comprehensive review of current EMER studies, with a focus on multimodal machine learning models. The review is structured around three key aspects: multimodal feature representation learning, multimodal physiological signal fusion, and incomplete multimodal learning models. In doing so, the review sheds light on the advances and challenges in the field of EMER, offering researchers who are new to the field a holistic understanding. The review also aims to provide valuable insight that may guide new research in this exciting and rapidly evolving field.
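The abstract names multimodal physiological signal fusion as one of the review's three key aspects. As a purely illustrative sketch (not taken from the paper itself), the simplest such strategy, feature-level (early) fusion, concatenates per-modality feature vectors into one joint representation before classification; the function name and toy feature values below are assumptions for demonstration only.

```python
# Minimal sketch of feature-level (early) fusion for EMER-style pipelines:
# per-modality feature vectors are concatenated into a single joint vector.
# All names and values here are illustrative, not from the reviewed paper.

def fuse_features(eeg_feats, peripheral_feats):
    """Concatenate EEG and peripheral-signal features (feature-level fusion)."""
    return list(eeg_feats) + list(peripheral_feats)

# Toy feature vectors: e.g., EEG band powers plus heart rate / skin conductance.
eeg = [0.42, 0.13, 0.77]
peripheral = [72.0, 0.05]

fused = fuse_features(eeg, peripheral)
print(len(fused))  # 5-dimensional joint representation
```

A downstream classifier (e.g., an SVM or neural network) would then be trained on the fused vector; the review contrasts this with decision-level fusion and learned joint representations.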