Journal: Communications in Computer and Information Science · Date: 2023-01-01 · Pages: 445-455
Identifier
DOI:10.1007/978-981-99-1642-9_38
Abstract
Current advanced deep neural networks can greatly improve the performance of emotion recognition tasks in affective Brain-Computer Interfaces (aBCI). Basic human emotions can be induced while electroencephalographic (EEG) signals are simultaneously recorded. While data for basic, common emotions are relatively easy to collect, some complex emotions are low-resource in real life in terms of data size and label quality, which limits the utility of EEG-based emotion recognition models. To enhance the model's capacity to adapt to new emotions from few samples, we introduce a few-shot class-incremental deep learning model for emotion recognition. The proposed model consists of a graph convolutional network (GCN) and a linear classifier. By training the whole network on a base set in a preliminary stage and then fine-tuning the parameters of the linear classifier with very few labeled samples, the model can incrementally learn new types of emotions while preserving knowledge of the old ones. Our experimental results on the SEED-V dataset show that, even with very limited new-class samples, the fine-tuned pre-trained model achieves fairly good performance on a test set containing more emotion classes.
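To make the described pipeline concrete, the sketch below illustrates one plausible reading of the abstract: a GCN encoder over EEG channels, a linear classifier that is widened when new emotion classes arrive, and a fine-tuning step that updates only the classifier on a handful of labeled new-class samples. The channel count, feature dimension, layer sizes, base/new class split, and the learnable-adjacency graph convolution are all assumptions for illustration; the paper's actual architecture and hyperparameters are not given in the abstract.

```python
# Minimal sketch of a few-shot class-incremental EEG emotion model.
# Assumptions (not from the abstract): 62 EEG channels, 5 band-power features per
# channel, a learnable channel-adjacency matrix, and a 128-d embedding.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GCNEncoder(nn.Module):
    """Graph convolution over EEG channels with a learnable adjacency matrix."""

    def __init__(self, n_channels=62, in_dim=5, hid_dim=64, out_dim=128):
        super().__init__()
        # Hypothetical learnable connectivity; the paper may instead use a fixed graph.
        self.adj = nn.Parameter(torch.eye(n_channels) + 0.01 * torch.randn(n_channels, n_channels))
        self.fc1 = nn.Linear(in_dim, hid_dim)
        self.fc2 = nn.Linear(n_channels * hid_dim, out_dim)

    def forward(self, x):                       # x: (batch, n_channels, in_dim)
        a = torch.softmax(self.adj, dim=-1)     # row-normalized adjacency
        h = F.relu(self.fc1(a @ x))             # propagate features along the channel graph
        return self.fc2(h.flatten(1))           # (batch, out_dim) embedding


class IncrementalEmotionNet(nn.Module):
    """GCN encoder plus a linear classifier that can grow to accept new emotion classes."""

    def __init__(self, n_base_classes=3):
        super().__init__()
        self.encoder = GCNEncoder()
        self.classifier = nn.Linear(128, n_base_classes)

    def forward(self, x):
        return self.classifier(self.encoder(x))

    def add_classes(self, n_new):
        """Widen the classifier with rows for new classes while keeping old weights."""
        old = self.classifier
        new = nn.Linear(old.in_features, old.out_features + n_new)
        with torch.no_grad():
            new.weight[: old.out_features] = old.weight
            new.bias[: old.out_features] = old.bias
        self.classifier = new


def finetune_on_few_shots(model, few_shot_x, few_shot_y, epochs=50, lr=1e-3):
    """Fine-tune only the linear classifier on a handful of labeled new-class samples."""
    for p in model.encoder.parameters():        # freeze the pre-trained GCN encoder
        p.requires_grad = False
    opt = torch.optim.Adam(model.classifier.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(model(few_shot_x), few_shot_y)
        loss.backward()
        opt.step()
```

In this sketch, knowledge of the old emotions is preserved simply by copying the existing classifier rows and freezing the encoder; the paper may employ additional mechanisms (e.g., regularization or rehearsal of base-class samples), which the abstract does not specify.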