Keywords
Computer science, Electroencephalography (EEG), Decoding methods, Brain–computer interface, Motor imagery, Artificial intelligence, Pattern recognition (psychology), Feature (linguistics), Feature extraction, Fusion mechanism, Speech recognition, Fusion, Philosophy, Psychiatry, Lipid bilayer fusion, Telecommunications, Linguistics, Psychology
Authors
Dongrui Gao, Wen Yang, Pengrui Li, Shihong Liu, Tiejun Liu, Manqing Wang, Yongqing Zhang
Identifier
DOI: 10.1016/j.asoc.2023.111129
Abstract
The decoding of motor imagery (MI) electroencephalogram (EEG) signals is an essential component of the brain–computer interface (BCI), which can help patients with motor impairments communicate directly with the outside world through assistive devices. The key to motor imagery electroencephalogram (MI-EEG) classification is to extract multiple temporal, spatial, and spectral features in order to obtain more comprehensive and representative information. However, current deep learning methods fail to fully consider the depth of temporal features and multi-spectral knowledge in EEG, and often ignore the temporal or spectral dependencies in MI-EEG. In addition, the lack of effective feature fusion methods can lead to information redundancy, which degrades decoding performance. To solve these problems, this paper proposes a novel MI-EEG decoding method, a multi-scale feature fusion network based on an attention mechanism (MSFF-SENet). First, the multi-scale spatio-temporal module (MS-STM) and the multi-scale temporal module (MSTM) extract spatial and high-dimensional temporal features from the original signal, and the power spectral density (PSD) convolution module (PSD-Conv module) acquires the multi-spectral features of the MI-EEG signal. Second, the feature fusion module fuses the spatio-temporal and multi-spectral features to generate integrated feature maps and establish dependencies between the different features. Finally, a visual analysis of the results explains the neural activity patterns of the various motor imagery tasks in different frequency ranges and reveals the potential relationship between body movement and related changes in brain activity. The experimental results show that the classification accuracy of the model on the BCI Competition IV 2a (BCI IV 2a) and High Gamma (HGD) datasets is 85.37% and 96.60%, respectively, outperforming state-of-the-art methods.
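The abstract names a multi-scale temporal module (MSTM) without giving its layer configuration. A common way to realize multi-scale temporal feature extraction on EEG is to run parallel 1-D convolutions with different kernel lengths over the time axis and concatenate the resulting feature maps; the sketch below illustrates that generic pattern only, not the authors' exact architecture (all kernel sizes and channel counts are illustrative assumptions).

```python
import torch
import torch.nn as nn

class MultiScaleTemporalConv(nn.Module):
    """Parallel temporal convolutions with different kernel lengths.

    Input:  (batch, 1, channels, time) raw EEG
    Output: (batch, n_branches * out_ch, channels, time)
    Kernel sizes and channel counts are illustrative, not from the paper.
    """
    def __init__(self, out_ch=8, kernel_sizes=(15, 31, 63)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                # Convolve along the time axis only; padding keeps the length.
                nn.Conv2d(1, out_ch, kernel_size=(1, k), padding=(0, k // 2)),
                nn.BatchNorm2d(out_ch),
                nn.ELU(),
            )
            for k in kernel_sizes
        ])

    def forward(self, x):
        # Concatenate the temporal scales along the feature-channel dimension.
        return torch.cat([branch(x) for branch in self.branches], dim=1)

# Example: a batch of 4 trials, 22 EEG channels, 1000 time samples (BCI IV 2a-like).
x = torch.randn(4, 1, 22, 1000)
print(MultiScaleTemporalConv()(x).shape)  # torch.Size([4, 24, 22, 1000])
```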
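The PSD-Conv module operates on power spectral density estimates of the MI-EEG signal. As a rough illustration of the kind of multi-spectral input such a module might consume, the following sketch computes Welch PSD estimates and averages them over the classical MI bands (mu and beta); the band edges and Welch parameters are assumptions, not the paper's configuration.

```python
import numpy as np
from scipy.signal import welch

def band_power_features(eeg, fs=250, bands=((8, 13), (13, 30))):
    """Average Welch PSD within each frequency band, per EEG channel.

    eeg:   (channels, time) array holding one MI trial
    fs:    sampling rate in Hz (250 Hz matches BCI Competition IV 2a)
    bands: (low, high) Hz tuples; mu and beta here, chosen for illustration
    Returns a (channels, n_bands) feature matrix.
    """
    freqs, psd = welch(eeg, fs=fs, nperseg=fs)  # psd: (channels, n_freqs)
    feats = []
    for low, high in bands:
        mask = (freqs >= low) & (freqs < high)
        feats.append(psd[:, mask].mean(axis=1))
    return np.stack(feats, axis=1)

trial = np.random.randn(22, 1000)        # 22 channels, 4 s at 250 Hz
print(band_power_features(trial).shape)  # (22, 2)
```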
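The model name, MSFF-SENet, suggests a squeeze-and-excitation (SE) style attention over the fused feature maps, which fits the abstract's claim that fusion suppresses information redundancy by establishing dependencies between features. Below is a minimal sketch of SE-based fusion, assuming the spatio-temporal and spectral streams have already been projected to the same spatial shape and are concatenated along the channel axis; the reduction ratio and this fusion arrangement are assumptions, not the published design.

```python
import torch
import torch.nn as nn

class SEFusion(nn.Module):
    """Concatenate two feature streams and reweight channels with SE attention."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, spatio_temporal, spectral):
        # (batch, C1, H, W) + (batch, C2, H, W) -> (batch, C1+C2, H, W)
        x = torch.cat([spatio_temporal, spectral], dim=1)
        # Squeeze: global average pool each channel to a single scalar.
        s = x.mean(dim=(2, 3))
        # Excite: per-channel gates damp redundant channels, keep informative ones.
        w = self.fc(s).unsqueeze(-1).unsqueeze(-1)
        return x * w

st = torch.randn(4, 24, 22, 125)  # spatio-temporal features (shapes illustrative)
sp = torch.randn(4, 8, 22, 125)   # spectral features projected to matching size
print(SEFusion(channels=32)(st, sp).shape)  # torch.Size([4, 32, 22, 125])
```

The per-channel gates learned by the SE block let the network downweight redundant channels from either stream, which is one plausible reading of how attention-based fusion addresses the redundancy problem the abstract raises.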