Electroencephalography
Convolutional neural network
Computer science
Decoding methods
Artificial intelligence
Neural decoding
Speech recognition
Pattern recognition (psychology)
Neuroscience
Psychology
Algorithm
Authors
Ke Liu,Xin Xing,Tao Yang,Zhuliang Yu,Bin Xiao,Guoyin Wang,Wei Wu
Identifier
DOI:10.1109/jbhi.2025.3546288
Abstract
Accurate decoding of electroencephalogram (EEG) signals has become increasingly important for brain-computer interfaces (BCIs). Specifically, motor imagery and motor execution (MI/ME) tasks enable the control of external devices by decoding EEG signals recorded during imagined or real movements. However, accurately decoding MI/ME signals remains challenging due to the limited utilization of temporal information and ineffective feature selection methods. This paper introduces DMSACNN, an end-to-end deep multiscale attention convolutional neural network for MI/ME-EEG decoding. DMSACNN incorporates a deep multiscale temporal feature extraction module to capture temporal features at various levels. These features are then processed by a spatial convolutional module to extract spatial features. Finally, a local and global feature fusion attention module combines local and global information and extracts the most discriminative spatiotemporal features. DMSACNN achieves accuracies of 78.20%, 96.34%, and 70.90% in hold-out analysis on the BCI-IV-2a, High Gamma, and OpenBMI datasets, respectively, outperforming most state-of-the-art methods. These results highlight the potential of DMSACNN for robust BCI applications. The proposed method provides a valuable solution for improving the accuracy of MI/ME-EEG decoding, paving the way for more efficient and reliable BCI systems. The source code for DMSACNN is available at https://github.com/xingxin-99/DMSANet.git.
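The abstract describes a three-stage pipeline: multiscale temporal convolutions, a spatial convolution over electrodes, and an attention module that fuses local and global features before classification. The following is a minimal PyTorch-style sketch of that kind of architecture; all module names, kernel sizes, channel counts, and the squeeze-excitation-style attention are illustrative assumptions, not the authors' actual implementation (which is in the linked repository).

```python
# Hypothetical sketch of a DMSACNN-style pipeline as described in the abstract.
# Kernel sizes, channel counts, and the attention formulation are assumptions.
import torch
import torch.nn as nn


class MultiScaleTemporalBlock(nn.Module):
    """Extract temporal features at several kernel scales and concatenate them."""
    def __init__(self, in_ch=1, out_ch=8, kernel_sizes=(15, 31, 63)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=(1, k), padding=(0, k // 2), bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ELU(),
            )
            for k in kernel_sizes
        ])

    def forward(self, x):  # x: (batch, 1, electrodes, time)
        return torch.cat([branch(x) for branch in self.branches], dim=1)


class SpatialConvBlock(nn.Module):
    """Collapse the electrode dimension with a depthwise spatial convolution."""
    def __init__(self, channels, n_electrodes):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=(n_electrodes, 1),
                      groups=channels, bias=False),
            nn.BatchNorm2d(channels),
            nn.ELU(),
            nn.AvgPool2d((1, 8)),
            nn.Dropout(0.5),
        )

    def forward(self, x):
        return self.block(x)


class FusionAttention(nn.Module):
    """Re-weight local feature maps using a global pooled descriptor."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ELU(),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):  # x: (batch, channels, 1, time')
        g = x.mean(dim=(2, 3))                       # global descriptor per channel
        w = self.fc(g).unsqueeze(-1).unsqueeze(-1)   # channel-wise attention weights
        return x * w                                 # fuse global weighting with local maps


class DMSACNNSketch(nn.Module):
    def __init__(self, n_electrodes=22, n_classes=4, samples=1000):
        super().__init__()
        self.temporal = MultiScaleTemporalBlock()
        channels = 8 * 3  # three temporal branches of 8 feature maps each
        self.spatial = SpatialConvBlock(channels, n_electrodes)
        self.attention = FusionAttention(channels)
        # Infer the flattened feature size with a dummy forward pass.
        with torch.no_grad():
            feat = self.attention(self.spatial(self.temporal(
                torch.zeros(1, 1, n_electrodes, samples))))
        self.classifier = nn.Linear(feat.flatten(1).shape[1], n_classes)

    def forward(self, x):
        x = self.attention(self.spatial(self.temporal(x)))
        return self.classifier(x.flatten(1))


if __name__ == "__main__":
    model = DMSACNNSketch()
    logits = model(torch.randn(2, 1, 22, 1000))  # 2 trials, 22 electrodes, 1000 samples
    print(logits.shape)                          # torch.Size([2, 4])
```

The electrode and sample counts above match the BCI-IV-2a recording format (22 EEG channels) only as an example; the other datasets mentioned in the abstract would require different input dimensions.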