Electroencephalography (EEG)
Computer science
Dual (grammatical number)
Channel (broadcasting)
Speech recognition
Telecommunications
Neuroscience
Psychology
Literature
Art
Authors
Jun Lu, Ruihan Cai, Zhichao Guo, Qiyu Yang, Kan Xie, Shengli Xie
Identifier
DOI:10.1109/jiot.2024.3368333
Abstract
Long-term and mobile healthcare applications have increased the use of single-channel electroencephalogram (EEG) systems. However, electromyography (EMG) artifacts often contaminate EEG recordings. The lack of spatial correlation, the diversity of waveforms, and the time-varying overlap make eliminating EMG interference from a single-channel EEG difficult. To overcome these challenges, we propose DSATCN, a dual-stream learning model that exploits multi-level, multi-scale temporal dependencies across different frequency bands to perform robust EEG reconstruction. The first DSATCN stream extracts low-frequency-band EEG features with reduced EMG interference. The second stream selectively combines the high-level features of the first stream with its own low-level features to refine the EEG reconstruction across the entire frequency band, lowering the risk of overfitting. Both streams employ a novel attention-based temporal convolution network (ATCN) to adaptively separate the overlapping features of EEGs and EMGs. The ATCN comprises multiple stages that represent temporal dependencies at different levels. Each stage consists of multi-scale dilated convolutions and fast Fourier transform modulations, which efficiently enlarge the receptive fields and establish global self-attention mechanisms. The stages' outputs are merged by relaxed attentional feature fusion modules, which bridge the semantic gaps between features at different levels. Extensive experiments on three semi-simulated datasets containing 318,700 samples show that the proposed model significantly outperforms existing methods in EEG reconstruction accuracy, while its computational cost meets the criteria for real-time processing. Our code is available at https://github.com/BaenRH/DSATCN.
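The dilated convolutions mentioned in the abstract are the standard mechanism by which a temporal convolution network widens its receptive field exponentially with depth. The sketch below is not the authors' DSATCN code (see their repository for that); it is a minimal pure-Python illustration, with an assumed averaging kernel and an assumed dilation schedule of 1, 2, 4, of how stacked causal dilated convolutions cover progressively longer spans of a single-channel signal.

```python
# Hypothetical sketch (not the DSATCN implementation): a causal dilated
# 1-D convolution, the building block that lets each TCN stage widen its
# receptive field without adding parameters. Kernel and dilations assumed.

def dilated_causal_conv1d(x, kernel, dilation):
    """Convolve sequence x with `kernel`; tap i looks (i * dilation)
    steps into the past, and positions before t=0 are zero-padded."""
    out = []
    for t in range(len(x)):
        acc = 0.0
        for i, w in enumerate(kernel):
            j = t - i * dilation  # causal: only past samples contribute
            if j >= 0:
                acc += w * x[j]
        out.append(acc)
    return out

# Stacking stages with dilations 1, 2, 4 doubles the span per stage:
# with kernel size k and depth d, the receptive field is
# 1 + (k - 1) * (2**d - 1) samples, i.e. 8 here for k=2, d=3.
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
h = x
for d in (1, 2, 4):  # assumed multi-scale dilation schedule
    h = dilated_causal_conv1d(h, [0.5, 0.5], d)
```

After the three stages, each output sample is a weighted average of the 8 most recent inputs, which is how a shallow stack can model the long temporal dependencies the abstract refers to.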