Brain-computer interface
Computer science
Motor imagery
Interface (matter)
Time-frequency analysis
Computer vision
Artificial intelligence
Computer graphics (images)
Human-computer interaction
Electroencephalography
Neuroscience
Psychology
Filter (signal processing)
Maximum bubble pressure method
Bubble
Parallel computing
Authors
Guoyang Liu, Rui Zhang, Tian Lan, Weidong Zhou
Identifier
DOI: 10.1109/jbhi.2025.3536212
Abstract
Motor Imagery Brain-Computer Interfaces (MI-BCIs) have shown considerable promise for applications in neural rehabilitation. However, improving the practicality and interpretability of MI-BCIs remains a critical challenge. Unlike previous methods that generally focus on the spatial, frequency, or temporal domain with coarse-grained segmentation schemes, this study proposes a novel fine-grained spatial-frequency-time (FGSFT) framework aiming to enhance the efficiency and reliability of MI-BCIs. Multi-channel MI EEG recordings are first processed through multiscale time-frequency segmentation and spatial segmentation schemes, yielding fine-grained spatial-frequency-time segments (SFTSs). The key SFTSs are then selected with a tailored wrapper-based feature selection approach. Discriminative MI EEG features are extracted using a divergence-based common spatial pattern algorithm with intra-class regularization and classified using an efficient linear support vector machine (SVM). The proposed framework was evaluated on the BCI IV IIa and SDU-MI datasets, demonstrating state-of-the-art performance in terms of information transfer rate (ITR). Meanwhile, the proposed spatial segmentation strategy significantly improves the performance of MI-BCIs when a larger number of electrodes is used. Additionally, the fine-grained Motor Imagery Time-Frequency Reaction Map (MI-TFRM) and time-frequency topographical map can be obtained with the proposed framework, enabling visualization of the subject-specific dynamic neural processes during motor imagery tasks and facilitating the design of personalized MI-BCIs. The FGSFT framework significantly advances the accuracy, ITR, and interpretability of MI-BCIs, paving the way for future neuroscientific research and clinical applications in neural rehabilitation and assistive technologies.
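As a rough orientation to the kind of pipeline the abstract describes, the sketch below is a minimal, hypothetical Python illustration of band-filtered CSP feature extraction followed by a linear SVM, together with the standard Wolpaw formula used to report information transfer rate. It relies on synthetic data and off-the-shelf tools (SciPy, MNE, scikit-learn); the single mu-band segment, Ledoit-Wolf-regularized CSP, binary task, and 3-second trials are assumptions for demonstration only and do not reproduce the paper's fine-grained SFTS segmentation, wrapper-based selection, or divergence-based CSP with intra-class regularization.

import numpy as np
from scipy.signal import butter, filtfilt
from mne.decoding import CSP
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score


def bandpass(x, lo, hi, fs, order=4):
    # Zero-phase band-pass filter over the last axis of (trials, channels, samples).
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x, axis=-1)


def itr_bits_per_min(n_classes, acc, trial_seconds):
    # Standard Wolpaw information transfer rate, scaled to bits per minute.
    if acc <= 1.0 / n_classes:
        return 0.0
    if acc >= 1.0:
        bits = np.log2(n_classes)
    else:
        bits = (np.log2(n_classes) + acc * np.log2(acc)
                + (1 - acc) * np.log2((1 - acc) / (n_classes - 1)))
    return bits * 60.0 / trial_seconds


if __name__ == "__main__":
    fs = 250                                    # sampling rate of BCI IV IIa (Hz)
    rng = np.random.default_rng(0)
    X = rng.standard_normal((80, 22, fs * 3))   # synthetic stand-in: 80 trials, 22 channels, 3 s
    y = rng.integers(0, 2, size=80)             # binary MI task (e.g., left vs. right hand)

    X_mu = bandpass(X, 8.0, 12.0, fs)           # one illustrative mu-band segment

    # Ordinary regularized CSP plus a linear SVM; the paper instead applies a
    # divergence-based CSP with intra-class regularization to the selected SFTSs.
    clf = make_pipeline(CSP(n_components=4, reg="ledoit_wolf", log=True),
                        LinearSVC(C=1.0))
    acc = cross_val_score(clf, X_mu, y, cv=5).mean()
    print(f"accuracy={acc:.3f}  ITR={itr_bits_per_min(2, acc, 3.0):.2f} bits/min")

In the actual FGSFT framework, this filter-then-CSP step would be repeated over many fine-grained spatial-frequency-time segments, with only the key segments retained by the wrapper-based selector before classification.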