Computer science
Transfer learning
Artificial intelligence
Machine learning
Robustness (evolution)
Motor imagery
Task (project management)
Multi-task learning
Pattern recognition (psychology)
Electroencephalography
Brain-computer interface
Psychiatry
Gene
Biochemistry
Economics
Chemistry
Management
Psychology
Authors
Yuting Xie, Kun Wang, Jiayuan Meng, Yue Jin, Lin Meng, Weibo Yi, Tzyy‐Ping Jung, Minpeng Xu, Ming Dong
Identifier
DOI:10.1088/1741-2552/acfe9c
Abstract
Objective. Deep learning (DL) models have proven effective at decoding motor imagery (MI) signals in electroencephalogram (EEG) data. However, the success of DL models relies heavily on large amounts of training data, whereas EEG data collection is laborious and time-consuming. Recently, cross-dataset transfer learning has emerged as a promising approach to meeting the data requirements of DL models. Nevertheless, transferring knowledge across datasets involving different MI tasks remains a significant challenge, limiting the full utilization of valuable data resources.

Approach. This study proposes a pre-training-based cross-dataset transfer learning method inspired by hard parameter sharing in multi-task learning. Datasets with distinct MI paradigms are treated as different tasks and classified with shared feature-extraction layers and individual task-specific layers, allowing cross-dataset classification with one unified model. Pre-training and fine-tuning are then employed to transfer knowledge across datasets. We also designed four fine-tuning schemes and conducted extensive experiments on them.

Main results. Compared with models trained without pre-training, pre-trained models achieved a maximum increase in accuracy of 7.76%. Moreover, when only limited training data were available, the pre-training method improved DL model accuracy by up to 27.34%. The experiments also revealed that pre-trained models converge faster and are markedly more robust: training time per subject was reduced by up to 102.83 s, and the variance of classification accuracy decreased by up to 75.22%.

Significance. This study is the first comprehensive investigation of cross-dataset transfer learning between two datasets with different MI tasks. The proposed pre-training method requires only minimal fine-tuning data when applying DL models to new MI paradigms, making MI-based brain-computer interfaces more practical and user-friendly.
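The abstract does not give implementation details, so the following is only a minimal PyTorch sketch of what hard parameter sharing across two MI datasets could look like: a shared feature extractor with one classification head per dataset, pre-trained on a source dataset and then fine-tuned on a target dataset. The backbone layers, class counts (4 source / 2 target classes), learning rates, and the choice to freeze the shared layers during fine-tuning are all illustrative assumptions, not the authors' exact configuration or one of their four specific fine-tuning schemes.

```python
# Minimal sketch (not the authors' code) of hard parameter sharing across
# MI datasets: a shared feature extractor plus one task-specific head per
# dataset. Layer sizes, channel counts, and class counts are assumptions.
import torch
import torch.nn as nn

class SharedMIEncoder(nn.Module):
    """Shared feature-extraction layers (hypothetical EEGNet-like backbone)."""
    def __init__(self, n_channels=22, n_samples=1000, n_features=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(1, 64), padding=(0, 32), bias=False),
            nn.BatchNorm2d(16),
            nn.Conv2d(16, 32, kernel_size=(n_channels, 1), groups=16, bias=False),
            nn.BatchNorm2d(32),
            nn.ELU(),
            nn.AvgPool2d((1, 8)),
            nn.Flatten(),
        )
        with torch.no_grad():  # infer the flattened size from a dummy input
            flat = self.net(torch.zeros(1, 1, n_channels, n_samples)).shape[1]
        self.proj = nn.Linear(flat, n_features)

    def forward(self, x):           # x: (batch, 1, channels, samples)
        return self.proj(self.net(x))

class MultiDatasetMIModel(nn.Module):
    """One unified model: shared encoder + per-dataset classification heads."""
    def __init__(self, n_classes_per_dataset):
        super().__init__()
        self.encoder = SharedMIEncoder()
        self.heads = nn.ModuleDict({
            name: nn.Linear(64, n_cls)
            for name, n_cls in n_classes_per_dataset.items()
        })

    def forward(self, x, dataset):  # route features to the dataset's own head
        return self.heads[dataset](self.encoder(x))

model = MultiDatasetMIModel({"source": 4, "target": 2})

def train_step(batch, labels, dataset, optimizer, criterion=nn.CrossEntropyLoss()):
    optimizer.zero_grad()
    loss = criterion(model(batch, dataset), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# 1) Pre-train: update the encoder and source head on abundant source data.
pretrain_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
# for batch, labels in source_loader: train_step(batch, labels, "source", pretrain_opt)

# 2) Fine-tune: here the shared layers are frozen and only the target head is
#    trained; this is just one of several plausible fine-tuning schemes.
for p in model.encoder.parameters():
    p.requires_grad = False
finetune_opt = torch.optim.Adam(model.heads["target"].parameters(), lr=1e-3)
# for batch, labels in target_loader: train_step(batch, labels, "target", finetune_opt)
```

Because the feature-extraction layers are shared, gradients from the source dataset shape the same backbone that the target task later reuses, which is what lets a dataset with a different MI paradigm (and a different number of classes) still benefit from the pre-training stage while keeping its own classification head.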