Computer science
Electroencephalography
Artificial intelligence
Feature (linguistics)
Similarity (geometry)
Set (abstract data type)
Pattern recognition (psychology)
Metric (data warehouse)
Dataset
Data mining
Image (mathematics)
Psychology
Linguistics
Philosophy
Psychiatry
Programming language
Authors
Rongrong Fu,Yaodong Wang,Chengcheng Jia
Identifier
DOI:10.1016/j.eswa.2022.117386
Abstract
• A novel hybrid model of broad-deep networks is proposed for data augmentation.
• Highly similar features can be merged to generate new augmented features.
• The method solves the problem of insufficient authenticity of EEG generated by GANs.

Decades after data augmentation was first proposed for brain-computer interfaces (BCI), its authenticity and performance still fall short of reasonable requirements, which is directly related to the fact that existing augmentation methods do not provide real electroencephalograph (EEG) trials. Here we show how to generate numerous authentic EEG trials from the original calibrated EEG using a novel hybrid model of broad-deep networks, which eliminates the lack of authenticity in data generated by GANs and other methods. First, we design an EEG-evoking experiment with a complex boundary avoidance task to collect EEG from different subjects. This experiment effectively highlights the differences in EEG features across subjects, which makes the results more reliable when the hybrid model is used to measure similarity. The proposed hybrid model of broad-deep networks measures the similarity between subjects, and the EEG features of the two subjects with the highest similarity are combined to generate an augmented feature set. While preserving the authenticity of the EEG, the augmented feature set is significantly better than the original feature set in data dimension and quality. Finally, we verify the classification performance of the augmented feature set; the results show that the proposed method can effectively generate real EEG data and raise classification performance to a highly reliable level for complex boundary avoidance tasks under limited-EEG conditions. In addition, the model shows clear advantages over traditional deep learning methods in training time and memory overhead.
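To make the similarity-then-merge idea in the abstract concrete, the sketch below selects the two most similar subjects and combines their EEG feature trials into an augmented set. It is a minimal illustration only: the paper's broad-deep hybrid network is not reproduced here, a plain cosine similarity between subject-mean feature vectors stands in for it, the merge step is a simple trial-wise average, and all names (subject_features, trial counts, feature dimension) are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch, assuming per-subject EEG feature matrices of shape
# (n_trials, n_features). Cosine similarity replaces the paper's
# broad-deep similarity measure; merging is a simple pairwise average.
import numpy as np


def most_similar_pair(subject_features):
    """Return indices of the two subjects whose mean feature vectors
    are most similar (cosine similarity used as a stand-in)."""
    means = np.stack([f.mean(axis=0) for f in subject_features])
    normed = means / np.linalg.norm(means, axis=1, keepdims=True)
    sim = normed @ normed.T
    np.fill_diagonal(sim, -np.inf)  # ignore self-similarity
    i, j = np.unravel_index(np.argmax(sim), sim.shape)
    return i, j


def merge_features(feats_a, feats_b):
    """Build an augmented feature set: keep both subjects' trials and
    add their trial-wise averages as new, larger feature pool."""
    n = min(len(feats_a), len(feats_b))
    blended = 0.5 * (feats_a[:n] + feats_b[:n])
    return np.vstack([feats_a, feats_b, blended])


# Toy usage: 4 subjects, 20 trials each, 64-dimensional EEG features.
rng = np.random.default_rng(0)
subject_features = [rng.standard_normal((20, 64)) for _ in range(4)]
i, j = most_similar_pair(subject_features)
augmented = merge_features(subject_features[i], subject_features[j])
print(augmented.shape)  # (60, 64): larger than either original set
```

The augmented set would then feed the downstream classifier in place of a single subject's limited calibration data; how the real method weights or transforms the merged features is specific to the paper and not assumed here.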