Keywords
Autoencoder
Artificial intelligence
Computer science
Electroencephalography (EEG)
Pattern recognition (psychology)
Convolutional neural network
Deep learning
Encoder
Speech recognition
Feature extraction
Feature learning
Transformer
Machine learning
Supervised learning
Recurrent neural network
Artificial neural network
Psychology
Engineering
Neuroscience
Voltage
Electrical engineering
Operating system
Authors
Ruimin Peng,Changming Zhao,Yifan Xu,Jiang Jun,Guangtao Kuang,Jianbo Shao,Dongrui Wu
Identifier
DOI:10.1109/icassp49357.2023.10097183
Abstract
Electroencephalogram (EEG) based seizure subtype classification plays an important role in clinical diagnostics. However, existing deep learning approaches face two challenges in such applications: 1) convolutional or recurrent neural network based models have difficulty learning long-term dependencies; and 2) there are not enough labeled seizure subtype data for training such models. This paper proposes Wavelet2Vec, a Transformer-based self-supervised learning model for EEG-based seizure subtype classification, which copes well with these two challenges. Filter bank analysis is first employed to improve the Vision Transformer into a Wavelet Transformer (WaT) encoder, which generates multi-grained feature representations of EEG signals. Then, self-supervised learning is used to pre-train WaT from unlabeled EEG data. Experiments on two public datasets demonstrated that Wavelet2Vec outperformed several other supervised and self-supervised models in cross-subject seizure subtype classification.
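The abstract does not detail how the filter bank analysis is implemented, but the general idea of decomposing an EEG signal into multiple frequency bands before feeding it to an encoder can be illustrated with a minimal sketch. The band edges, filter order, and sampling rate below are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def filter_bank(eeg, fs=250.0,
                bands=((0.5, 4), (4, 8), (8, 13), (13, 30), (30, 45))):
    """Decompose multi-channel EEG into sub-band signals.

    eeg: array of shape (n_channels, n_samples).
    Returns an array of shape (n_bands, n_channels, n_samples),
    one band-passed copy of the signal per frequency band.
    Band edges follow the conventional delta/theta/alpha/beta/gamma
    split -- an assumption for illustration, not the paper's choice.
    """
    sub_bands = []
    for lo, hi in bands:
        # 4th-order Butterworth band-pass in second-order-section form
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        # Zero-phase filtering so band components stay time-aligned
        sub_bands.append(sosfiltfilt(sos, eeg, axis=-1))
    return np.stack(sub_bands)

# Synthetic example: 19 channels, 4 seconds at 250 Hz
rng = np.random.default_rng(0)
eeg = rng.standard_normal((19, 1000))
feats = filter_bank(eeg)
print(feats.shape)  # (5, 19, 1000)
```

Each sub-band copy could then be tokenized separately, giving an encoder access to both coarse (low-frequency) and fine (high-frequency) views of the signal, which is one plausible reading of the "multi-grained feature representations" mentioned above.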