Electroencephalography (EEG)
Computer science
Sleep (system call)
Artificial intelligence
Speech recognition
Sleep stage
Psychology
Polysomnography
Neuroscience
Operating system
Authors
Tianyou Yu,Xinxin Hu,Yanbin He,Wei Wu,Zhenghui Gu,Zhuliang Yu,Yuanqing Li,Fei Wang,Jun Xiao
Identifiers
DOI:10.1109/tbme.2025.3561228
Abstract
Deep learning-based methods for automatic sleep staging offer an efficient and objective alternative to costly manual scoring. However, their reliance on extensive labeled datasets and the challenge of generalizing to new subjects and datasets limit their widespread adoption. Self-supervised learning (SSL) has emerged as a promising solution to these issues by learning transferable representations from unlabeled data. This study demonstrates the effectiveness of SSL in automated sleep staging, using a customized SSL approach to train a multi-view sleep staging model. The model comprises a temporal-view feature encoder for raw EEG signals and a spectral-view feature encoder for time-frequency features. During pretraining, we incorporate a cross-view contrastive loss in addition to a contrastive loss for each view, learning complementary features while enforcing consistency between views and thereby enhancing the transferability and robustness of the learned features. A dynamic weighting algorithm balances the learning speed of the different loss components. After pretraining, these feature encoders, combined with a sequence encoder and a linear classifier, perform sleep staging once finetuned with labeled data. Evaluation on three publicly available datasets shows that finetuning the entire SSL-pretrained model achieves accuracy competitive with state-of-the-art methods: 86.4%, 83.8%, and 85.5% on the SleepEDF-20, SleepEDF-78, and MASS datasets, respectively. Notably, our framework achieves near-equivalent performance using only 5% of the labeled data compared with fully supervised training, showcasing SSL's potential to improve the efficiency of automated sleep staging.
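The abstract does not give the exact loss formulation, but the combination of per-view contrastive losses with a cross-view consistency term can be sketched with a standard InfoNCE-style objective. The sketch below is a minimal NumPy illustration under assumed conventions (the function names `info_nce` and `multi_view_loss`, the temperature value, and the fixed weights are all hypothetical; the paper itself balances the loss terms with a dynamic weighting algorithm):

```python
import numpy as np

def info_nce(z_a, z_b, temperature=0.1):
    """InfoNCE loss where row i of z_a is the positive pair of row i of z_b."""
    # L2-normalize embeddings so the dot product is cosine similarity
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature            # (N, N) similarity matrix
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))            # positives lie on the diagonal

def multi_view_loss(z_t1, z_t2, z_s1, z_s2, w=(1.0, 1.0, 1.0)):
    """Per-view contrastive losses plus a cross-view consistency term.

    z_t1/z_t2: temporal-view embeddings of two augmentations of each epoch.
    z_s1/z_s2: spectral-view embeddings of two augmentations of each epoch.
    w: illustrative fixed weights (the paper adapts them dynamically).
    """
    l_temporal = info_nce(z_t1, z_t2)   # contrast within the temporal view
    l_spectral = info_nce(z_s1, z_s2)   # contrast within the spectral view
    l_cross    = info_nce(z_t1, z_s1)   # align temporal and spectral views
    return w[0] * l_temporal + w[1] * l_spectral + w[2] * l_cross
```

A correctly learned encoder pair should drive the cross-view term down by mapping the temporal and spectral embeddings of the same epoch close together while pushing apart embeddings of different epochs in the same batch.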