Computer science
Sleep staging
Artificial intelligence
Polysomnography
Convolutional neural network
Encoder
Deep learning
Electroencephalography
Pattern recognition
Machine learning
Neuroscience
Authors
Jiquan Wang, Sha Zhao, Haiteng Jiang, Yangxuan Zhou, Zhenghe Yu, Tao Li, Shijian Li, Gang Pan
Identifier
DOI: 10.1109/jbhi.2024.3426939
Abstract
Sleep staging, the classification of sleep epochs into different sleep stages, is essential for sleep assessment and plays an important role in disease diagnosis. Polysomnography (PSG), which consists of multiple physiological signals, e.g., electroencephalogram (EEG) and electrooculogram (EOG), is the gold standard for sleep staging. Although existing studies have achieved high performance on automatic sleep staging from PSG, some limitations remain: 1) they focus on local features but ignore global features within each sleep epoch, and 2) they ignore the cross-modality context relationship between EEG and EOG. In this paper, we propose CareSleepNet, a novel hybrid deep learning network for automatic sleep staging from PSG recordings. Specifically, we first design a multi-scale Convolutional-Transformer Epoch Encoder to encode both local salient wave features and global features within each sleep epoch. Then, we devise a Cross-Modality Context Encoder based on a co-attention mechanism to model the cross-modality context relationship between the different modalities. Next, we use a Transformer-based Sequence Encoder to capture the sequential relationships among sleep epochs. Finally, the learned feature representations are fed into an epoch-level classifier to determine the sleep stages. We collected a private sleep dataset, SSND, and used two public datasets, Sleep-EDF-153 and ISRUC, to evaluate the performance of CareSleepNet. The experimental results show that CareSleepNet achieves state-of-the-art performance on all three datasets. Moreover, we conduct ablation studies and attention visualizations to demonstrate the effectiveness of each module and to analyze the influence of each modality.
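The abstract describes a four-stage pipeline: per-modality epoch encoding with multi-scale convolutions plus a Transformer, co-attention between EEG and EOG features, a Transformer sequence encoder across neighbouring epochs, and an epoch-level classifier. The following minimal PyTorch sketch only illustrates that data flow; all module names, dimensions, kernel sizes, and the 30-s / 100 Hz epoch assumption are illustrative guesses, not the authors' implementation.

import torch
import torch.nn as nn


class EpochEncoder(nn.Module):
    """Encode one 30-s epoch of a single modality: local waves via multi-scale
    convolutions, global context within the epoch via a Transformer encoder."""
    def __init__(self, d_model=128):
        super().__init__()
        # Two parallel conv branches with different kernel sizes ("multi-scale").
        self.small = nn.Conv1d(1, d_model // 2, kernel_size=50, stride=6, padding=25)
        self.large = nn.Conv1d(1, d_model // 2, kernel_size=400, stride=50, padding=200)
        self.pool = nn.AdaptiveAvgPool1d(64)  # unify the two branch lengths
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, x):                          # x: (batch, 1, samples)
        feats = torch.cat([self.pool(self.small(x)), self.pool(self.large(x))], dim=1)
        feats = feats.transpose(1, 2)              # (batch, 64, d_model)
        return self.transformer(feats).mean(dim=1)  # (batch, d_model)


class CoAttention(nn.Module):
    """Cross-modality context: each modality attends to the other."""
    def __init__(self, d_model=128):
        super().__init__()
        self.eeg_to_eog = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.eog_to_eeg = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)

    def forward(self, eeg, eog):                   # (batch, epochs, d_model) each
        eeg_ctx, _ = self.eeg_to_eog(eeg, eog, eog)
        eog_ctx, _ = self.eog_to_eeg(eog, eeg, eeg)
        return torch.cat([eeg_ctx, eog_ctx], dim=-1)  # (batch, epochs, 2*d_model)


class CareSleepNetSketch(nn.Module):
    """Hypothetical end-to-end assembly of the four stages named in the abstract."""
    def __init__(self, d_model=128, n_stages=5):
        super().__init__()
        self.eeg_encoder = EpochEncoder(d_model)
        self.eog_encoder = EpochEncoder(d_model)
        self.co_attention = CoAttention(d_model)
        layer = nn.TransformerEncoderLayer(2 * d_model, nhead=4, batch_first=True)
        self.sequence_encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.classifier = nn.Linear(2 * d_model, n_stages)

    def forward(self, eeg, eog):                   # (batch, epochs, samples) each
        b, n, t = eeg.shape
        eeg_feat = self.eeg_encoder(eeg.reshape(b * n, 1, t)).reshape(b, n, -1)
        eog_feat = self.eog_encoder(eog.reshape(b * n, 1, t)).reshape(b, n, -1)
        fused = self.co_attention(eeg_feat, eog_feat)
        fused = self.sequence_encoder(fused)       # context across neighbouring epochs
        return self.classifier(fused)              # (batch, epochs, n_stages)


if __name__ == "__main__":
    model = CareSleepNetSketch()
    eeg = torch.randn(2, 10, 3000)                 # 10 epochs of 30 s at 100 Hz (assumed)
    eog = torch.randn(2, 10, 3000)
    print(model(eeg, eog).shape)                   # torch.Size([2, 10, 5])

The sequence encoder operates over per-epoch feature vectors rather than raw samples, which is what lets the model exploit stage-transition context between neighbouring epochs.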