Computer science
Distillation
Artificial intelligence
Channel (broadcasting)
Sleep (system call)
Machine learning
Pattern recognition (psychology)
Spatial analysis
Data mining
Mathematics
Statistics
Computer network
Operating system
Organic chemistry
Chemistry
Authors
Ziyu Jia, Haichao Wang, Yucheng Liu, Tianzi Jiang
Identifier
DOI: 10.1145/3637528.3671981
Abstract
Sleep stage classification has important clinical significance for the diagnosis of sleep-related diseases. To pursue more accurate sleep stage classification, multi-channel sleep signals are widely used because of the rich spatial-temporal information they contain. However, this leads to a large increase in model size and computational cost, which constrains the deployment of multi-channel sleep models on hardware devices. Knowledge distillation is an effective way to compress models, yet existing knowledge distillation methods cannot fully extract and transfer the spatial-temporal knowledge in multi-channel sleep signals. To solve this problem, we propose a general knowledge distillation framework for multi-channel sleep stage classification called spatial-temporal mutual distillation. Based on the spatial relationships of the human body and the temporal transition rules of sleep signals, spatial and temporal modules are designed to extract the spatial-temporal knowledge, helping the lightweight student model learn the rich spatial-temporal knowledge from the large-scale teacher model. The mutual distillation framework transfers the spatial-temporal knowledge in both directions: the teacher and student models learn from each other, further improving the student model. Results on the ISRUC-III and MASS-SS3 datasets show that our proposed framework compresses the sleep models effectively with minimal performance loss and achieves state-of-the-art performance compared to the baseline methods.
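The abstract does not describe the implementation. As a rough illustration of the mutual-distillation idea only, the sketch below shows a bidirectional logit-distillation loss in PyTorch, where the teacher and student each receive a cross-entropy term plus a KL term toward the other model's softened predictions. The paper's spatial and temporal modules for relational knowledge are not reproduced here, and the function name, temperature, and weighting factor are hypothetical choices, not taken from the paper.

```python
# Minimal sketch of bidirectional (mutual) knowledge distillation on logits.
# Assumption: teacher/student logits of shape (batch, num_stages) and integer
# stage labels; the paper's spatial-temporal knowledge extraction is omitted.
import torch
import torch.nn.functional as F


def mutual_distillation_loss(student_logits: torch.Tensor,
                             teacher_logits: torch.Tensor,
                             labels: torch.Tensor,
                             temperature: float = 2.0,
                             alpha: float = 0.5):
    """Return (student_loss, teacher_loss) for one batch.

    Each model is trained with its own cross-entropy plus a KL term that
    pulls its softened predictions toward the other model's, so knowledge
    flows in both directions (mutual distillation).
    """
    t = temperature
    # Temperature-softened distributions used by the KL terms.
    s_log_soft = F.log_softmax(student_logits / t, dim=-1)
    t_log_soft = F.log_softmax(teacher_logits / t, dim=-1)
    s_soft = s_log_soft.exp()
    t_soft = t_log_soft.exp()

    # Student learns from the (detached) teacher ...
    kl_s = F.kl_div(s_log_soft, t_soft.detach(), reduction="batchmean") * t * t
    # ... and the teacher also learns from the (detached) student.
    kl_t = F.kl_div(t_log_soft, s_soft.detach(), reduction="batchmean") * t * t

    # Supervised terms on the ground-truth sleep stages.
    ce_s = F.cross_entropy(student_logits, labels)
    ce_t = F.cross_entropy(teacher_logits, labels)

    student_loss = (1 - alpha) * ce_s + alpha * kl_s
    teacher_loss = (1 - alpha) * ce_t + alpha * kl_t
    return student_loss, teacher_loss
```

In the paper's framework the distilled signal is spatial-temporal relational knowledge rather than plain class logits, so this sketch only conveys the mutual (two-way) training structure, not the actual loss design.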