Interpretability
Computer science
Deep learning
Modality
Transformer
Artificial intelligence
Machine learning
Convolutional neural network
Artificial neural network
Engineering
Voltage
Chemistry
Polymer chemistry
Electrical engineering
Authors
Jathurshan Pradeepkumar, Mithunjha Anandakumar, Vinith Kugathasan, Dhinesh Suntharalingham, Simon L. Kappel, Anjula De Silva, Chamira U. S. Edussooriya
Identifier
DOI: 10.1109/tnsre.2024.3438610
Abstract
Accurate sleep stage classification is essential for sleep health assessment. In recent years, several machine-learning-based sleep staging algorithms have been developed, and deep-learning-based algorithms in particular have achieved performance on par with human annotation. Despite this improved performance, a limitation of most deep-learning-based algorithms is their black-box behavior, which has limited their use in clinical settings. Here, we propose the cross-modal transformer, a transformer-based method for sleep stage classification. The proposed model consists of a cross-modal transformer encoder together with a multi-scale one-dimensional convolutional neural network for automatic representation learning. Our method performs on par with state-of-the-art methods while avoiding their black-box behavior by exploiting the interpretability of the attention modules. Furthermore, it requires considerably fewer parameters and less training time than state-of-the-art methods. Our code is available at https://github.com/Jathurshan0330/Cross-Modal-Transformer. A demo of our work can be found at https://bit.ly/Cross_modal_transformer_demo.
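To make the architectural description in the abstract concrete, the following is a minimal PyTorch sketch of the two named components: a multi-scale one-dimensional CNN feature extractor and a cross-modal transformer encoder whose attention weights can be inspected for interpretability. Everything here (module names, channel sizes, the two-modality EEG/EOG setup, and the pooling-based classification head) is an illustrative assumption, not the authors' implementation; their actual code is at the GitHub link above.

# Hypothetical sketch only; sizes and layer choices are assumptions, not the paper's architecture.
import torch
import torch.nn as nn


class MultiScaleCNN(nn.Module):
    """Parallel 1-D convolutions with different kernel sizes, concatenated along channels."""

    def __init__(self, in_channels=1, out_channels=64, kernel_sizes=(3, 7, 15)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(in_channels, out_channels, k, padding=k // 2),
                nn.BatchNorm1d(out_channels),
                nn.GELU(),
                nn.MaxPool1d(4),
            )
            for k in kernel_sizes
        ])

    def forward(self, x):                      # x: (batch, channels, time)
        return torch.cat([b(x) for b in self.branches], dim=1)


class CrossModalBlock(nn.Module):
    """One encoder block where modality A attends to modality B and vice versa."""

    def __init__(self, dim=192, heads=4):
        super().__init__()
        self.attn_ab = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_ba = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_a = nn.LayerNorm(dim)
        self.norm_b = nn.LayerNorm(dim)

    def forward(self, a, b):                   # a, b: (batch, tokens, dim)
        # The returned attention weights are what makes the model inspectable.
        a2, w_ab = self.attn_ab(self.norm_a(a), b, b)
        b2, w_ba = self.attn_ba(self.norm_b(b), a, a)
        return a + a2, b + b2, (w_ab, w_ba)


class CrossModalSleepStager(nn.Module):
    """Tiny end-to-end sketch: per-modality CNN -> cross-modal attention -> 5-class logits."""

    def __init__(self, n_classes=5, dim=192):
        super().__init__()
        self.cnn_eeg = MultiScaleCNN()
        self.cnn_eog = MultiScaleCNN()
        self.block = CrossModalBlock(dim)
        self.head = nn.Linear(2 * dim, n_classes)

    def forward(self, eeg, eog):               # each: (batch, 1, time)
        a = self.cnn_eeg(eeg).transpose(1, 2)  # -> (batch, tokens, dim)
        b = self.cnn_eog(eog).transpose(1, 2)
        a, b, attn = self.block(a, b)
        feats = torch.cat([a.mean(1), b.mean(1)], dim=1)
        return self.head(feats), attn          # logits + attention maps


if __name__ == "__main__":
    eeg = torch.randn(2, 1, 3000)              # e.g. a 30 s epoch at 100 Hz
    eog = torch.randn(2, 1, 3000)
    logits, attn = CrossModalSleepStager()(eeg, eog)
    print(logits.shape)                        # torch.Size([2, 5])

Returning the attention weights alongside the logits mirrors the idea stated in the abstract: the attention modules can be visualized to see which signal segments and which modality drive each sleep-stage prediction, rather than treating the network as a black box.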