Relevance (law)
Artificial intelligence
Artificial neural network
Left bundle branch block
Clinical significance
Deep learning
Machine learning
Computer science
Pattern recognition (psychology)
Data set
Internal medicine
Medicine
Heart failure
Political science
Law
Authors
Theresa Bender,Jacqueline Beinecke,Dagmar Krefting,Carolin Müller,Henning Dathe,Tim Seidler,Nicolai Spicher,Anne-Christin Hauschild
Source
Journal: Cornell University - arXiv
Date: 2022-01-01
Citations: 2
Identifier
DOI: 10.48550/arxiv.2211.01738
Abstract
Despite their remarkable performance, deep neural networks remain unadopted in clinical practice, which is considered to be partially due to their lack of explainability. In this work, we apply attribution methods to a pre-trained deep neural network (DNN) for 12-lead electrocardiography classification to open this "black box" and understand the relationship between model prediction and learned features. We classify data from a public data set and the attribution methods assign a "relevance score" to each sample of the classified signals. This allows analyzing what the network learned during training, for which we propose quantitative methods: average relevance scores over a) classes, b) leads, and c) average beats. The analyses of relevance scores for atrial fibrillation (AF) and left bundle branch block (LBBB) compared to healthy controls show that their mean values a) increase with higher classification probability and correspond to false classifications when around zero, and b) correspond to clinical recommendations regarding which lead to consider. Furthermore, c) visible P-waves and concordant T-waves result in clearly negative relevance scores in AF and LBBB classification, respectively. In summary, our analysis suggests that the DNN learned features similar to cardiology textbook knowledge.
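The abstract proposes summarizing per-sample relevance scores over b) leads and c) average beats. The following is a minimal sketch, not the authors' code: it assumes relevance scores are already available as a (leads × samples) array and that R-peak positions are known; the function names, window lengths, and toy data are illustrative only. The per-class summary in a) would be obtained by grouping such per-record means by predicted class.

```python
# Sketch of aggregating attribution "relevance scores" for a 12-lead ECG record.
# Assumptions (not from the paper's code): relevance has shape (n_leads, n_samples),
# R-peak indices are given, and window sizes pre/post are arbitrary illustrative values.
import numpy as np

def mean_relevance_per_lead(relevance: np.ndarray) -> np.ndarray:
    """b) Average relevance over time for each lead; returns shape (n_leads,)."""
    return relevance.mean(axis=1)

def average_beat_relevance(relevance: np.ndarray, r_peaks: np.ndarray,
                           pre: int = 100, post: int = 200) -> np.ndarray:
    """c) Cut fixed windows around each R-peak and average them,
    giving one 'average beat' relevance curve per lead, shape (n_leads, pre + post)."""
    n_leads, n_samples = relevance.shape
    beats = [relevance[:, r - pre:r + post]
             for r in r_peaks
             if r - pre >= 0 and r + post <= n_samples]
    return np.stack(beats).mean(axis=0)

# Toy usage: random "relevance" for a 10 s, 500 Hz, 12-lead record
rng = np.random.default_rng(0)
rel = rng.normal(size=(12, 5000))
r_peaks = np.arange(400, 4800, 400)                 # assumed beat positions (~75 bpm)
print(mean_relevance_per_lead(rel).shape)           # (12,)
print(average_beat_relevance(rel, r_peaks).shape)   # (12, 300)
```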