Interpretability
Artificial intelligence
Computer science
Convolutional neural network
Machine learning
Deep learning
Feature (linguistics)
Artificial neural network
Permutation (music)
Pattern recognition (psychology)
Data mining
Philosophy
Linguistics
Physics
Acoustics
Authors
Radhouane Hammachi, Noureddine Messaoudi, Sebti Belkacem
Identifier
DOI:10.1109/icateee57445.2022.10093744
Abstract
Recently, much emphasis has been placed on Artificial Intelligence (AI) and Machine Learning (ML) algorithms in medicine and the healthcare industry. Cardiovascular disease (CVD) is one of the most common causes of death globally, and the Electrocardiogram (ECG) is the most widely used diagnostic tool for investigating it. However, analyzing ECG signals is a difficult process. Therefore, this work proposes the automated classification of ECG data into five arrhythmia classes based on the MIT-BIH dataset, using Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) Deep Learning (DL) models. The black-box nature of these complex models imposes the need to explain their outcomes; hence, two interpretability techniques, Permutation Feature Importance (PFI) and Gradient-Weighted Class Activation Maps (Grad-CAM), were investigated. Using K-Fold cross-validation, the models achieved accuracies of 97.1% and 98.5% for CNN and LSTM, respectively.
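The Permutation Feature Importance idea named in the abstract can be illustrated with a minimal sketch: shuffle one input feature at a time and measure how much the model's accuracy drops. The toy threshold classifier and synthetic data below are purely illustrative stand-ins (the paper uses CNN/LSTM models on MIT-BIH ECG data); they are not from the source.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: feature 0 drives the label, feature 1 is pure noise.
X = rng.normal(size=(500, 2))
y = (X[:, 0] > 0).astype(int)

def predict(X):
    # Stand-in "trained model": thresholds the first feature.
    return (X[:, 0] > 0).astype(int)

def accuracy(X, y):
    return float(np.mean(predict(X) == y))

baseline = accuracy(X, y)
importances = []
for j in range(X.shape[1]):
    X_perm = X.copy()
    # Permuting column j breaks its link to the labels while
    # preserving its marginal distribution.
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    importances.append(baseline - accuracy(X_perm, y))

print(importances)  # feature 0 gets a large importance, feature 1 near zero
```

A model-agnostic technique like this needs no access to gradients, which is why it pairs naturally with gradient-based methods such as Grad-CAM for a complementary view.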