Artificial intelligence
Convolutional neural network
Computer science
Machine learning
Deep learning
Discriminative model
Black box
Transfer learning
Segmentation
Computational intelligence
Backpropagation
Transparency (behavior)
Cancer
Pattern recognition (psychology)
Pancreatic cancer
Barrett's esophagus
Artificial neural network
Medicine
Skin cancer
Computer security
Authors
Luis A. de Souza, Robert Mendel, Sophia Strasser, Alanna Ebigbo, Andreas Probst, Helmut Messmann, João Paulo Papa, Christoph Palm
Identifier
DOI:10.1016/j.compbiomed.2021.104578
Abstract
Even though artificial intelligence and machine learning have demonstrated remarkable performance in medical image computing, such evaluations must also account for their accountability and transparency. The reliability of machine learning predictions must be explained and interpreted, especially when diagnosis support is addressed. For this task, the black-box nature of deep learning techniques must be lightened up to transfer their promising results into clinical practice. Hence, we aim to investigate the use of explainable artificial intelligence techniques to quantitatively highlight discriminative regions during the classification of early-cancerous tissues in patients diagnosed with Barrett's esophagus. Four Convolutional Neural Network models (AlexNet, SqueezeNet, ResNet50, and VGG16) were analyzed using five different interpretation techniques (saliency, guided backpropagation, integrated gradients, input × gradients, and DeepLIFT) to compare their agreement with experts' previous annotations of cancerous tissue. We show that saliency attributes match best with the experts' manual delineations. Moreover, there is a moderate to high correlation between a model's sensitivity and the human-and-computer agreement. The results also showed that the higher a model's sensitivity, the stronger the agreement between human and computational segmentation. We observed a relevant relation between computational learning and experts' insights, demonstrating how human knowledge may influence correct computational learning.
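As a rough illustration of the comparison the abstract describes, the sketch below (a hypothetical example, not the authors' code) computes saliency attributions for a pretrained CNN with the Captum library, thresholds them into a binary map, and scores the overlap with a hypothetical expert delineation mask using the Dice coefficient; the image tensor, mask, and threshold are placeholder assumptions.

```python
# Hypothetical sketch of the attribution-vs-expert-annotation comparison
# described in the abstract; not the authors' implementation.
import torch
import torchvision.models as models
from captum.attr import Saliency

# Any of the CNNs named in the abstract could be used; AlexNet as an example.
model = models.alexnet(weights=None)  # fine-tuned weights would be loaded here in practice
model.eval()

# Dummy endoscopic image tensor and a placeholder binary expert mask (1 = cancerous tissue).
image = torch.rand(1, 3, 224, 224, requires_grad=True)
expert_mask = (torch.rand(224, 224) > 0.5).float()

# Saliency: gradient of the target class score with respect to the input pixels.
saliency = Saliency(model)
attribution = saliency.attribute(image, target=1)          # shape (1, 3, 224, 224)
attr_map = attribution.squeeze(0).abs().max(dim=0).values  # collapse channels -> (224, 224)

# Threshold the attribution map and measure agreement with the expert delineation.
attr_mask = (attr_map > attr_map.mean()).float()
intersection = (attr_mask * expert_mask).sum()
dice = (2 * intersection / (attr_mask.sum() + expert_mask.sum() + 1e-8)).item()
print(f"Dice agreement between saliency map and expert mask: {dice:.3f}")
```

The same loop could be repeated for the other attribution methods (guided backpropagation, integrated gradients, input × gradients, DeepLIFT), which Captum also provides, to compare their agreement scores as in the study.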