Keywords
Interpretability, Overfitting, Computer science, Artificial intelligence, Machine learning, Pattern recognition (psychology), Convolutional neural network, Metric, Random forest, Medical imaging, Artificial neural network, Operations management, Economics
Authors
Yue Zhao,Dylan Agyemang,Yang Liu,J. Matthew Mahoney,Sheng Li
Source
Journal: Science Advances
[American Association for the Advancement of Science]
Date: 2024-12-20
Volume/Issue: 10 (51)
Identifier
DOI:10.1126/sciadv.abg0264
Abstract
Deep learning algorithms can extract meaningful diagnostic features from biomedical images, promising improved patient care in digital pathology. Vision Transformer (ViT) models capture long-range spatial relationships and offer robust prediction power and better interpretability for image classification tasks than convolutional neural network models. However, limited annotated biomedical imaging datasets can cause ViT models to overfit, leading to false predictions due to random noise. To address this, we introduce Training Attention and Validation Attention Consistency (TAVAC), a metric for evaluating ViT model overfitting and quantifying interpretation reproducibility. By comparing high-attention regions between training and testing, we tested TAVAC on four public image classification datasets and two independent breast cancer histological image datasets. Overfitted models showed significantly lower TAVAC scores. TAVAC also distinguishes off-target from on-target attentions and measures interpretation generalization at a fine-grained cellular level. Beyond diagnostics, TAVAC enhances interpretative reproducibility in basic research, revealing critical spatial patterns and cellular structures of biomedical and other general nonbiomedical images.
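The core idea of TAVAC, as described in the abstract, is to compare where a ViT model places its high attention during training versus on held-out data: a well-generalizing model attends to the same regions in both phases, while an overfitted model's attention diverges. The sketch below illustrates this idea with a simple Jaccard overlap of the top-attention positions of two attention maps; the function name, the top-fraction parameter, and the overlap measure are illustrative assumptions, not the paper's exact formula.

```python
import numpy as np

def tavac_like_score(train_attn, val_attn, top_frac=0.1):
    """Illustrative consistency score between two attention maps.

    Compares the Jaccard overlap of the top `top_frac` high-attention
    positions in a training-phase and a validation-phase attention map
    for the same image. A sketch of the idea behind TAVAC, not the
    published metric itself.
    """
    train_attn = np.asarray(train_attn, dtype=float).ravel()
    val_attn = np.asarray(val_attn, dtype=float).ravel()
    k = max(1, int(top_frac * train_attn.size))
    # Indices of the k highest-attention positions in each map
    top_train = set(np.argsort(train_attn)[-k:])
    top_val = set(np.argsort(val_attn)[-k:])
    # Jaccard overlap: 1.0 means identical high-attention regions,
    # values near 0 mean the model attends to unrelated regions.
    return len(top_train & top_val) / len(top_train | top_val)

# Example: a 14x14 ViT patch-attention grid compared with itself
attn = np.random.rand(14, 14)
print(tavac_like_score(attn, attn))  # identical maps -> 1.0
```

Under this reading, an overfitted model that latches onto random noise would produce low scores, because the noisy high-attention patches seen during training do not reappear on test images.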