Interpretability
Computer science
Artificial intelligence
Field (mathematics)
Perspective (graphical)
Image (mathematics)
Contextual image classification
Feature (linguistics)
Representation (politics)
Feature extraction
Reliability (semiconductor)
Machine learning
Set (abstract data type)
Pattern recognition (psychology)
Medical imaging
Data mining
Deep learning
Artificial neural network
Feature learning
Medical diagnosis
Image fusion
Scale (ratio)
Statistical classification
Cluster analysis
Trustworthiness
Image processing
Computer vision
Feature detection (computer vision)
Feature vector
Backbone network
Authors
Junlai Qiu,Junyue Cao,Yawen Huang,Ziwei Zhu,Fubo Wang,Cheng Lu,Yuexiang Li,Yefeng Zheng
Identifier
DOI:10.1109/tmi.2025.3612188
Abstract
In the field of medical image analysis, medical image classification is one of the most fundamental and critical tasks. Current research often relies on off-the-shelf backbone networks derived from computer vision, hoping to achieve satisfactory classification performance for medical images. However, given the characteristics of medical images, such as the scattered distribution and varying sizes of lesions, features extracted at a single scale from existing backbones often fail to support accurate medical image classification. To this end, we propose a novel multi-scale learning paradigm, namely MUlti-SCale Learning with trusted Evidences (MUSCLE), which extracts and integrates features from different scales based on the theory of evidence, to generate a more comprehensive feature representation for the medical image classification task. In particular, the proposed MUSCLE first estimates the uncertainties of features extracted from different scales/stages of the classification backbone as evidences, and accordingly forms opinions regarding the feature trustworthiness via a set of evidential deep neural networks. Then, these opinions on different scales of features are ensembled to yield an aggregated opinion, which can be used to adaptively tune the weights of multi-scale features for scattered and size-varying lesions, and consequently improve the network capacity for accurate medical image classification. Our MUSCLE paradigm has been evaluated on five publicly available medical image datasets. The experimental results show that the proposed MUSCLE not only improves the accuracy of the original backbone network, but also enhances the reliability and interpretability of model decisions with the trusted evidences (https://github.com/Q4CS/MUSCLE).
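The abstract's pipeline (per-scale evidence, opinions, then an aggregated opinion) follows the standard subjective-logic formulation used in evidential deep learning. As a minimal illustrative sketch, not the authors' implementation: for K classes, non-negative evidence e parameterizes a Dirichlet with alpha = e + 1, giving per-class belief masses b_k = e_k / S and an uncertainty mass u = K / S (S = sum of alpha); two scales' opinions can then be fused with the reduced Dempster's rule. The function names below are hypothetical.

```python
import numpy as np

def opinion_from_evidence(evidence):
    """Turn a non-negative evidence vector (length K) into a
    subjective-logic opinion (belief masses b, uncertainty u)."""
    K = len(evidence)
    alpha = evidence + 1.0      # Dirichlet concentration parameters
    S = alpha.sum()             # Dirichlet strength
    belief = evidence / S       # per-class belief masses, sum to 1 - u
    u = K / S                   # uncertainty mass; shrinks as evidence grows
    return belief, u

def combine_opinions(b1, u1, b2, u2):
    """Fuse two opinions with the reduced Dempster's rule:
    conflicting belief is discarded and the rest renormalized."""
    # Total conflict: belief that the two opinions assign to different classes
    conflict = np.sum(np.outer(b1, b2)) - np.sum(b1 * b2)
    scale = 1.0 / (1.0 - conflict)
    b = scale * (b1 * b2 + b1 * u2 + b2 * u1)
    u = scale * (u1 * u2)       # fused uncertainty never exceeds either input
    return b, u

# Two scales' (hypothetical) evidence for a 3-class problem
b1, u1 = opinion_from_evidence(np.array([4.0, 1.0, 0.0]))
b2, u2 = opinion_from_evidence(np.array([2.0, 2.0, 1.0]))
b, u = combine_opinions(b1, u1, b2, u2)
print(b, u)  # beliefs and uncertainty still sum to 1
```

Because fusion discounts conflicting mass and multiplies uncertainties, agreement between scales lowers the aggregated uncertainty, which is what lets the aggregated opinion act as an adaptive weight over the multi-scale features.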