Computer science
News aggregator
Artificial intelligence
Pattern recognition (psychology)
Feature (linguistics)
Graph
Feature learning
Machine learning
Theoretical computer science
Philosophy
Linguistics
Operating system
Authors
Cong Cong, Sidong Liu, Priyanka Rana, Maurice Pagnucco, Antonio Di Ieva, Shlomo Berkovsky, Yang Song
Identifier
DOI: 10.1016/j.eswa.2024.123783
Abstract
Medical image datasets are often imbalanced due to biases in data collection and limitations in acquiring data for rare conditions. Addressing class imbalance is crucial for developing reliable deep-learning algorithms capable of handling all classes effectively. Recent work on class imbalance has investigated the effectiveness of self-supervised learning (SSL) and demonstrated that the learned features offer increased resilience to class imbalance and achieve substantially better performance than other types of class-imbalance methods. However, existing SSL methods either lack end-to-end capability or require substantial memory resources, potentially resulting in sub-optimal features and classifiers and limiting their practical usage. Moreover, conventional pooling operations (e.g., max-pooling or average-pooling) tend to generate less discriminative features when datasets exhibit high inter-class similarity. To alleviate these issues, we present a novel end-to-end self-supervised learning framework tailored for imbalanced medical image datasets. Our framework comprises an adaptive contrastive loss that dynamically adjusts the model's learning focus between feature learning and classifier learning, and a feature aggregation mechanism based on Graph Neural Networks that further enhances feature discriminability. We evaluate the framework on four medical datasets, and the experimental results highlight its superior performance on imbalanced image classification tasks.
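The abstract names two components: a loss that shifts emphasis between feature learning and classifier learning, and a graph-based aggregation of batch features. The snippet below is a minimal, hypothetical PyTorch sketch of how such a combination could look. The class and function names (GraphFeatureAggregator, adaptive_loss), the cosine-similarity adjacency, and the epoch-based weighting schedule are assumptions made for illustration, not the authors' implementation.

```python
# Hypothetical sketch (not the paper's code): a contrastive term for feature
# learning mixed with a cross-entropy term for classifier learning, where the
# trade-off weight changes over training, plus a simple graph-style
# aggregation of mini-batch features.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphFeatureAggregator(nn.Module):
    """Refines each sample's feature with its batch neighbours via a
    similarity-weighted (soft adjacency) message-passing step."""

    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, dim); build a soft adjacency from cosine similarity.
        normed = F.normalize(feats, dim=1)
        adj = torch.softmax(normed @ normed.t(), dim=1)   # (batch, batch)
        aggregated = adj @ self.proj(feats)               # one message-passing step
        return F.relu(feats + aggregated)                 # residual connection


def adaptive_loss(feats, logits, labels, epoch, total_epochs, temperature=0.1):
    """Contrastive + classification loss with a schedule-based trade-off.
    Early epochs emphasise the contrastive (feature) term, later epochs the
    classifier term; the paper's actual adaptation rule may differ."""
    # Supervised contrastive term: pull together samples sharing a label.
    normed = F.normalize(feats, dim=1)
    sim = normed @ normed.t() / temperature
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
    pos_mask.fill_diagonal_(0)
    self_mask = torch.ones_like(pos_mask).fill_diagonal_(0)
    exp_sim = torch.exp(sim) * self_mask                  # exclude self-similarity
    log_prob = sim - torch.log(exp_sim.sum(1, keepdim=True) + 1e-12)
    contrastive = (-(pos_mask * log_prob).sum(1) / pos_mask.sum(1).clamp(min=1)).mean()

    # Classifier term.
    ce = F.cross_entropy(logits, labels)

    # Dynamic focus: alpha moves from 1 (feature learning) toward 0 (classifier).
    alpha = 1.0 - epoch / max(total_epochs, 1)
    return alpha * contrastive + (1.0 - alpha) * ce
```

In this sketch the aggregator would sit between the backbone and the classifier head, and adaptive_loss would be called each step with the aggregated features, the classifier logits, and the current epoch; the linear schedule for alpha stands in for whatever adaptive rule the paper actually uses.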