Weighting
Segmentation
Artificial intelligence
Computer science
Image segmentation
Class (philosophy)
Balance (ability)
Pattern recognition (psychology)
Computer vision
Natural language processing
Medicine
Physical medicine and rehabilitation
Radiology
Authors
Zichen Liang, Yusong Hu, Fei Yang, Xialei Liu
Identifier
DOI: 10.1109/tip.2025.3576477
Abstract
Continual Semantic Segmentation (CSS) aims to continually learn new semantic segmentation categories while avoiding catastrophic forgetting. In semantic segmentation tasks, an image can contain both familiar old categories and novel unseen categories, all of which are labeled as background in the incremental stage. It is therefore necessary to use the old model to generate pseudo-labels. However, the quality of these pseudo-labels strongly influences how much the model forgets the old categories: erroneous pseudo-labels introduce harmful gradients and exacerbate forgetting. In addition, class imbalance poses a significant challenge in CSS. Although traditional methods often reduce the emphasis placed on new classes to address this imbalance, we find that the imbalance extends beyond the distinction between old and new classes. In this paper, we address two previously overlooked problems in CSS: the impact of erroneous pseudo-labels on forgetting and the confusion induced by class imbalance. We propose an Uncertainty and Class Balance re-weighting approach (UCB) that, during training, assigns higher weights to pixels whose pseudo-labels have lower uncertainty and to categories with smaller proportions. Our approach strengthens the influence of essential pixels during continual learning, thereby reducing forgetting, and dynamically balances category weights based on the dataset. The method is simple yet effective and can be applied to any method that uses pseudo-labels. Extensive experiments on the Pascal-VOC and ADE20K datasets demonstrate that our approach improves performance across three state-of-the-art methods. The code will be available at https://github.com/JACK-Chen-2019/UCB.
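The re-weighting idea described in the abstract can be made concrete with a short sketch. The snippet below is a minimal illustration under assumptions of mine, not the authors' exact UCB formulation: it assumes an entropy-based certainty score for the old model's pseudo-labels, inverse-frequency class weights computed per batch, and a simple product of the two as the per-pixel weight in a pseudo-label cross-entropy loss. The function name `ucb_style_weighted_loss` and all parameter names are hypothetical.

```python
# Minimal sketch of uncertainty- and class-balance-based pixel re-weighting for
# continual semantic segmentation. NOT the paper's exact UCB method: the
# certainty score, class weights, and their combination are illustrative.
import math

import torch
import torch.nn.functional as F


def ucb_style_weighted_loss(new_logits, old_logits, labels,
                            bg_index=0, ignore_index=255, eps=1e-8):
    """Weighted cross-entropy where background pixels are re-labelled with
    old-model pseudo-labels, scaled by pseudo-label certainty, and all pixels
    are re-balanced by inverse class frequency (assumed scheme).

    new_logits: (B, C_new, H, W) logits of the current model.
    old_logits: (B, C_old, H, W) logits of the frozen old model.
    labels:     (B, H, W) ground truth of the current step (old classes = bg).
    """
    num_old_classes = old_logits.shape[1]
    num_classes = new_logits.shape[1]

    # Pseudo-labels and per-pixel certainty from the old model (entropy-based).
    old_probs = F.softmax(old_logits, dim=1)                      # (B, C_old, H, W)
    pseudo_labels = old_probs.argmax(dim=1)                       # (B, H, W)
    entropy = -(old_probs * (old_probs + eps).log()).sum(dim=1)   # (B, H, W)
    certainty = 1.0 - entropy / math.log(num_old_classes)         # high = confident

    # Merge: keep current ground truth, use pseudo-labels on background pixels.
    merged = labels.clone()
    bg_mask = labels == bg_index
    merged[bg_mask] = pseudo_labels[bg_mask]

    # Per-pixel uncertainty weight: only pseudo-labelled pixels are scaled down.
    pixel_w = torch.ones_like(certainty)
    pixel_w[bg_mask] = certainty[bg_mask].clamp(min=0.0)

    # Class-balance weight: inverse frequency of classes present in the batch.
    valid = merged != ignore_index
    counts = torch.bincount(merged[valid], minlength=num_classes).float()
    freq = counts / counts.sum().clamp(min=1.0)
    class_w = torch.zeros_like(counts)
    present = counts > 0
    class_w[present] = 1.0 / freq[present]
    class_w[present] = class_w[present] / class_w[present].mean()  # centre around 1

    merged_safe = merged.clone()
    merged_safe[~valid] = 0                                        # dummy id, masked below
    pixel_w = pixel_w * class_w[merged_safe]

    # Weighted cross-entropy over all supervised pixels.
    ce = F.cross_entropy(new_logits, merged, ignore_index=ignore_index,
                         reduction='none')
    return (pixel_w * ce)[valid].sum() / pixel_w[valid].sum().clamp(min=eps)
```

In this sketch, confidently pseudo-labelled background pixels contribute nearly full gradient, uncertain ones are suppressed, and rarer classes are boosted per batch; the authors' actual weighting scheme should be taken from the repository linked in the abstract.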