Computer science
Artificial intelligence
RGB color model
Focus (optics)
Pattern recognition (psychology)
Object detection
Matching (statistics)
Salience
Class (philosophy)
Object (grammar)
Coding (set theory)
Feature extraction
Deep learning
Computer vision
Mathematics
Physics
Optics
Set (abstract data type)
Programming language
Statistics
Authors
Huajun Zhou, Bo Qiao, Lingxiao Yang, Jianhuang Lai, Xiaohua Xie
Identifier
DOI:10.1109/cvpr52729.2023.00701
Abstract
Deep Learning-based Unsupervised Salient Object Detection (USOD) mainly relies on noisy saliency pseudo labels generated by traditional handcrafted methods or pre-trained networks. To cope with the noisy-label problem, one class of methods focuses only on easy samples with reliable labels, but thereby ignores valuable knowledge in hard samples. In this paper, we propose a novel USOD method to mine rich and accurate saliency knowledge from both easy and hard samples. First, we propose a Confidence-aware Saliency Distilling (CSD) strategy that scores samples conditioned on their confidence, which guides the model to distill saliency knowledge progressively from easy samples to hard samples. Second, we propose a Boundary-aware Texture Matching (BTM) strategy that refines the boundaries of noisy labels by matching the textures around the predicted boundaries. Extensive experiments on RGB, RGB-D, RGB-T, and video SOD benchmarks show that our method achieves state-of-the-art USOD performance. Code is available at www.github.com/moothes/A2S-v2.
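The core idea behind CSD, as the abstract states it, is to weight samples by confidence so that training attends to easy (high-confidence) samples first and gradually admits hard ones. The sketch below illustrates that easy-to-hard scheduling with a minimal numpy example; the confidence measure (distance of the prediction from 0.5), the annealed threshold, and all function names are illustrative assumptions, not the paper's actual CSD loss (see the linked repository for the real implementation).

```python
import numpy as np

def confidence_weights(pred, t):
    """Illustrative confidence-aware sample weighting (NOT the paper's CSD loss).

    pred: predicted saliency values in [0, 1].
    t:    training progress in [0, 1].
    Confidence is taken as |pred - 0.5| * 2, so near-binary predictions
    score close to 1. Early in training (small t) the threshold is high
    and only confident "easy" pixels receive weight; as t -> 1 the
    threshold anneals to 0 and hard pixels contribute as well.
    """
    confidence = np.abs(pred - 0.5) * 2.0   # 1 = very confident
    threshold = 1.0 - t                     # anneals from 1 down to 0
    return (confidence >= threshold).astype(float)

def weighted_l1(pred, pseudo, t):
    """L1 distance to the pseudo label, averaged over currently admitted pixels."""
    w = confidence_weights(pred, t)
    denom = w.sum()
    if denom == 0:
        return 0.0
    return float((w * np.abs(pred - pseudo)).sum() / denom)

# At t = 0.5 the uncertain middle pixel (0.55) is excluded; at t = 1.0
# every pixel, including the hard one, enters the loss.
pred = np.array([0.9, 0.55, 0.1])
pseudo = np.array([1.0, 1.0, 0.0])
print(weighted_l1(pred, pseudo, 0.5))  # only the two confident pixels
print(weighted_l1(pred, pseudo, 1.0))  # all pixels contribute
```

Hard-thresholding is the simplest possible scheduler; a real implementation would more plausibly use a smooth weighting of the confidence score, but the progression from easy to hard samples is the same.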