Computer science
Discriminative model
Artificial intelligence
Leverage (statistics)
Domain adaptation
Pattern recognition (psychology)
Distillation
Machine learning
Exploit
Domain (mathematical analysis)
Feature (linguistics)
Data mining
Mathematics
Classifier (UML)
Mathematical analysis
Chemistry
Organic chemistry
Linguistics
Philosophy
Computer security
Authors
Kun Wei,Xu Yang,Zhe Xu,Cheng Deng
Identifier
DOI: 10.1109/tip.2024.3357258
Abstract
Class-Incremental Unsupervised Domain Adaptation (CI-UDA) requires a model to learn continually over several steps, each containing unlabeled target-domain samples, while the labeled source dataset remains available throughout. The key to tackling the CI-UDA problem is to transfer domain-invariant knowledge from the source domain to the target domain while preserving the knowledge of previous steps during continual adaptation. However, existing methods introduce heavily biased source knowledge at the current step, causing negative transfer and unsatisfactory performance. To tackle these problems, we propose a novel CI-UDA method named Pseudo-Label Distillation Continual Adaptation (PLDCA). We design a Pseudo-Label Distillation module that leverages the discriminative information of the target domain to filter biased source knowledge at both the class and instance levels. In addition, Contrastive Alignment is proposed to reduce domain discrepancy by aligning the class-level feature representations of confident target samples with the source domain, and to exploit robust feature representations of unconfident target samples at the instance level. Extensive experiments demonstrate the effectiveness and superiority of PLDCA. Code is available at code.
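The abstract describes two mechanisms: pseudo-label filtering at the class and instance levels, and class-level feature alignment across domains. The sketch below is a minimal illustration of these two ideas, not the paper's actual implementation: the function names, thresholds (`class_thresh`, `inst_thresh`), and the use of prototype mean-squared-error as a simplified stand-in for the paper's contrastive loss are all assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def filter_pseudo_labels(logits, class_thresh=0.5, inst_thresh=0.9):
    """Two-stage pseudo-label filtering (illustrative sketch).

    Class level: keep only classes whose mean predicted confidence on the
    target domain exceeds `class_thresh` (classes the model is reliable on).
    Instance level: within those classes, keep only samples whose max
    softmax probability exceeds `inst_thresh`.
    Returns a boolean mask of confident samples and their pseudo-labels.
    """
    probs = F.softmax(logits, dim=1)        # (N, C) target predictions
    conf, pseudo = probs.max(dim=1)         # per-sample confidence / label

    # Class-level filtering: mean confidence of samples assigned to each class.
    num_classes = probs.size(1)
    class_ok = torch.zeros(num_classes, dtype=torch.bool)
    for c in range(num_classes):
        assigned = pseudo == c
        if assigned.any():
            class_ok[c] = conf[assigned].mean() > class_thresh

    # Instance-level filtering: confident samples from reliable classes only.
    mask = class_ok[pseudo] & (conf > inst_thresh)
    return mask, pseudo

def class_alignment_loss(src_feat, src_label, tgt_feat, tgt_pseudo, num_classes):
    """Align class-wise mean features (prototypes) across domains.

    A simplified proxy for contrastive alignment: pull the target prototype
    of each class (computed from confident pseudo-labeled samples) toward
    the corresponding source prototype.
    """
    loss, count = 0.0, 0
    for c in range(num_classes):
        s, t = src_label == c, tgt_pseudo == c
        if s.any() and t.any():
            loss = loss + F.mse_loss(tgt_feat[t].mean(0), src_feat[s].mean(0))
            count += 1
    return loss / max(count, 1)
```

In use, one would first call `filter_pseudo_labels` on the target logits at the current step, then compute `class_alignment_loss` only over the samples selected by the returned mask, so that biased or low-confidence target predictions do not drive the alignment.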