Identifier
DOI: 10.1016/j.neunet.2024.106142
Abstract
Conventional unsupervised domain adaptation (UDA) methods presuppose access to labeled source-domain samples while adapting the source model to the target domain. This premise does not hold in source-free UDA (SFUDA), where data privacy considerations preclude access to the source data. Some existing methods address this challenging SFUDA problem with self-supervised learning, but inaccurate pseudo-labels are unavoidable in these methods and degrade the performance of the target model. We therefore propose a promising SFUDA method, namely Generation, Division and Training (GDT), which aims to improve the reliability of pseudo-labels for self-supervised learning and, through contrastive learning, to encourage similar features to have closer predictions than dissimilar ones. Specifically, GDT first refines pseudo-labels for target samples with deep clustering and then splits the samples into reliable and unreliable ones. Reliable samples are trained with self-supervised learning and information maximization. For unreliable samples, we conduct contrastive learning from the perspective of similarity and disparity, attracting similar samples and repulsing dissimilar ones; this pulls similar features closer and pushes dissimilar features apart, leading to efficient feature clustering. Thorough experiments on three benchmark datasets substantiate the excellence of the proposed approach.
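The "Generation" and "Division" steps above can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: the nearest-centroid pseudo-labeling, the cosine similarity measure, and the margin threshold `tau` are all assumptions standing in for the deep-clustering refinement and the reliable/unreliable split described in the abstract.

```python
# Hypothetical sketch of pseudo-label generation and the reliable/unreliable
# division. Function names and the margin criterion are illustrative
# assumptions, not the paper's published algorithm.
import math

def cosine(u, v):
    # Cosine similarity between two feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv + 1e-12)

def refine_and_split(features, centroids, tau=0.2):
    """Assign each target feature to its nearest class centroid (its pseudo-
    label) and mark it reliable when the margin between the best and
    second-best centroid similarity exceeds tau."""
    reliable, unreliable = [], []
    for i, f in enumerate(features):
        sims = sorted(((cosine(f, c), k) for k, c in enumerate(centroids)),
                      reverse=True)
        (best, label), (second, _) = sims[0], sims[1]
        bucket = reliable if best - second > tau else unreliable
        bucket.append((i, label))  # (sample index, pseudo-label)
    return reliable, unreliable
```

In a full pipeline, the reliable subset would then be trained with self-supervised and information-maximization losses, while the unreliable subset would feed a contrastive loss that attracts similar pairs and repulses dissimilar ones.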