Computer science
Domain adaptation
Adaptation (eye)
Artificial intelligence
Domain (mathematical analysis)
Natural language processing
Machine learning
Mathematical analysis
Physics
Mathematics
Classifier (UML)
Optics
Authors
Yuxiang Yang, Yun Hai Hou, Lu Wen, Pinxian Zeng, Yan Wang
Identifier
DOI: 10.1109/lsp.2024.3389508
Abstract
Universal multi-source domain adaptation (UniMDA) aims to transfer knowledge from multiple labeled source domains to an unlabeled target domain without constraints on the label space. Due to its inherent domain shift (different data distributions) and class shift (unknown target classes), UniMDA is an extremely challenging task. Existing solutions, however, mainly focus on excavating image features to detect unknown samples, ignoring the abundant information contained in textual semantics. In this paper, we propose a Semantic-aware Adaptive Prompt learning method based on Contrastive Language-Image Pretraining (SAP-CLIP) for UniMDA classification tasks. Concretely, we use CLIP with learnable prompts to leverage textual information about both class semantics and domain representations, helping the model detect unknown samples and tackle domain shift. In addition, we propose a novel margin loss with a dynamic scoring function that enlarges the margin between the known and unknown sample sets, enabling more precise classification. Experimental results on three benchmarks confirm the state-of-the-art performance of our method.
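The abstract names two mechanisms: CLIP-style learnable prompts that inject class and domain semantics into the text side, and a margin loss with a dynamic scoring function that separates known from unknown samples. The PyTorch sketch below is a minimal, hypothetical illustration of both ideas, not the authors' implementation: the prompt module, the batch-median threshold, the margin value, and all tensor shapes are assumptions, and random features stand in for CLIP's actual encoders.

```python
# Hypothetical sketch of prompt learning plus a known/unknown margin loss.
# Nothing here is the authors' code: module names, the median-based dynamic
# threshold, and all dimensions are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptedTextEncoder(nn.Module):
    """CoOp-style stand-in for CLIP's text encoder: a shared learnable
    context vector is combined with fixed class-name embeddings."""
    def __init__(self, num_classes: int, ctx_len: int = 4, dim: int = 512):
        super().__init__()
        self.ctx = nn.Parameter(torch.randn(ctx_len, dim) * 0.02)  # learnable prompt
        self.register_buffer("cls_emb", torch.randn(num_classes, dim) * 0.02)  # frozen class tokens
        self.proj = nn.Linear(dim, dim)

    def forward(self) -> torch.Tensor:
        ctx = self.ctx.mean(dim=0, keepdim=True)      # pool prompt context: (1, dim)
        feats = self.proj(ctx + self.cls_emb)         # one text feature per class: (C, dim)
        return F.normalize(feats, dim=-1)

def known_unknown_margin_loss(logits: torch.Tensor, margin: float = 0.3) -> torch.Tensor:
    """Margin loss with a dynamic score threshold (assumed form): the batch
    median of max-softmax confidence splits pseudo-known from pseudo-unknown
    samples, whose confidence scores are then pushed apart."""
    conf = logits.softmax(dim=-1).max(dim=-1).values  # (B,) confidence score per sample
    tau = conf.median().detach()                      # dynamic threshold, recomputed per batch
    known, unknown = conf[conf >= tau], conf[conf < tau]
    loss = F.relu(tau + margin - known).mean()        # push known confidence above tau + margin
    if unknown.numel() > 0:                           # guard: the "<" split can be empty
        loss = loss + F.relu(unknown - tau + margin).mean()  # push unknown below tau - margin
    return loss

# Toy usage: random unit vectors stand in for CLIP image features.
text_enc = PromptedTextEncoder(num_classes=10)
img_feat = F.normalize(torch.randn(32, 512), dim=-1)  # (B, dim) "image features"
logits = 100.0 * img_feat @ text_enc().t()            # CLIP-style scaled cosine similarity
known_unknown_margin_loss(logits).backward()          # gradients flow to the prompt
```

Under these assumptions the threshold adapts to each batch, which is one plausible reading of a "dynamic scoring function"; the paper's actual definition may differ.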