Keywords
Computer science, machine learning, artificial intelligence, domain adaptation, adversarial learning, classifier
Authors
Mengzhu Wang, Shan An, Xiao Luo, Peng Xiong, Wei Yu, Junyang Chen, Zhigang Luo
Identifier
DOI:10.1109/icassp43922.2022.9747532
Abstract
With the rapid development of vision-based deep learning (DL), generating large-scale synthetic data to supplement real data has become an effective way to train DL models for domain adaptation. However, previous vanilla domain adaptation methods generally assume a shared label space, and this assumption no longer holds in the more realistic scenario of adapting from a larger, more diverse source domain to a smaller target domain with fewer classes. To handle this problem, we propose an attention-based adversarial partial domain adaptation (AADA) method. Specifically, we leverage adversarial domain adaptation to augment the target domain using the source domain, which readily turns the task into vanilla domain adaptation. Meanwhile, to focus accurately on transferable features, we apply an attention-based method to train the adversarial networks and obtain better transferable semantic features. Experiments on four benchmarks demonstrate that the proposed method outperforms existing methods by a large margin, especially on tough domain adaptation tasks, e.g., VisDA-2017.
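The abstract does not spell out the attention mechanism, but a common way to weight transferable samples in adversarial domain adaptation is by the entropy of the domain discriminator's output: samples whose domain the discriminator cannot tell apart are likely more transferable and get larger weights. The sketch below is an illustrative NumPy reconstruction of that idea, not the authors' exact AADA formulation; the function names and the `1 + entropy` weighting scheme are assumptions for illustration.

```python
import numpy as np

def entropy_attention_weights(domain_probs):
    """Illustrative attention weights from discriminator entropy.

    domain_probs: discriminator's probability that each sample is
    from the source domain. High entropy (prob near 0.5) means the
    sample is domain-confusing, hence likely transferable, so it
    receives a larger weight. (Assumed scheme, not the paper's exact one.)
    """
    p = np.clip(domain_probs, 1e-8, 1 - 1e-8)
    entropy = -(p * np.log(p) + (1 - p) * np.log(1 - p))
    weights = 1.0 + entropy            # in [1, 1 + ln 2]
    return weights / weights.mean()    # normalize to mean 1

def weighted_adversarial_loss(domain_probs, domain_labels, weights):
    """Per-sample-weighted binary cross-entropy of the domain
    discriminator (the adversarial objective's discriminator side)."""
    p = np.clip(domain_probs, 1e-8, 1 - 1e-8)
    bce = -(domain_labels * np.log(p) + (1 - domain_labels) * np.log(1 - p))
    return float(np.mean(weights * bce))
```

For example, a sample the discriminator scores at 0.5 (fully domain-confusing) gets a larger weight than one scored at 0.9, so the feature extractor's adversarial gradient concentrates on the transferable samples.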