Computer science
Divergence (linguistics)
Discriminator
Artificial intelligence
Domain (mathematical analysis)
Bayesian probability
Measure (data warehouse)
Marginal distribution
Pattern recognition (psychology)
Feature (linguistics)
Machine learning
Prior probability
Data mining
Mathematics
Statistics
Mathematical analysis
Telecommunications
Philosophy
Linguistics
Detector
Random variable
Source
Journal: Neurocomputing
Publisher: Elsevier BV
Date: 2022-11-24
Volume/Pages: 520: 183-193
Cited by: 13
Identifier
DOI: 10.1016/j.neucom.2022.11.070
Abstract
Unsupervised domain adaptation (UDA) aims to improve prediction performance in the target domain under distribution shift from the source domain. The key principle of UDA is to minimize the divergence between the source and target domains. Following this principle, many methods employ a domain discriminator to match the feature distributions. Some recent methods instead evaluate the discrepancy between two predictions on target samples to detect those that deviate from the source distribution. However, their performance is limited because they either match only the marginal distributions or measure the divergence conservatively. In this paper, we present a novel UDA method that learns domain-invariant features by minimizing the domain divergence, and we propose model uncertainty as a measure of that divergence. Our UDA method based on model uncertainty (MUDA) adopts a Bayesian framework and evaluates model uncertainty efficiently by means of Monte Carlo dropout sampling. Experimental results on image recognition tasks show that our method is superior to existing state-of-the-art methods. We also extend MUDA to multi-source domain adaptation problems.
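The abstract names Monte Carlo dropout sampling as the mechanism for estimating model uncertainty on target samples. The following is a minimal PyTorch sketch of that ingredient only; the network architecture, the hyperparameters, and the use of predictive entropy as the uncertainty score are illustrative assumptions (the names `Classifier` and `mc_dropout_uncertainty` are hypothetical), since the abstract does not specify the authors' model or exact uncertainty measure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Classifier(nn.Module):
    """A simple classifier with dropout layers.

    Keeping the dropout layers active at inference time lets us draw
    Monte Carlo samples from the approximate Bayesian posterior over
    predictions, as in standard MC dropout.
    """

    def __init__(self, in_dim=256, hidden=128, num_classes=31, p=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Dropout(p),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Dropout(p),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, x):
        return self.net(x)


def mc_dropout_uncertainty(model, x, n_samples=10):
    """Estimate predictive uncertainty via Monte Carlo dropout.

    Runs n_samples stochastic forward passes with dropout enabled and
    returns the mean class probabilities together with the predictive
    entropy. Entropy is used here as a stand-in uncertainty score; the
    paper's precise measure is not given in the abstract.
    """
    model.train()  # keep dropout active; no weights are updated here
    with torch.no_grad():
        probs = torch.stack(
            [F.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )  # shape: (n_samples, batch, num_classes)
    mean_probs = probs.mean(dim=0)
    entropy = -(mean_probs * torch.log(mean_probs + 1e-8)).sum(dim=-1)
    return mean_probs, entropy


if __name__ == "__main__":
    torch.manual_seed(0)
    model = Classifier()
    target_batch = torch.randn(8, 256)  # stand-in target-domain features
    _, uncertainty = mc_dropout_uncertainty(model, target_batch)
    # High entropy flags target samples that deviate from the source
    # distribution.
    print(uncertainty)
```

In a full MUDA-style training loop, such an uncertainty score on target samples would presumably enter the objective alongside the source classification loss, following the abstract's principle of treating model uncertainty as the domain divergence to be minimized.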