Computer science
Entropy (arrow of time)
Cross entropy
Consistency (knowledge base)
Domain adaptation
Artificial intelligence
Data mining
Algorithm
Machine learning
Pattern recognition (psychology)
Quantum mechanics
Classifier (UML)
Physics
Authors
Viraj Prabhu, Shivam Khare, Deeksha Kartik, Judy Hoffman
Identifier
DOI: 10.1109/iccv48922.2021.00844
Abstract
Many existing approaches for unsupervised domain adaptation (UDA) focus on adapting under only data distribution shift and offer limited success under additional cross-domain label distribution shift. Recent work based on self-training using target pseudolabels has shown promise, but on challenging shifts pseudolabels may be highly unreliable and using them for self-training may lead to error accumulation and domain misalignment. We propose Selective Entropy Optimization via Committee Consistency (SENTRY), a UDA algorithm that judges the reliability of a target instance based on its predictive consistency under a committee of random image transformations. Our algorithm then selectively minimizes predictive entropy to increase confidence on highly consistent target instances, while maximizing predictive entropy to reduce confidence on highly inconsistent ones. In combination with pseudolabel-based approximate target class balancing, our approach leads to significant improvements over the state-of-the-art on 27/31 domain shifts from standard UDA benchmarks as well as benchmarks designed to stress-test adaptation under label distribution shift. Our code is available at https://github.com/virajprabhu/SENTRY.
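The abstract describes SENTRY's core mechanism: a committee of randomly transformed views votes on each target instance, and predictive entropy is then minimized on instances the committee agrees on and maximized on those it disagrees on. Below is a minimal PyTorch-style sketch of that selective entropy objective, written from the abstract alone; the function names (sentry_selective_entropy, predictive_entropy), the majority-vote threshold, and the committee interface are illustrative assumptions, not the authors' implementation, which is available at the linked repository.

```python
# Minimal sketch of a SENTRY-style selective entropy objective, based only
# on the abstract above. All names and the agreement threshold are
# illustrative assumptions; see https://github.com/virajprabhu/SENTRY for
# the authors' actual implementation.
import torch
import torch.nn.functional as F


def predictive_entropy(logits):
    """Shannon entropy of the softmax prediction, one value per instance."""
    probs = F.softmax(logits, dim=1)
    log_probs = F.log_softmax(logits, dim=1)
    return -(probs * log_probs).sum(dim=1)


def sentry_selective_entropy(model, target_images, committee_transforms):
    """Selective entropy optimization via committee consistency (sketch).

    Each committee member is a random image transformation assumed to
    operate on a whole batch. A target instance is treated as consistent
    when a majority of transformed views predict the same class as the
    untransformed view; its entropy is then minimized (raising confidence),
    while inconsistent instances have their entropy maximized (lowering
    confidence on unreliable targets).
    """
    logits = model(target_images)          # (N, C) predictions on clean views
    base_pred = logits.argmax(dim=1)       # reference class for the committee

    # Committee vote: does each transformed view agree with the clean view?
    votes = []
    for transform in committee_transforms:
        aug_logits = model(transform(target_images))
        votes.append(aug_logits.argmax(dim=1) == base_pred)
    agreement = torch.stack(votes, dim=0).float().mean(dim=0)  # (N,) in [0, 1]

    consistent = agreement > 0.5           # majority agreement (assumed rule)
    entropy = predictive_entropy(logits)

    # Minimize entropy on consistent instances, maximize it on inconsistent ones.
    loss = entropy[consistent].sum() - entropy[~consistent].sum()
    return loss / max(len(target_images), 1)
```

The 0.5 agreement threshold and the simple sum-difference form of the loss are placeholder design choices for illustration; the released code, which also includes the pseudolabel-based target class balancing mentioned in the abstract, should be treated as the authoritative reference.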