Authors
Kaiyang Zhou, Yongxin Yang, Timothy M. Hospedales, Tao Xiang
Identifier
DOI: 10.1007/978-3-030-58517-4_33
Abstract
This paper focuses on domain generalization (DG), the task of learning from multiple source domains a model that generalizes well to unseen domains. A main challenge for DG is that the available source domains often exhibit limited diversity, hampering the model's ability to learn to generalize. We therefore employ a data generator to synthesize data from pseudo-novel domains to augment the source domains. This explicitly increases the diversity of available training domains and leads to a more generalizable model. To train the generator, we model the distribution divergence between source and synthesized pseudo-novel domains using optimal transport, and maximize the divergence. To ensure that semantics are preserved in the synthesized data, we further impose cycle-consistency and classification losses on the generator. Our method, L2A-OT (Learning to Augment by Optimal Transport), outperforms current state-of-the-art DG methods on four benchmark datasets.
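The abstract's core idea is to measure how far a synthesized pseudo-novel domain has moved from a source domain using an optimal-transport distance, and to train the generator to maximize it. The sketch below is not the authors' implementation; it is a minimal, self-contained illustration of an entropically regularized OT cost (Sinkhorn iterations) between two feature clouds, assuming uniform weights and toy 2-D Gaussian "features". A shifted distribution should yield a larger OT cost than a distribution compared with itself.

```python
import numpy as np

def sinkhorn_ot_cost(X, Y, eps=1.0, n_iters=200):
    """Entropic-regularized OT cost between two point clouds (uniform weights)."""
    # Pairwise squared Euclidean cost matrix.
    C = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    K = np.exp(-C / eps)                      # Gibbs kernel
    a = np.full(len(X), 1.0 / len(X))         # uniform source weights
    b = np.full(len(Y), 1.0 / len(Y))         # uniform target weights
    u = np.ones_like(a)
    for _ in range(n_iters):                  # Sinkhorn scaling iterations
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]           # approximate transport plan
    return float(np.sum(P * C))               # transport cost under the plan

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(64, 2))      # "source domain" features (toy)
novel = rng.normal(3.0, 1.0, size=(64, 2))    # "pseudo-novel domain" features (toy)

# The shifted cloud is farther from the source in OT cost,
# which is the quantity the generator would be trained to increase.
print(sinkhorn_ot_cost(src, novel) > sinkhorn_ot_cost(src, src))
```

In the paper's setting this cost would be computed on learned features and backpropagated through the generator; the snippet only shows the divergence being measured, not the full training loop with cycle-consistency and classification losses.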