Generalization
Computer science
Domain (mathematical analysis)
Artificial intelligence
Extractor
Feature (linguistics)
Machine learning
Artificial neural network
Pattern recognition (psychology)
Adversarial system
Mathematics
Process engineering
Linguistics
Engineering
Mathematical analysis
Philosophy
Authors
Jianlong Fu, Yuan Zhong, Feng Yang
Identifiers
DOI:10.1109/icarm54641.2022.9959388
Abstract
The performance of deep neural networks deteriorates when the domain representing the underlying data distribution changes between training and testing. Domain generalization aims to learn from multiple source domains so as to generalize to never-before-seen target domains. We propose hybrid domain generalization, which uses a source domain together with multiple latent domains, as a new research scenario, and we attempt to train a generalization model that self-generates latent domain labels. To address this scenario, we use MixStyle to generate latent domain samples, under the assumption that the style of a sample is closely related to its domain. Accordingly, we propose using a Gaussian mixture model (GMM) to cluster latent domains according to style features and to iteratively assign pseudo domain labels before introducing them into adversarial training. By exploiting image style features, our method synthesizes latent domains and achieves adversarial domain generalization without latent domain labels. In addition, since the original domain labels are underutilized, the method introduces an auxiliary feature extractor to further improve performance. Experiments demonstrate that our method achieves excellent generalization performance and outperforms classical domain generalization methods.
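The two core ingredients of the abstract — MixStyle-style mixing of feature statistics to synthesize latent domain samples, and a style descriptor that a GMM could cluster into pseudo domain labels — can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the function names, the Beta-distribution parameter `alpha`, and the use of per-channel mean/std as the style feature follow the published MixStyle formulation, while the clustering step itself (e.g. `sklearn.mixture.GaussianMixture` fit on the style vectors) is only indicated in a comment.

```python
import numpy as np

def mixstyle(x, alpha=0.1, rng=None):
    """Mix per-channel feature statistics (mean/std) between random pairs
    of samples in a batch, synthesizing novel 'latent domain' styles.

    x: feature maps of shape (batch, channels, height, width).
    alpha: Beta(alpha, alpha) parameter controlling the mixing weight.
    """
    rng = rng or np.random.default_rng(0)
    b = x.shape[0]
    mu = x.mean(axis=(2, 3), keepdims=True)            # per-channel mean
    sig = x.std(axis=(2, 3), keepdims=True) + 1e-6     # per-channel std
    x_norm = (x - mu) / sig                            # style-normalized content
    lam = rng.beta(alpha, alpha, size=(b, 1, 1, 1))    # mixing weights
    perm = rng.permutation(b)                          # random pairing
    mu_mix = lam * mu + (1 - lam) * mu[perm]
    sig_mix = lam * sig + (1 - lam) * sig[perm]
    return sig_mix * x_norm + mu_mix                   # re-stylized features

def style_vector(x):
    """Concatenate per-channel mean and std as a style descriptor.
    Fitting a GMM on these vectors (e.g. sklearn GaussianMixture) and
    taking each sample's most likely component would yield the pseudo
    domain labels used for adversarial training."""
    return np.concatenate([x.mean(axis=(2, 3)), x.std(axis=(2, 3))], axis=1)
```

With `alpha` small (the MixStyle default of 0.1), most sampled weights lie near 0 or 1, so mixed samples stay close to one of the two original styles while still interpolating between domains.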