Artificial intelligence
Domain adaptation
Machine learning
Deep learning
Pattern recognition (psychology)
Artificial neural network
Domain (mathematical analysis)
Transfer of learning
Feature (linguistics)
Unsupervised learning
Authors
Shuai Yang, Yuhong Zhang, Hao Wang, Peipei Li, Xuegang Hu
Identifier
DOI:10.1016/j.eswa.2020.113635
Abstract
Domain adaptation aims to apply knowledge obtained from a labeled source domain to an unseen target domain drawn from a different distribution. Recently, domain adaptation approaches based on autoencoders have achieved promising performance. However, almost all of these approaches ignore the potential relationships among intra-domain features, which can be used to further reduce the distribution discrepancy between the source and target domains. Furthermore, most of them depend on a single autoencoder model, which makes it difficult to extract multiple characteristics of the data. To address these issues, in this paper we propose a new representation learning method based on a serial robust autoencoder for domain adaptation, named SERA. SERA first enriches intra-domain knowledge by mining the potential relationships of features in the source domain and the target domain, respectively. Then, SERA learns domain-invariant representations by serially connecting two newly proposed autoencoder models: a marginalized denoising autoencoder via adaptation regularization (AMDA) and a robust autoencoder via graph regularization (GRA). Extensive experiments on four public datasets demonstrate the effectiveness of the proposed method.
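The AMDA component builds on the marginalized denoising autoencoder, which avoids explicit noise sampling by marginalizing feature corruption in closed form. The abstract does not give SERA's equations, so the sketch below shows only the standard single-layer marginalized denoising autoencoder (in the style of Chen et al.'s mSDA) that such approaches typically extend; the function name and parameters are illustrative, not the authors' implementation.

```python
import numpy as np

def mda_layer(X, p=0.5, reg=1e-5):
    """One marginalized denoising autoencoder layer (mSDA-style sketch).

    X   : (d, n) data matrix, features x samples
    p   : feature corruption probability, marginalized out in closed form
    reg : small ridge term for numerical stability
    Returns (W, H): the (d, d+1) reconstruction mapping and the
    nonlinear hidden representation tanh(W @ Xb).
    """
    d, n = X.shape
    Xb = np.vstack([X, np.ones((1, n))])   # append a bias row
    q = np.full(d + 1, 1.0 - p)            # survival probability per feature
    q[-1] = 1.0                            # bias feature is never corrupted
    S = Xb @ Xb.T                          # scatter matrix of the data
    Q = S * np.outer(q, q)                 # E[x_tilde x_tilde^T], off-diagonal
    np.fill_diagonal(Q, q * np.diag(S))    # diagonal scales by q_i, not q_i^2
    P = S[:d, :] * q[np.newaxis, :]        # E[x x_tilde^T]
    W = P @ np.linalg.inv(Q + reg * np.eye(d + 1))
    H = np.tanh(W @ Xb)
    return W, H
```

In domain-adaptation use, source and target samples are concatenated column-wise before calling such a layer, so the learned mapping reconstructs both domains jointly; SERA's contribution, per the abstract, is to serialize two regularized variants of this idea (AMDA, then GRA) rather than rely on one model.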