Discriminative
Computer science
Domain adaptation
Artificial intelligence
Divergence
Manifold
Nonlinear dimensionality reduction
Pattern recognition
Adaptation
Domain
Manifold alignment
Feature
Invariant
Distribution
Machine learning
Mathematics
Dimensionality reduction
Physics
Engineering
Mechanical engineering
Mathematical analysis
Philosophy
Linguistics
Optics
Classifier
Mathematical physics
Authors
Siya Yao,Qi Kang,MengChu Zhou,Muhyaddin Rawa,Aiiad Albeshri
Identifier
DOI:10.1109/tsmc.2022.3195239
Abstract
Domain adaptation (DA) aims to accomplish tasks on unlabeled target data by learning and transferring knowledge from related source domains. In order to learn a discriminative and domain-invariant model, a critical step is to align source and target data well and thus reduce their distribution divergence. However, existing DA methods mainly align the global feature distributions in a distorted original space, which neglects fine-grained local information and intrinsic geometrical structures. Moreover, some methods rely heavily on pseudo-labels to align features, which may undermine adaptation performance and lead to negative transfer. We propose an efficient discriminative manifold distribution alignment (DMDA) approach, which improves feature transferability by aligning both global and local distributions and refines a discriminative model by learning geometrical structures in manifold space. In addition, when learning geometrical structures, DMDA is exempt from the uncertainty and error introduced by pseudo-labels of a target domain. DMDA is concise and efficient to implement, as it integrates its learning steps and obtains solutions directly. Extensive experiments on 68 DA tasks from seven benchmarks and subsequent analyses show that DMDA outperforms the compared methods in both classification accuracy and time efficiency, thus representing a significant advance in the DA field.
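The core idea the abstract describes is reducing the distribution divergence between source and target features. A standard surrogate for that divergence in the DA literature is the Maximum Mean Discrepancy (MMD); the minimal sketch below computes an empirical linear-kernel MMD between two feature matrices. It is an illustrative example of the general alignment objective, not the authors' exact DMDA formulation, and the function name `mmd_linear` is our own.

```python
import numpy as np

def mmd_linear(Xs: np.ndarray, Xt: np.ndarray) -> float:
    """Empirical linear-kernel Maximum Mean Discrepancy.

    A common surrogate for the source/target distribution divergence
    that DA methods seek to reduce (sketch only; DMDA's objective also
    involves local and manifold-structure terms not shown here).
    """
    # Difference between mean feature embeddings of the two domains.
    delta = Xs.mean(axis=0) - Xt.mean(axis=0)
    # Squared Euclidean norm of the mean difference = linear-kernel MMD^2.
    return float(delta @ delta)

# Usage: a shifted target domain yields a larger divergence estimate.
rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, size=(500, 4))        # source features
Xt_close = rng.normal(0.0, 1.0, size=(500, 4))  # target, same distribution
Xt_far = rng.normal(2.0, 1.0, size=(500, 4))    # target, shifted distribution
print(mmd_linear(Xs, Xt_close), mmd_linear(Xs, Xt_far))
```

Alignment-based DA methods typically minimize a quantity like this (often with richer kernels) while training the feature transformation, so that a classifier learned on source features transfers to the target.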