Concepts
Conditional probability distribution
Computer science
Domain (mathematical analysis)
Marginal distribution
Artificial intelligence
Pairwise comparison
Domain adaptation
Feature (linguistics)
Linear regression
Similarity (geometry)
Machine learning
Data mining
Pattern recognition (psychology)
Mathematics
Statistics
Image (mathematics)
Random variable
Mathematical analysis
Philosophy
Classifier (UML)
Linguistics
Authors
Zahra Taghiyarrenani,Sławomir Nowaczyk,Sepideh Pashami,Mohamed-Rafik Bouguelia
Identifiers
DOI:10.1016/j.eswa.2023.119907
Abstract
Domain adaptation (DA) methods facilitate cross-domain learning by minimizing the marginal or conditional distribution shift between domains. However, the conditional distribution shift is not well addressed by existing DA techniques for the cross-domain regression learning task. In this paper, we propose the Multi-Domain Adaptation for Regression under Conditional shift (DARC) method. DARC constructs a shared feature space such that linear regression on top of that space generalizes to all domains. In other words, DARC aligns different domains according to the task-related information encoded in the values of the dependent variable. This is achieved using a novel Pairwise Similarity Preserver (PSP) loss function. PSP encourages the difference between the outcomes of any two samples, regardless of their domain(s), to match the distance between these samples in the constructed space. We perform experiments in both two-domain and multi-domain settings. The two-domain setting is helpful especially when one domain contains few labeled samples and can benefit from adaptation to a domain with many labeled samples. The multi-domain setting allows several domains, each with limited data, to be adapted collectively; thus, multiple domains compensate for each other's lack of data. The results of all experiments, conducted on both synthetic and real-world datasets, confirm the effectiveness of DARC.
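The abstract describes the PSP loss as matching the difference between any two samples' outcomes to the distance between those samples in the learned feature space. A minimal sketch of one plausible reading of that objective is below; the function name `psp_loss`, the use of Euclidean distance, and the squared-error penalty are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def psp_loss(z, y):
    """Sketch of a Pairwise Similarity Preserver (PSP) style loss.

    Assumption: penalize the squared mismatch between pairwise feature
    distances ||z_i - z_j|| and pairwise target differences |y_i - y_j|,
    pooled over all samples regardless of which domain they come from.
    """
    z = np.asarray(z, dtype=float)   # (n, d) features in the shared space
    y = np.asarray(y, dtype=float)   # (n,) regression targets
    # Pairwise Euclidean distances in the constructed feature space.
    diff = z[:, None, :] - z[None, :, :]
    feat_dist = np.sqrt((diff ** 2).sum(axis=-1))
    # Pairwise absolute differences of the dependent variable.
    target_dist = np.abs(y[:, None] - y[None, :])
    # Mean squared mismatch over all ordered pairs (diagonal is zero).
    return float(((feat_dist - target_dist) ** 2).mean())
```

Under this reading, the loss is zero exactly when the embedding reproduces the target geometry, e.g. 1-D features placed at their own target values.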