Topics
Computer science
Artificial intelligence
Pattern recognition (psychology)
Domain (mathematical analysis)
Mathematics
Mathematical analysis
Adaptation (eye)
Optics
Physics
Authors
Aveen Dayal,S. Shrusti,Linga Reddy Cenkeramaddi,C. Krishna Mohan,Abhinav Kumar
Identifier
DOI: 10.1109/tip.2025.3532094
Abstract
In the conventional Domain Adaptation (DA) setting there is only one source and one target domain, whereas in many real-world applications data is collected from several related sources under different conditions. This has led to a more practical and challenging knowledge-transfer problem called Multi-source Domain Adaptation (MDA). Several methodologies, such as prototype matching, explicit distance discrepancy, and adversarial learning, have been considered for the MDA problem in recent years. Among them, the adversarial learning framework is a popular approach for transferring knowledge from multiple sources to the target domain via a min-max optimization strategy. Despite advances in adversarial methods, several limitations remain, such as the need for a classifier-aware discrepancy metric to align the domains and the need to account for the consistency and semantic information of target samples during alignment. To mitigate these issues, we propose a novel adversarial MDA algorithm, MDAMA, which aligns the target domain with a mixture distribution composed of the source domains. MDAMA uses a margin-based discrepancy and augmented intermediate distributions to align the domains effectively. We also enforce consistency of target samples through confidence thresholding and transfer semantic information from the multiple source domains to the augmented target domain to further improve target performance. We evaluate MDAMA extensively on popular real-world MDA benchmarks, including OfficeHome, Office31, PACS, Office-Caltech, and DomainNet, and demonstrate top performance on all of them.
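The confidence-thresholding idea mentioned in the abstract can be illustrated with a minimal sketch: target samples are kept for consistency/pseudo-label training only when the classifier's top softmax probability exceeds a threshold. The function names, the threshold value `tau`, and the toy logits below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def confident_pseudo_labels(logits, tau=0.9):
    # Keep only target samples whose maximum class probability
    # exceeds tau; return their indices and pseudo-labels.
    probs = softmax(np.asarray(logits, dtype=float))
    keep = np.where(probs.max(axis=1) >= tau)[0]
    return keep, probs[keep].argmax(axis=1)

# Toy target-batch logits for a 3-class problem.
logits = [[4.0, 0.1, 0.2],   # confident -> pseudo-label 0
          [0.5, 0.6, 0.4],   # near-uniform -> dropped
          [0.1, 0.2, 5.0]]   # confident -> pseudo-label 2
idx, labels = confident_pseudo_labels(logits, tau=0.9)
print(idx.tolist(), labels.tolist())  # [0, 2] [0, 2]
```

Filtering this way trades coverage for label quality: a higher `tau` admits fewer but more reliable target samples into the alignment objective.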