Keywords: Domain adaptation; Computer science; Artificial intelligence; Contextual image classification; Pattern recognition (psychology); Algorithm; Image (mathematics); Statistical classification; Domain (mathematical analysis); Mathematics; Classifier (UML); Mathematical analysis
Authors
Ahmad Chaddad, Yihang Wu, Yuchen Jiang, Ahmed Bouridane, Christian Desrosiers
Identifier
DOI: 10.1109/tim.2025.3527531
Abstract
Traditional machine learning assumes that training and test sets are drawn from the same distribution; however, this assumption does not always hold in practical applications. This distribution disparity can lead to severe performance drops when the trained model is applied to new data sets. Domain adaptation (DA) is a machine learning technique that aims to address this problem by reducing the differences between domains. This paper presents simulation-based algorithms of recent DA techniques, mainly related to unsupervised domain adaptation (UDA), where labels are available only in the source domain. Our study compares these techniques on public data sets with diverse characteristics, highlighting their respective strengths and drawbacks. For example, Safe Self-Refinement for Transformer-based DA (SSRT) achieved the highest accuracy (91.6%) on the Office-31 data set in our simulations; however, its accuracy dropped to 72.4% on the Office-Home data set when using limited batch sizes. In addition to improving the reader's comprehension of recent techniques in DA, our study also highlights challenges and upcoming directions for research in this domain. The codes are available at https://github.com/AIPMLab/Domain_Adaptation.
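To illustrate the core idea of reducing the discrepancy between a source and a target distribution, below is a minimal, self-contained sketch of the (biased) squared Maximum Mean Discrepancy (MMD) estimator with a Gaussian kernel, a divergence commonly minimized in UDA methods. This is an illustrative example only, not code from the paper's repository; the function names and sample values are ours.

```python
import math

def rbf_kernel(x, y, gamma=1.0):
    """Gaussian (RBF) kernel between two scalar samples."""
    return math.exp(-gamma * (x - y) ** 2)

def mmd2(source, target, gamma=1.0):
    """Biased estimate of squared Maximum Mean Discrepancy (MMD^2).

    A small value indicates the source and target samples look alike
    under the kernel; UDA methods often minimize such a term to align
    feature distributions across domains.
    """
    k = lambda a, b: rbf_kernel(a, b, gamma)
    xx = sum(k(a, b) for a in source for b in source) / len(source) ** 2
    yy = sum(k(a, b) for a in target for b in target) / len(target) ** 2
    xy = sum(k(a, b) for a in source for b in target) / (len(source) * len(target))
    return xx + yy - 2.0 * xy

# Matching distributions yield a small MMD^2; a shifted target yields a larger one.
src = [0.0, 0.1, 0.2, 0.3]
tgt_same = [0.05, 0.15, 0.25, 0.35]
tgt_shifted = [2.0, 2.1, 2.2, 2.3]
print(mmd2(src, tgt_same) < mmd2(src, tgt_shifted))  # True
```

In practice this term is computed on learned feature embeddings (not raw scalars) and added to the classification loss on the labeled source domain.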