Computer science
Artificial intelligence
Pattern recognition (psychology)
Image processing
Graphics
Graph theory
Image segmentation
Contextual image classification
Image (mathematics)
Mathematics
Theoretical computer science
Combinatorics
Authors
Wei Wang, Mengzhu Wang, Chao Huang, Cong Wang, Jie Mu, Feiping Nie, Xiaochun Cao
Identifier
DOI: 10.1109/tip.2025.3526380
Abstract
Label propagation (LP) is a popular semi-supervised learning technique that propagates labels from a training dataset to a test dataset over a similarity graph, assuming that nearby samples should have similar labels. However, the recent cross-domain setting assumes that the training (source domain) and test (target domain) datasets follow different distributions, which may unexpectedly degrade the performance of LP because the similarity weights connecting the two domains are small. To address this problem, we propose an approach called optimal graph learning based label propagation (OGL2P), which optimizes one cross-domain graph and two intra-domain graphs to connect the two domains and to preserve domain-specific structures, respectively. During label propagation, the cross-domain graph draws two labels close if the corresponding samples are nearby in feature space but belong to different domains, while an intra-domain graph pulls two labels close if the samples are nearby in feature space and belong to the same domain. This makes label propagation less sensitive to the cross-domain problem. During graph embedding, the three graphs bring two samples close in an embedded subspace if they are nearby and belong to the same class, which makes the feature representations of the two domains in the embedded subspace domain-invariant and locally discriminative. Moreover, we optimize the three graphs using both features and labels in the embedded subspace, making them locally discriminative and robust to feature noise. Finally, extensive experiments on five cross-domain image classification datasets verify that OGL2P outperforms several state-of-the-art cross-domain approaches.
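The abstract's starting point, classic label propagation on a similarity graph, can be sketched as follows. This is a minimal illustration of the standard LP iteration (normalized similarity matrix plus label anchoring), not the OGL2P method itself; the similarity matrix `W`, label matrix `Y`, and parameter names are illustrative assumptions.

```python
import numpy as np

def label_propagation(W, Y, alpha=0.9, n_iter=100):
    """Classic label propagation on a similarity graph (illustrative sketch).

    W : (n, n) symmetric non-negative similarity matrix
    Y : (n, c) one-hot rows for labeled samples, zero rows for unlabeled ones
    Returns a soft label matrix F of shape (n, c).
    """
    d = W.sum(axis=1)
    # symmetrically normalize W: S = D^{-1/2} W D^{-1/2}
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = D_inv_sqrt @ W @ D_inv_sqrt
    F = Y.astype(float).copy()
    for _ in range(n_iter):
        # propagate labels along edges, while anchoring labeled samples to Y
        F = alpha * (S @ F) + (1 - alpha) * Y
    return F

# toy graph: 4 samples, the first two labeled (classes 0 and 1)
W = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
Y = np.array([[1, 0], [0, 1], [0, 0], [0, 0]], dtype=float)
F = label_propagation(W, Y)
pred = F.argmax(axis=1)  # → [0, 1, 0, 1]
```

Each unlabeled sample inherits the label of its graph neighborhood. The failure mode OGL2P targets is visible here: if source and target samples are connected only by tiny weights in `W`, almost no label mass crosses the domain gap, which is why the paper learns a dedicated cross-domain graph.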