Keywords
Hyperspectral imaging
Adaptation
Domain adaptation
Pattern recognition
Artificial intelligence
Unsupervised learning
Computer science
Physics
Optics
Classifier
Authors
Jie Feng,Tianshu Zhang,Junpeng Zhang,Ronghua Shang,Weisheng Dong,Guangming Shi,Licheng Jiao
Identifier
DOI:10.1109/tnnls.2025.3556386
Abstract
Unsupervised domain adaptation (UDA) techniques, extensively studied in hyperspectral image (HSI) classification, aim to use labeled source-domain data and unlabeled target-domain data to learn domain-invariant features for cross-scene classification. Compared with natural images, the numerous spectral bands of HSIs provide abundant semantic information, but they also increase the domain shift significantly. Most existing methods, whether based on explicit or implicit alignment, simply align feature distributions and ignore the domain information carried in the spectrum. We observe that when the spectral channels of the source and target domains differ markedly, the transfer performance of these methods tends to deteriorate. Moreover, their performance fluctuates greatly because the magnitude of domain shift varies across datasets. To address these problems, a novel shift-sensitive spatial-spectral disentangling learning (S4DL) approach is proposed. In S4DL, gradient-guided spatial-spectral decomposition (GSSD) is designed to separate domain-specific and domain-invariant representations by generating tailored masks under the guidance of gradients from domain classification. A shift-sensitive adaptive monitor is defined to adjust the intensity of disentangling according to the magnitude of the domain shift. Furthermore, a reversible neural network is constructed to retain domain information that lies not only in semantic features but also in shallow-level details. Extensive experiments on several cross-scene HSI datasets consistently verify that S4DL outperforms state-of-the-art UDA methods. Our source code will be available at https://github.com/xdu-jjgs/IEEE_TNNLS_S4DL.
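As a rough illustration of the gradient-guided masking idea described in the abstract, the sketch below uses gradients from a small domain classifier to split spectral feature channels into domain-specific and domain-invariant groups. This is not the authors' S4DL implementation (see the linked repository for the official code); the network sizes, the band count num_bands, the keep_ratio threshold, and the thresholding rule are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

num_bands = 103  # assumed HSI band count (illustrative only)

# Toy feature extractor and domain classifier; the real networks differ.
feat_extractor = nn.Sequential(
    nn.Linear(num_bands, 64),
    nn.ReLU(),
    nn.Linear(64, num_bands),
)
domain_clf = nn.Linear(num_bands, 2)  # source (0) vs. target (1)

def gradient_guided_mask(x, domain_label, keep_ratio=0.7):
    """Return a 0/1 mask over feature channels.

    Channels whose domain-classification gradient magnitude is large are
    treated as domain-specific (mask 0); the remaining channels are kept
    as domain-invariant (mask 1). The keep_ratio rule is an assumption.
    """
    feats = feat_extractor(x)
    feats.retain_grad()                        # keep gradients on this non-leaf tensor
    loss = F.cross_entropy(domain_clf(feats), domain_label)
    loss.backward()
    grad_mag = feats.grad.abs().mean(dim=0)    # per-channel sensitivity to the domain
    k = max(1, int(keep_ratio * grad_mag.numel()))
    thresh = grad_mag.kthvalue(k).values       # keep the k least domain-sensitive channels
    return (grad_mag <= thresh).float()

# Usage on synthetic data: suppress domain-specific channels before classification.
x = torch.rand(16, num_bands)                  # a batch of spectra
d = torch.randint(0, 2, (16,))                 # domain labels
mask = gradient_guided_mask(x, d)
invariant_feats = feat_extractor(x) * mask     # domain-invariant part of the features
```

The per-channel mask here is a simplification; the paper additionally describes a shift-sensitive adaptive monitor and a reversible network, which are not shown in this sketch.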