S4DL: Shift-Sensitive Spatial–Spectral Disentangling Learning for Hyperspectral Image Unsupervised Domain Adaptation

Keywords: Hyperspectral imaging · Adaptation (eye) · Domain adaptation · Pattern recognition (psychology) · Artificial intelligence · Unsupervised learning · Computer science · Physics · Optics · Classifier (UML)
Authors
Jie Feng, Tianshu Zhang, Junpeng Zhang, Ronghua Shang, Weisheng Dong, Guangming Shi, Licheng Jiao
Source
Journal: IEEE Transactions on Neural Networks and Learning Systems [Institute of Electrical and Electronics Engineers]
Pages: 1-15
Identifier
DOI: 10.1109/tnnls.2025.3556386
Abstract

Unsupervised domain adaptation (UDA) techniques, extensively studied in hyperspectral image (HSI) classification, aim to use labeled source-domain data and unlabeled target-domain data to learn domain-invariant features for cross-scene classification. Compared with natural images, the numerous spectral bands of HSIs provide abundant semantic information, but they also significantly increase the domain shift. Most existing methods, whether based on explicit or implicit alignment, simply align feature distributions and ignore the domain information carried in the spectrum. We observed that when the spectral channels of the source and target domains differ markedly, the transfer performance of these methods tends to deteriorate. In addition, their performance fluctuates greatly owing to the varying domain shifts across datasets. To address these problems, a novel shift-sensitive spatial-spectral disentangling learning (S4DL) approach is proposed. In S4DL, gradient-guided spatial-spectral decomposition (GSSD) is designed to separate domain-specific and domain-invariant representations by generating tailored masks under the guidance of the gradient from domain classification. A shift-sensitive adaptive monitor is defined to adjust the intensity of disentangling according to the magnitude of the domain shift. Furthermore, a reversible neural network is constructed to retain domain information that lies not only in the semantics but also in shallow-level details. Extensive experiments on several cross-scene HSI datasets consistently verify that S4DL outperforms state-of-the-art UDA methods. Our source code will be available at https://github.com/xdu-jjgs/IEEE_TNNLS_S4DL.
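To make the two mechanisms named in the abstract concrete, here is a minimal numpy sketch of (1) gradient-guided channel masking, splitting feature channels into domain-specific and domain-invariant groups by the magnitude of the domain-classification gradient, and (2) a shift-sensitive weight that scales disentangling intensity with an estimate of domain shift. The function names, the top-k thresholding rule, and the mean-feature-distance shift estimate are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def gradient_guided_masks(grad_mag, keep_ratio=0.3):
    """Split channels by |dL_domain/df|: the channels with the largest
    domain-classification gradient are treated as domain-specific, the
    rest as domain-invariant (an assumed top-k rule, for illustration)."""
    n_specific = max(1, int(keep_ratio * grad_mag.size))
    order = np.argsort(grad_mag)[::-1]      # channels, descending gradient magnitude
    specific = np.zeros(grad_mag.size, dtype=bool)
    specific[order[:n_specific]] = True
    return specific, ~specific              # (domain-specific, domain-invariant)

def shift_weight(src_feats, tgt_feats, scale=1.0):
    """Crude domain-shift estimate: distance between mean feature vectors,
    squashed into [0, 1) so it can scale a disentangling loss term."""
    shift = np.linalg.norm(src_feats.mean(axis=0) - tgt_feats.mean(axis=0))
    return float(np.tanh(scale * shift))

rng = np.random.default_rng(0)
grad_mag = rng.random(16)                   # per-channel gradient magnitudes
spec, inv = gradient_guided_masks(grad_mag, keep_ratio=0.25)
src = rng.normal(0.0, 1.0, size=(64, 16))   # source-domain features
tgt = rng.normal(0.5, 1.0, size=(64, 16))   # shifted target-domain features
w = shift_weight(src, tgt)
print(spec.sum(), inv.sum(), round(w, 3))
```

In this sketch, a larger estimated shift drives `w` toward 1, so the disentangling loss would be weighted more heavily on dataset pairs with severe spectral discrepancy, matching the "shift-sensitive" behavior the abstract describes.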