Hypersphere-Based Remote Sensing Cross-Modal Text–Image Retrieval via Curriculum Learning

Keywords: Hypersphere · Computer Science · Artificial Intelligence · Feature Learning · Inference · Pattern Recognition (Psychology) · Robustness (Evolution) · Feature Extraction · Feature (Linguistics) · Embedding · MNIST Database · Machine Learning · Deep Learning · Biochemistry · Chemistry · Linguistics · Philosophy · Gene
Authors
W Zhang, Jihao Li, Shuoke Li, Jialiang Chen, Wenkai Zhang, Xin Gao, Xian Sun
Source
Journal: IEEE Transactions on Geoscience and Remote Sensing [Institute of Electrical and Electronics Engineers]
Volume/Issue: 61: 1-15 · Citations: 37
Identifier
DOI: 10.1109/tgrs.2023.3318227
Abstract

Remote sensing cross-modal text-image retrieval (RSCTIR) is a flexible and human-centered approach to retrieving rich information from different modalities, and it has attracted plenty of attention in recent years. It remains challenging because current methods usually ignore the varying difficulty levels of different sample pairs, which stem from the large image distribution difference and the high text similarity in the remote sensing (RS) field. Therefore, in this paper, we propose an innovative hypersphere-based visual semantic alignment (HVSA) network via curriculum learning. Specifically, we first design an adaptive alignment strategy based on curriculum learning, which aligns RS image-text pairs from easy to hard. Sample pairs with different levels of difficulty are treated unequally, and we obtain a better embedding representation when projecting the features onto the unit hypersphere. Then, to measure the robustness of cross-modal feature alignment on the unit hypersphere, we introduce the feature uniformity strategy. It reduces the occurrence of mismatching cases and improves generalization performance. Finally, we design the key-entity attention (KEA) mechanism to alleviate the problem of information imbalance among different modalities. KEA has the ability to extract information about the key entity that is aligned with textual information. Despite its conciseness, our framework achieves state-of-the-art performance on classical RSCTIR datasets while enjoying faster inference. The summed recall of HVSA on RSICD and RSITMD is 120.97 and 198.94, which is 2.50 and 10.49 points ahead of the current best methods, respectively. Extensive experiments demonstrate the competitiveness of our method. The code has been released at https://github.com/ZhangWeihang99/HVSA.
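Two ingredients of the abstract can be made concrete with a small sketch: projecting features onto the unit hypersphere, and a feature uniformity objective that encourages embeddings to spread out over that sphere. The abstract does not give the exact formula, so the uniformity term below follows the standard pairwise Gaussian-potential form (log-mean of exp(-t·||u − v||²) over distinct pairs); the function names, the temperature `t`, and the toy data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-12):
    """Project feature vectors onto the unit hypersphere."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def uniformity_loss(feats, t=2.0):
    """Pairwise uniformity objective on the hypersphere.

    Lower values mean the (already normalized) features are spread
    more uniformly, which reduces crowding and mismatches.
    """
    n = feats.shape[0]
    # Squared Euclidean distances between all pairs of embeddings.
    sq_dists = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(axis=-1)
    # Keep only distinct pairs (upper triangle, excluding the diagonal).
    iu = np.triu_indices(n, k=1)
    return np.log(np.exp(-t * sq_dists[iu]).mean())

# Toy example: 8 random 32-dim features, normalized onto the sphere.
rng = np.random.default_rng(0)
emb = l2_normalize(rng.normal(size=(8, 32)))
loss = uniformity_loss(emb)
```

In a training loop, a term like this would be added to the cross-modal alignment loss so that image and text embeddings occupy the hypersphere evenly rather than collapsing into a few tight clusters.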