Keywords: Computer science, Pattern, Feature (linguistics), Modality (human-computer interaction), Process (computing), Similarity (geometry), Mutual information, Feature learning, Artificial intelligence, Modal verb, Representation (politics), Pattern recognition (psychology), Image (mathematics), Philosophy, Law, Polymer chemistry, Chemistry, Sociology, Political science, Operating system, Politics, Linguistics, Social science
Authors
Jinghao Huang, Yaxiong Chen, Shengwu Xiong, Xiaoqiang Lu
Identifier
DOI: 10.1109/TGRS.2024.3407857
Abstract
An important challenge that existing work has yet to address is that audio representations carry relatively little information compared to the rich content of remote sensing images, making it easy to overlook certain details in the images. This information imbalance between modalities makes it difficult to maintain consistent representations. To address this challenge, we propose a novel cross-modal remote sensing image-audio (RSIA) retrieval method called Adaptive Learning for Aligning Correlation (ALAC). ALAC integrates region-level learning into image annotation through a region-enhanced learning attention module; by collaboratively suppressing features at different region levels, it provides a more comprehensive visual feature representation. We further propose a novel adaptive knowledge transfer strategy that guides the learning of the front-end network using aligned feature vectors, allowing the model to adaptively acquire alignment information during training and thereby achieve better alignment between the two modalities. Finally, to better exploit mutual information between modalities, we introduce a plug-and-play result reranking module that optimizes the similarity matrix by using cross-modal retrieval mutual information as weights, significantly improving retrieval accuracy. Experimental results on four RSIA datasets demonstrate that ALAC outperforms other methods in retrieval performance, achieving improvements of 1.49%, 2.25%, 4.24%, and 1.33% over state-of-the-art methods on the respective datasets. The code is available at https://github.com/huangjh98/ALAC.
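The abstract describes the reranking module only at a high level. As a rough illustration of the general idea, the sketch below reweights a cross-modal similarity matrix by the agreement between the two retrieval directions (image-to-audio and audio-to-image). This is a minimal, hypothetical sketch, not the authors' actual module; the function names, the softmax-based weighting, and the `alpha` parameter are all assumptions for illustration. The real implementation is in the linked repository.

```python
import numpy as np

def softmax(x, axis):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def rerank(sim, alpha=0.5):
    """Reweight a cross-modal similarity matrix (rows: images, cols: audio)
    by the agreement between the two retrieval directions.

    Hypothetical sketch: a pair scores highly only when the image retrieves
    the audio AND the audio retrieves the image, loosely mirroring the idea
    of using retrieval mutual information between modalities as weights.
    """
    p_i2a = softmax(sim, axis=1)  # image -> audio retrieval distribution
    p_a2i = softmax(sim, axis=0)  # audio -> image retrieval distribution
    weight = p_i2a * p_a2i        # large only where both directions agree
    return sim + alpha * weight   # boost mutually consistent pairs

# Usage: rerank a small random similarity matrix before ranking results.
rng = np.random.default_rng(0)
sim = rng.standard_normal((4, 4))
sim_reranked = rerank(sim)
```

Because the weight term is non-negative, the reranking only boosts scores of mutually consistent pairs; it never demotes a pair below its original similarity.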