Keywords
Computer science, Feature (linguistics), Artificial intelligence, Convolutional neural network, Embedding, Block (permutation group theory), Feature vector, Feature learning, Exploit, Modal verb, Pattern recognition (psychology), Image retrieval, Computer vision, Deep learning, Image (mathematics), Philosophy, Linguistics, Chemistry, Geometry, Mathematics, Computer security, Polymer chemistry
Authors
Xu Tang, Yijing Wang, Jingjing Ma, Xiangrong Zhang, Fang Liu, Licheng Jiao
Identifier
DOI: 10.1109/TGRS.2023.3280546
Abstract
Cross-modal remote sensing image-text retrieval (CMRSITR) is a challenging topic in the remote sensing (RS) community. It has gained growing attention because it can be flexibly used in many practical applications. In the deep learning era, many successful CMRSITR methods built on deep convolutional neural networks (DCNNs) have been proposed. Most of them first learn useful features from RS images and texts separately, and then map the resulting visual and textual features into a common space for the final retrieval. Although feasible, this pipeline leaves two difficulties unsolved. One is that the semantics of the visual and textual features are misaligned because the features are learned independently. The other is that simple common-space mapping cannot fully explore the deep links between RS images and texts. To overcome these challenges, we propose a new model, the interacting-enhancing feature transformer (IEFT), for CMRSITR, which treats RS images and texts as a whole. First, a simple feature embedding module (FEM) maps images and texts into the visual and textual feature spaces. Second, an information interacting-enhancing module (IIEM) simultaneously models the inner relationships between RS images and texts and enhances the visual features. IIEM consists of three feature interacting-enhancing (FIE) blocks, each containing an inter-modality relationship interacting (IMRI) sub-block and a visual feature enhancing (VFE) sub-block. IMRI exploits the hidden relations between the cross-modal data, while VFE improves the visual features. Combining them mitigates the semantic bias and allows the complex contents of RS images to be studied. Finally, a retrieval module (RM) generates the matching scores that decide the search results. Extensive experiments on four public RS data sets demonstrate that IEFT achieves superior retrieval performance compared with many existing methods. Our source code is available at https://github.com/TangXu-Group/Cross-modal-remote-sensing-image-and-text-retrieval-models/tree/main/IEFT.
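To make the described pipeline concrete, here is a minimal PyTorch-style sketch of an FEM → IIEM → RM flow as outlined in the abstract. It is an illustrative reading only: the class names, feature dimensions, and the use of standard multi-head cross- and self-attention to stand in for the IMRI and VFE sub-blocks are all assumptions, not the authors' implementation (for that, see the GitHub repository linked above).

```python
import torch
import torch.nn as nn

class FIEBlock(nn.Module):
    """One feature interacting-enhancing (FIE) block, approximated with
    standard attention: an IMRI step (cross-modal interaction) followed
    by a VFE step (visual feature enhancement). Illustrative only."""
    def __init__(self, d_model=512, n_heads=8):
        super().__init__()
        # IMRI (assumed): visual tokens attend to textual tokens.
        self.imri = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # VFE (assumed): self-attention plus an MLP refines visual features.
        self.vfe = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(d_model, 4 * d_model),
                                 nn.GELU(),
                                 nn.Linear(4 * d_model, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.norm3 = nn.LayerNorm(d_model)

    def forward(self, vis, txt):
        # IMRI: exploit hidden relations between the two modalities.
        attn, _ = self.imri(vis, txt, txt)
        vis = self.norm1(vis + attn)
        # VFE: enhance the interacted visual features.
        attn, _ = self.vfe(vis, vis, vis)
        vis = self.norm2(vis + attn)
        return self.norm3(vis + self.mlp(vis))

class IEFTSketch(nn.Module):
    """Hypothetical end-to-end flow: FEM embeds both modalities, IIEM
    stacks three FIE blocks, RM scores each image-text pair."""
    def __init__(self, d_model=512, n_blocks=3):
        super().__init__()
        # FEM (assumed dims): project backbone features to a shared width;
        # the DCNN image backbone and text encoder are omitted here.
        self.vis_embed = nn.Linear(2048, d_model)  # assumed DCNN feature size
        self.txt_embed = nn.Linear(768, d_model)   # assumed text feature size
        # IIEM: three FIE blocks, as stated in the abstract.
        self.iiem = nn.ModuleList([FIEBlock(d_model) for _ in range(n_blocks)])
        # RM (assumed): a linear head on pooled joint features.
        self.rm = nn.Linear(2 * d_model, 1)

    def forward(self, vis_feats, txt_feats):
        vis = self.vis_embed(vis_feats)  # (B, Nv, d_model)
        txt = self.txt_embed(txt_feats)  # (B, Nt, d_model)
        for block in self.iiem:
            vis = block(vis, txt)
        joint = torch.cat([vis.mean(dim=1), txt.mean(dim=1)], dim=-1)
        return self.rm(joint).squeeze(-1)  # one matching score per pair

# Usage example with random stand-in backbone features.
scores = IEFTSketch()(torch.randn(4, 49, 2048), torch.randn(4, 20, 768))
print(scores.shape)  # torch.Size([4])
```

At retrieval time, such per-pair matching scores would be ranked to decide the search results; how the actual model pools features and computes scores may differ from this sketch.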