Keywords
Closed captioning, computer science, focus (optics), benchmark (surveying), semantic computing, artificial intelligence, modal verb, semantics (computer science), semantic similarity, natural language processing, information retrieval, image (mathematics), Semantic Web, physics, chemistry, geodesy, polymer chemistry, optics, programming language, geography
Authors
Yunpeng Liu, Xiangrong Zhang, Xina Cheng, Xu Tang, Licheng Jiao
Identifier
DOI:10.1016/j.patcog.2023.109893
Abstract
Tremendous progress has been made on the remote sensing image captioning (RSIC) task in recent years, yet some problems remain unresolved: (1) bridging the gap between visual features and semantic concepts, and (2) reasoning about the higher-level relationships between semantic concepts. In this work, we focus on injecting high-level visual-semantic interaction into the RSIC model. First, an end-to-end trainable semantic concept extractor (SCE) precisely captures the semantic concepts contained in the RSIs. In particular, a visual-semantic co-attention (VSCA) module is designed to obtain coarse concept-related regions and region-related concepts for multi-modal interaction. Furthermore, we incorporate these two types of attentive vectors, together with semantic-level relational features, into a consensus exploitation (CE) block to learn cross-modal consensus-aware knowledge. Experiments on three benchmark data sets show the superiority of our approach over the reference methods.
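The abstract names a visual-semantic co-attention (VSCA) step that couples image-region features with extracted concept embeddings in both directions. The paper's actual implementation is not given here; the snippet below is a minimal PyTorch sketch of generic cross-modal co-attention under assumed tensor shapes and a bilinear affinity, with the class name VisualSemanticCoAttention and all dimensions chosen purely for illustration.

```python
# A minimal sketch of visual-semantic co-attention, assuming PyTorch.
# Shapes, names, and the bilinear affinity formulation are illustrative
# assumptions, not the authors' published implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VisualSemanticCoAttention(nn.Module):
    """Cross-attends visual region features and semantic concept embeddings.

    Given region features V of shape (B, Nv, D) and concept embeddings S of
    shape (B, Ns, D), it computes an affinity matrix A = (V W) S^T and derives
    two attentive vectors: a concept-related region summary and a
    region-related concept summary.
    """

    def __init__(self, dim: int):
        super().__init__()
        self.affinity = nn.Linear(dim, dim, bias=False)  # bilinear weight W

    def forward(self, regions: torch.Tensor, concepts: torch.Tensor):
        # Affinity between every region and every concept: (B, Nv, Ns).
        A = torch.bmm(self.affinity(regions), concepts.transpose(1, 2))
        # Normalize across regions for each concept, and across concepts
        # for each region, to get the two attention maps.
        region_attn = F.softmax(A, dim=1)   # (B, Nv, Ns)
        concept_attn = F.softmax(A, dim=2)  # (B, Nv, Ns)
        # Concept-related regions: weighted region sums, pooled to (B, D).
        attended_regions = torch.bmm(region_attn.transpose(1, 2), regions).mean(dim=1)
        # Region-related concepts: weighted concept sums, pooled to (B, D).
        attended_concepts = torch.bmm(concept_attn, concepts).mean(dim=1)
        return attended_regions, attended_concepts

# Usage with made-up sizes: 49 region features and 20 concept embeddings.
vsca = VisualSemanticCoAttention(dim=512)
v = torch.randn(2, 49, 512)
s = torch.randn(2, 20, 512)
region_vec, concept_vec = vsca(v, s)  # each of shape (2, 512)
```

In this sketch the two returned vectors correspond to the "concept-related regions" and "region-related concepts" mentioned in the abstract; a downstream block (the CE block in the paper) would fuse them with relational features.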