Sentence (linguistics)
Computer science
Modality (human–computer interaction)
Artificial intelligence
Matching (statistics)
Image (mathematics)
Natural language processing
Similarity (geometry)
Pattern recognition (psychology)
Speech recognition
Mathematics
Statistics
Authors
Xi Wei,Tianzhu Zhang,Yan Li,Yongdong Zhang,Feng Wu
Identifier
DOI:10.1109/cvpr42600.2020.01095
Abstract
The key to image and sentence matching is to accurately measure the visual-semantic similarity between an image and a sentence. However, most existing methods exploit only the intra-modality relationship within each modality or the inter-modality relationship between image regions and sentence words for the cross-modal matching task. Unlike these approaches, in this work we propose a novel MultiModality Cross Attention (MMCA) Network for image and sentence matching that jointly models the intra-modality and inter-modality relationships of image regions and sentence words in a unified deep model. In the proposed MMCA, we design a novel cross-attention mechanism that exploits not only the intra-modality relationship within each modality, but also the inter-modality relationship between image regions and sentence words, so that the two complement and enhance each other for image and sentence matching. Extensive experimental results on two standard benchmarks, Flickr30K and MS-COCO, demonstrate that the proposed model performs favorably against state-of-the-art image and sentence matching methods.
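The abstract's core idea, combining intra-modality self-attention with inter-modality cross-attention over region and word features, can be illustrated with a minimal numpy sketch. This is not the authors' MMCA architecture (which is not specified here); the feature dimensions, the sum-based fusion, and the cosine-similarity scoring are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Scaled dot-product attention: each query attends over the keys.
    d = q.shape[-1]
    return softmax(q @ k.T / np.sqrt(d)) @ v

rng = np.random.default_rng(0)
d = 8
regions = rng.standard_normal((4, d))  # toy features for 4 image regions
words = rng.standard_normal((6, d))    # toy features for 6 sentence words

# Intra-modality relationships: each modality attends to itself.
regions_intra = attention(regions, regions, regions)
words_intra = attention(words, words, words)

# Inter-modality relationships: regions attend to words and vice versa.
regions_inter = attention(regions, words, words)
words_inter = attention(words, regions, regions)

# Fuse the two views (simple sum, an assumption) and score the
# visual-semantic similarity of mean-pooled features by cosine similarity.
img_vec = (regions_intra + regions_inter).mean(axis=0)
txt_vec = (words_intra + words_inter).mean(axis=0)
sim = img_vec @ txt_vec / (np.linalg.norm(img_vec) * np.linalg.norm(txt_vec))
print(float(sim))
```

In a trained model the query/key/value inputs would pass through learned projections and the similarity would be optimized with a ranking loss over matched and mismatched image-sentence pairs; the sketch only shows how the two relationship types feed one similarity score.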