Keywords
Computer science
Relation (database)
Artificial intelligence
Image (mathematics)
Matching (statistics)
Object (grammar)
Pattern recognition (psychology)
Word (group theory)
Spatial relation
Semantics (computer science)
Encoding
Natural language processing
Computer vision
Data mining
Mathematics
Statistics
Geometry
Programming language
Biochemistry
Chemistry
Gene
Authors
Feiran Huang, Xiaoming Zhang, Zhonghua Zhao, Zhoujun Li
Identifier
DOI: 10.1109/TIP.2018.2882225
Abstract
Image-text matching with deep models has recently achieved remarkable results in many tasks, such as image captioning and image search. A major challenge in matching images and text is that they usually have complicated underlying relations, and modeling these relations simplistically may lead to suboptimal performance. In this paper, we develop a novel bi-directional spatial-semantic attention network, which leverages both the word-to-regions (W2R) relation and the visual object-to-words (O2W) relation in a holistic deep framework for more effective matching. Specifically, to encode the W2R relation effectively, we adopt an LSTM with a bilinear attention function to infer the image regions most related to a particular word; we refer to this as the W2R attention network. In the other direction, the O2W attention network is proposed to discover the semantically closest words for each visual object in the image, i.e., the visual O2W relation. A deep model unifying the two directional attention networks into a holistic learning framework is then proposed to learn matching scores for image-text pairs. Compared to existing image-text matching methods, our approach achieves state-of-the-art performance on the Flickr30K and MSCOCO datasets.
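The page carries no code, so as a rough illustration only, here is a minimal PyTorch sketch of the bilinear attention idea named in the abstract: each region feature r is scored against a word embedding w through a bilinear form w^T M r, the scores are normalized with a softmax, and the regions are pooled by the resulting weights. The class name, the parameter M, and all dimensions are hypothetical and are not taken from the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BilinearAttention(nn.Module):
    """Sketch of one W2R-style attention step: score each image region
    against a word vector via w^T M r, then pool regions by softmax weights."""
    def __init__(self, word_dim, region_dim):
        super().__init__()
        # Bilinear form parameterized by a single matrix M (hypothetical name).
        self.M = nn.Parameter(torch.randn(word_dim, region_dim) * 0.01)

    def forward(self, word, regions):
        # word:    (batch, word_dim)      -- one word embedding per example
        # regions: (batch, k, region_dim) -- k region features per image
        # scores[b, i] = word[b] @ M @ regions[b, i]
        scores = torch.einsum('bd,de,bke->bk', word, self.M, regions)
        weights = F.softmax(scores, dim=-1)              # attention over regions
        attended = torch.einsum('bk,bke->be', weights, regions)
        return attended, weights

# Usage: pool 36 region features with respect to a single word.
attn = BilinearAttention(word_dim=300, region_dim=2048)
word = torch.randn(2, 300)
regions = torch.randn(2, 36, 2048)
pooled, w = attn(word, regions)   # pooled: (2, 2048), w: (2, 36)
```

The opposite O2W direction described in the abstract would be symmetric in spirit: score the words of a sentence against each visual object's feature and pool the word embeddings accordingly.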