Authors
Raúl Gómez, Lluís Gómez, Jaume Gibert, Dimosthenis Karatzas
Source
Journal: Elsevier eBooks [Elsevier]
Date: 2019-01-01
Pages: 279-306
Citations: 6
Identifier
DOI: 10.1016/b978-0-12-817358-9.00015-9
Abstract
Self-supervised learning from multimodal image and text data allows deep neural networks to learn powerful features without the need for human-annotated data. Web and social media platforms provide a virtually unlimited amount of this multimodal data. In this work we propose to exploit this freely available data to learn a multimodal image and text embedding, aiming to leverage the semantic knowledge learned in the text domain and transfer it to a visual model for semantic image retrieval. We demonstrate that the proposed pipeline can learn from images with associated text without supervision, and we analyze the semantic structure of the learned joint image and text embedding space. We perform a thorough analysis and performance comparison of five different state-of-the-art text embeddings on three different benchmarks. We show that the embeddings learned from web and social media data are competitive with supervised methods on the text-based image retrieval task, and we clearly outperform the state of the art on the MIRFlickr dataset when training on the target data. Further, we demonstrate how semantic multimodal image retrieval can be performed using the learned embeddings, going beyond classical instance-level retrieval problems. Finally, we present a new dataset, InstaCities1M, composed of Instagram images and their associated texts, which can be used for fair comparison of image–text embeddings.
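Once images and text queries are mapped into the shared embedding space, text-based retrieval of the kind the abstract describes reduces to a nearest-neighbor search. A minimal sketch, assuming L2-normalized embeddings and cosine similarity as the ranking score (the function name and the toy 2-D embeddings below are hypothetical, for illustration only):

```python
import numpy as np

def rank_images(query_emb, image_embs):
    """Rank images by cosine similarity to a text query in a joint embedding space."""
    q = query_emb / np.linalg.norm(query_emb)
    X = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    sims = X @ q                      # cosine similarity of each image to the query
    return np.argsort(-sims), sims    # indices in descending order of similarity

# Toy example in a hypothetical 2-D joint space.
query = np.array([1.0, 0.0])                          # embedded text query
images = np.array([[0.9, 0.1],                        # semantically close image
                   [0.0, 1.0],                        # orthogonal (unrelated) image
                   [0.7, 0.7]])                       # partially related image
order, sims = rank_images(query, images)
# order → [0, 2, 1]: the most semantically similar image is ranked first
```

Because ranking depends only on distances in the learned space, the same routine supports the semantic (rather than instance-level) retrieval discussed above: any concept the text encoder captures can be used as a query.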