Keywords
Probabilistic logic
Computer science
Embedding
Modal verb
Artificial intelligence
Coding (set theory)
Pattern
Information retrieval
Set (abstract data type)
Polymer chemistry
Programming language
Chemistry
Sociology
Social science
Authors
Sanghyuk Chun, Seong Joon Oh, Rafael Sampaio de Rezende, Yannis Kalantidis, Diane Larlus
Identifiers
DOI: 10.1109/cvpr46437.2021.00831
Abstract
Cross-modal retrieval methods build a common representation space for samples from multiple modalities, typically from the vision and the language domains. For images and their captions, the multiplicity of the correspondences makes the task particularly challenging. Given an image (respectively a caption), there are multiple captions (respectively images) that equally make sense. In this paper, we argue that deterministic functions are not sufficiently powerful to capture such one-to-many correspondences. Instead, we propose to use Probabilistic Cross-Modal Embedding (PCME), where samples from the different modalities are represented as probabilistic distributions in the common embedding space. Since common benchmarks such as COCO suffer from non-exhaustive annotations for cross-modal matches, we propose to additionally evaluate retrieval on the CUB dataset, a smaller yet clean database where all possible image-caption pairs are annotated. We extensively ablate PCME and demonstrate that it not only improves the retrieval performance over its deterministic counterpart but also provides uncertainty estimates that render the embeddings more interpretable. Code is available at https://github.com/naver-ai/pcme.
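The core idea of the abstract — representing each image or caption as a probability distribution in the shared embedding space rather than a single point — can be sketched as follows. This is a minimal illustration, not the authors' implementation (see the linked repository for PCME itself): each sample is modeled as a diagonal Gaussian, Monte Carlo samples are drawn from both distributions, and a match probability is computed by averaging a sigmoid of the negative pairwise distance. The function names, the fixed sigmoid parameters `a` and `b` (learnable in the paper), and the use of NumPy are all assumptions made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)


def sample_embeddings(mu, log_sigma, n_samples=7, rng=rng):
    """Draw Monte Carlo samples from a diagonal Gaussian N(mu, sigma^2).

    mu, log_sigma: 1-D arrays of the same length (the embedding dimension).
    Returns an array of shape (n_samples, dim).
    """
    sigma = np.exp(log_sigma)
    eps = rng.standard_normal((n_samples, mu.shape[-1]))
    return mu + eps * sigma


def match_probability(img_samples, cap_samples, a=1.0, b=0.0):
    """Soft match probability between two sets of sampled embeddings.

    Averages sigmoid(-(a * d - b)) over all sample pairs, where d is the
    Euclidean distance; close distributions yield probabilities near 0.5
    with these fixed a, b, distant ones yield probabilities near 0.
    """
    diffs = img_samples[:, None, :] - cap_samples[None, :, :]
    d = np.linalg.norm(diffs, axis=-1)  # (n_img_samples, n_cap_samples)
    return float(np.mean(1.0 / (1.0 + np.exp(a * d - b))))
```

Because each modality is a distribution, one image can overlap with several caption distributions at once, which is how the one-to-many correspondences described in the abstract are accommodated; the spread of each Gaussian also serves as an uncertainty estimate.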