Authors
Marcella Cornia, Matteo Stefanini, Lorenzo Baraldi, Rita Cucchiara
Identifier
DOI: 10.1109/cvpr42600.2020.01059
Abstract
Transformer-based architectures represent the state of the art in sequence modeling tasks like machine translation and language understanding. Their applicability to multi-modal contexts like image captioning, however, is still largely under-explored. With the aim of filling this gap, we present M² - a Meshed Transformer with Memory for Image Captioning. The architecture improves both the image encoding and the language generation steps: it learns a multi-level representation of the relationships between image regions, integrating learned a priori knowledge, and uses a mesh-like connectivity at the decoding stage to exploit both low- and high-level features. Experimentally, we investigate the performance of the M² Transformer and of different fully-attentive models in comparison with recurrent ones. When tested on COCO, our proposal achieves a new state of the art in single-model and ensemble configurations on the "Karpathy" test split and on the online test server. We also assess its performance when describing objects unseen in the training set. Trained models and code for reproducing the experiments are publicly available at: https://github.com/aimagelab/meshed-memory-transformer.
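The abstract's two architectural ideas can be made concrete with a short sketch: attention whose keys and values are extended with learned memory slots (the "learned a priori knowledge"), and a decoder-side mesh that gates cross-attention results from every encoder layer. The sketch below is a minimal PyTorch reading of that description, not the authors' implementation (see the linked repository for that); the dimensions, the number of memory slots, and the sigmoid-gated merge are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryAugmentedAttention(nn.Module):
    """Self-attention over image regions whose keys/values are extended with
    learned "memory" slots, so the model can also attend to knowledge that is
    not present in the input regions. All sizes here are illustrative."""
    def __init__(self, d_model=512, n_heads=8, n_memory=40):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.out_proj = nn.Linear(d_model, d_model)
        # Learned slots appended to the projected keys and values.
        self.mem_k = nn.Parameter(torch.randn(1, n_memory, d_model) * d_model ** -0.5)
        self.mem_v = nn.Parameter(torch.randn(1, n_memory, d_model) * d_model ** -0.5)

    def forward(self, x):                      # x: (batch, n_regions, d_model)
        b, n, d = x.shape
        q = self.q_proj(x)
        k = torch.cat([self.k_proj(x), self.mem_k.expand(b, -1, -1)], dim=1)
        v = torch.cat([self.v_proj(x), self.mem_v.expand(b, -1, -1)], dim=1)
        split = lambda t: t.view(b, -1, self.n_heads, self.d_head).transpose(1, 2)
        q, k, v = split(q), split(k), split(v)
        attn = F.softmax(q @ k.transpose(-2, -1) * self.d_head ** -0.5, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, n, d)
        return self.out_proj(out)

class MeshedMerge(nn.Module):
    """Mesh-like connectivity: merge cross-attention results computed against
    each encoder layer with learned, element-wise sigmoid gates, so the
    decoder exploits both low- and high-level features."""
    def __init__(self, d_model=512, n_enc_layers=3):
        super().__init__()
        self.gates = nn.ModuleList(
            nn.Linear(2 * d_model, d_model) for _ in range(n_enc_layers))

    def forward(self, y, cross_results):       # y: decoder states (b, t, d)
        merged = torch.zeros_like(y)
        for gate, c in zip(self.gates, cross_results):
            alpha = torch.sigmoid(gate(torch.cat([y, c], dim=-1)))
            merged = merged + alpha * c        # gated per-layer contribution
        return merged

# Quick shape check with random region features.
regions = torch.randn(2, 50, 512)              # e.g. 50 detections per image
enc = MemoryAugmentedAttention()
print(enc(regions).shape)                      # torch.Size([2, 50, 512])
```

In the full model these pieces sit inside otherwise standard Transformer encoder and decoder layers; the repository above is the authoritative reference for the exact formulation.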