Closed captioning
Computer science
Transformer
Natural language
Artificial intelligence
Mode
Metric (unit)
Feature (linguistics)
Visualization
Natural language processing
Architecture
Language model
Question answering
Image (mathematics)
Coding (set theory)
Machine learning
Set (abstract data type)
Programming language
Authors
Manuele Barraco, Matteo Stefanini, Marcella Cornia, Silvia Cascianelli, Lorenzo Baraldi, Rita Cucchiara
Identifier
DOI: 10.1109/icpr56361.2022.9955644
Abstract
Describing images in natural language is a fundamental step towards the automatic modeling of connections between the visual and textual modalities. In this paper we present CaMEL, a novel Transformer-based architecture for image captioning. Our proposed approach leverages the interaction of two interconnected language models that learn from each other during the training phase. The interplay between the two language models follows a mean teacher learning paradigm with knowledge distillation. Experimentally, we assess the effectiveness of the proposed solution on the COCO dataset and in conjunction with different visual feature extractors. When comparing with existing proposals, we demonstrate that our model provides state-of-the-art caption quality with a significantly reduced number of parameters. According to the CIDEr metric, we obtain a new state of the art on COCO when training without using external data. The source code and trained models will be made publicly available at: https://github.com/aimagelab/camel.
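The mean teacher paradigm mentioned in the abstract keeps the teacher model's weights as an exponential moving average (EMA) of the student's weights, while the student is additionally trained to match the teacher's outputs (knowledge distillation). A minimal sketch of the EMA update, not taken from the authors' code and with the function name `ema_update` and toy scalar weights chosen purely for illustration:

```python
# Sketch of the mean-teacher weight update: the teacher does not receive
# gradients; instead, each teacher weight drifts toward the corresponding
# student weight with momentum close to 1.

def ema_update(teacher_weights, student_weights, momentum=0.999):
    """Return new teacher weights as an EMA of the student weights."""
    return [momentum * t + (1.0 - momentum) * s
            for t, s in zip(teacher_weights, student_weights)]

# Toy example: scalars standing in for model parameters.
teacher = [0.0, 1.0]
student = [1.0, 0.0]
teacher = ema_update(teacher, student, momentum=0.9)
print(teacher)  # each teacher weight moves 10% toward the student
```

In practice the same update is applied elementwise to every parameter tensor of the two language models after each training step; a high momentum makes the teacher a slowly varying ensemble of recent student states, which is what provides the stable distillation targets.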