Authors
Lei Liu,Yidi Jiao,Xiaoran Li,Jing Wang,Haitao Wang,Xinyu Cao
Identifier
DOI:10.1142/s146902682442001x
Abstract
The objective of image captioning is to enable computers to autonomously generate human-like sentences describing a given image. To address the challenges of insufficient accuracy in image feature extraction and underutilization of visual information, we present Swin-Caption, a Swin Transformer-based image captioning model with feature enhancement and multi-stage fusion. First, the Swin Transformer serves as the encoder for extracting image features, and feature enhancement is applied to gather additional feature information. Next, a multi-stage image-semantic fusion module is constructed to exploit the semantic information from past time steps. Finally, a two-layer LSTM decodes the semantic and image information to generate captions. In experiments and instance analysis on the public datasets Flickr8K, Flickr30K, and MS-COCO, the proposed model outperforms the baseline model.
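The pipeline described in the abstract (Swin encoder, feature enhancement, multi-stage image-semantic fusion, LSTM-style decoding) can be sketched at a very high level as follows. This is a toy illustration of the data flow only, not the authors' implementation: every function name, the enhancement rule, the fusion rule, and the word-selection step are simplified stand-ins chosen for readability.

```python
# Toy sketch of the Swin-Caption data flow (assumed structure, not the paper's code).

def swin_encode(image):
    # Stand-in for Swin Transformer feature extraction:
    # treats each "patch" of the input as a feature vector.
    return [[float(p) for p in patch] for patch in image]

def enhance(features):
    # Stand-in for feature enhancement: augment each patch vector
    # with its mean to gather additional feature information.
    return [v + [sum(v) / len(v)] for v in features]

def fuse(image_feats, semantic_state):
    # Stand-in for multi-stage image-semantic fusion: combine pooled
    # image features with semantic state from past time steps.
    pooled = [sum(col) / len(image_feats) for col in zip(*image_feats)]
    return [a + b for a, b in zip(pooled, semantic_state)]

def caption(image, vocab, steps=3):
    feats = enhance(swin_encode(image))
    semantic = [0.0] * len(feats[0])   # semantic info from past time steps
    words = []
    for _ in range(steps):
        fused = fuse(feats, semantic)
        # Stand-in for the two-layer LSTM decoder: pick a word index
        # from the fused vector, then carry the fused state forward.
        idx = int(sum(fused)) % len(vocab)
        words.append(vocab[idx])
        semantic = fused
    return " ".join(words)
```

In the real model, each stand-in would be a learned module (Swin blocks, attention-based fusion, a two-layer LSTM with a softmax vocabulary head); the sketch only shows how image features and past-step semantic information are repeatedly fused before each decoding step.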