Keywords
Computer science
Discrete cosine transform
Artificial intelligence
Transformer
Pattern recognition
Modality
Feature extraction
Adversarial
Computer vision
Image
Authors
Chen Chen, Dan Wang, Bin Song, Hao Tan
Identifier
DOI:10.1109/tmm.2023.3243665
Abstract
Image-text matching has become a challenging task in the multimedia analysis field. Many advanced methods have been used to explore local and global cross-modal correspondence in matching. However, most methods overlook the importance of eliminating potentially irrelevant features from the original features of each modality and from the cross-modal common features. Moreover, the features extracted from regions in images and words in sentences contain cluttered background noise and varying occlusion noise, which negatively affects alignment. Unlike these methods, this paper proposes a novel DCT-Transformer Adversarial Network (DTAN) for image-text matching. The work obtains an effective metric based on two aspects: i) the DCT-Transformer applies the Discrete Cosine Transform (DCT) within a Transformer mechanism to extract multi-domain common representations and eliminate irrelevant features across modalities (inter-modal); specifically, the DCT divides multi-modal content into chunks of different frequencies and quantizes them. ii) The adversarial network introduces an adversarial objective by combining the original features of each single modality with the multi-domain common representation, alleviating the background noise within each modality (intra-modal). The proposed adversarial feature augmentation method readily obtains a common representation that contains only features useful for alignment. Extensive experiments on the benchmark datasets Flickr30K and MS-COCO demonstrate the superiority of the DTAN model over state-of-the-art methods.
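The abstract states that the DCT stage divides multi-modal content into chunks of different frequencies and quantizes them. For reference, the sketch below shows the generic JPEG-style building block this evokes: a block-wise 2-D DCT followed by uniform quantization. The block size, quantization step, and 2-D input shape are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from scipy.fft import dct

def blockwise_dct(features: np.ndarray, block: int = 8, q_step: float = 10.0) -> np.ndarray:
    """Split a 2-D feature map into fixed-size chunks, transform each chunk
    to the frequency domain with a 2-D DCT-II, and uniformly quantize the
    resulting coefficients."""
    h, w = features.shape
    out = features.copy()  # edge remainders smaller than a block are left untouched
    for i in range(0, h - h % block, block):
        for j in range(0, w - w % block, block):
            chunk = features[i:i + block, j:j + block]
            # 2-D DCT as two separable 1-D transforms (rows, then columns)
            coeffs = dct(dct(chunk, axis=0, norm="ortho"), axis=1, norm="ortho")
            # Uniform quantization: coefficients below ~q_step/2 round to zero,
            # which suppresses low-energy (typically high-frequency) content
            out[i:i + block, j:j + block] = np.round(coeffs / q_step) * q_step
    return out

if __name__ == "__main__":
    feat = np.random.randn(32, 32)
    print(blockwise_dct(feat).shape)  # (32, 32)
```

Larger quantization steps discard more frequency content; in the abstract's framing, this kind of frequency-domain chunking is what lets the model separate and drop irrelevant components.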
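For the adversarial component, the abstract describes pitting the original single-modality features against the multi-domain common representation so that the common representation retains only alignment-relevant content. A standard way to realize such an inter-modal adversary is a modality discriminator trained through a gradient-reversal layer; the sketch below illustrates that general pattern. The gradient-reversal technique, layer sizes, and binary modality labels are assumptions for illustration, not necessarily the paper's exact design.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates (and scales) gradients in the
    backward pass, so the upstream encoder learns to fool the discriminator."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class ModalityDiscriminator(nn.Module):
    """Predicts which modality a common feature came from."""
    def __init__(self, dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, common_feat: torch.Tensor, lambd: float = 1.0) -> torch.Tensor:
        reversed_feat = GradReverse.apply(common_feat, lambd)
        return self.net(reversed_feat)

# Usage: classify image-side vs. text-side common features (0 = image,
# 1 = text); reversed gradients push the two distributions together.
disc = ModalityDiscriminator(dim=256)
img_common = torch.randn(4, 256)  # hypothetical image-side common features
txt_common = torch.randn(4, 256)  # hypothetical text-side common features
logits = disc(torch.cat([img_common, txt_common], dim=0))
labels = torch.cat([torch.zeros(4, 1), torch.ones(4, 1)], dim=0)
loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
loss.backward()
```

In a full pipeline the common features would come from the encoders rather than random tensors, and this adversarial loss would be combined with the matching objective so that modality-specific noise is driven out of the shared representation.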