Self-Supervised Correlation Learning for Cross-Modal Retrieval

Computer science, Discriminative, Modality (human-computer interaction), Artificial intelligence, Feature learning, Machine learning, Benchmark (surveying), Modal verb, Unsupervised learning, Exploit, Mutual information, Correlation, Pattern recognition (psychology), Computer security, Mathematics, Geometry, Geodesy, Chemistry, Polymer chemistry, Geography
Authors
Yaxin Liu, Jianlong Wu, Leigang Qu, Tian Gan, Jianhua Yin, Liqiang Nie
Source
Journal: IEEE Transactions on Multimedia [Institute of Electrical and Electronics Engineers]
Volume 25, pp. 2851-2863 | Cited by: 31
Identifier
DOI: 10.1109/tmm.2022.3152086
Abstract

Cross-modal retrieval aims to retrieve relevant data from another modality when given a query of one modality. Although most existing methods that rely on the label information of multimedia data have achieved promising results, the performance gained from labeled data comes at a high cost, since labeling data often requires enormous labor, especially on large-scale multimedia datasets. Therefore, unsupervised cross-modal learning is of crucial importance in real-world applications. In this paper, we propose a novel unsupervised cross-modal retrieval method, named Self-supervised Correlation Learning (SCL), which takes full advantage of large amounts of unlabeled data to learn discriminative and modality-invariant representations. Since unsupervised learning lacks the supervision of category labels, we incorporate knowledge from the input as a supervisory signal by maximizing the mutual information between the input and the output of different modality-specific projectors. Besides, to learn discriminative representations, we exploit unsupervised contrastive learning to model the relationships among intra- and inter-modality instances, which pulls similar samples closer and pushes dissimilar samples apart. Moreover, to further eliminate the modality gap, we use a weight-sharing scheme and minimize the modality-invariant loss in the joint representation space. Beyond that, we also extend the proposed method to the semi-supervised setting. Extensive experiments conducted on three widely-used benchmark datasets demonstrate that our method achieves competitive results compared with current state-of-the-art cross-modal retrieval approaches.
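
The abstract's key ingredients, modality-specific projectors feeding a weight-shared joint layer and a contrastive objective over intra- and inter-modality instances, can be pictured with a short sketch. The PyTorch code below illustrates this general setup only and is not the authors' implementation: the feature dimensions, network shapes, temperature, and the symmetric InfoNCE form are all assumptions made for the example.

```python
# Illustrative sketch of cross-modal contrastive learning (not the SCL implementation).
# All dimensions, layer choices, and the temperature value are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SharedProjector(nn.Module):
    """Modality-specific projectors followed by a weight-shared joint layer."""

    def __init__(self, img_dim=2048, txt_dim=768, hidden_dim=1024, joint_dim=256):
        super().__init__()
        # Separate projectors, one per modality (assumed to be small MLPs).
        self.img_proj = nn.Sequential(nn.Linear(img_dim, hidden_dim), nn.ReLU())
        self.txt_proj = nn.Sequential(nn.Linear(txt_dim, hidden_dim), nn.ReLU())
        # Weight-sharing scheme: both modalities pass through the same final layer,
        # which encourages a common joint representation space.
        self.shared = nn.Linear(hidden_dim, joint_dim)

    def forward(self, img_feat, txt_feat):
        z_img = F.normalize(self.shared(self.img_proj(img_feat)), dim=-1)
        z_txt = F.normalize(self.shared(self.txt_proj(txt_feat)), dim=-1)
        return z_img, z_txt


def info_nce(z_a, z_b, temperature=0.07):
    """Symmetric InfoNCE: matched cross-modal pairs are positives, every other
    sample in the batch serves as a negative."""
    logits = z_a @ z_b.t() / temperature                 # (B, B) similarity matrix
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


if __name__ == "__main__":
    model = SharedProjector()
    img = torch.randn(32, 2048)   # e.g., pre-extracted image features
    txt = torch.randn(32, 768)    # e.g., pre-extracted sentence features
    z_img, z_txt = model(img, txt)
    loss = info_nce(z_img, z_txt)  # inter-modality contrastive term only
    loss.backward()
    print(float(loss))
```

Only the inter-modality contrastive term is shown here; intra-modality terms and the mutual-information and modality-invariant objectives described in the abstract would be added on top of the same projector outputs.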