Deep Cross-modal Proxy Hashing

Keywords: Computer science · Hash function · Dynamic perfect hashing · Universal hashing · Artificial intelligence · Information retrieval · Hash table · Double hashing · Computer security
Authors
Rong-Cheng Tu, Xian-Ling Mao, Rongxin Tu, Binbin Bian, Chengfei Cai, Hongfa Wang, Wei Wei, Heyan Huang
Source
Journal: IEEE Transactions on Knowledge and Data Engineering [Institute of Electrical and Electronics Engineers]
Volume/Issue: 1-13 · Cited by: 3
Identifiers
DOI: 10.1109/tkde.2022.3187023
Abstract

Due to their high retrieval efficiency and low storage cost in cross-modal search tasks, cross-modal hashing methods have attracted considerable attention from researchers. For supervised cross-modal hashing methods, the key to further enhancing retrieval performance is making the learned hash codes sufficiently preserve the semantic information contained in the labels of datapoints. Hence, almost all supervised cross-modal hashing methods rely, fully or partly, on similarities between datapoints defined from the label information to guide the training of the hashing model. However, such defined similarities capture the label information of datapoints only partially and miss abundant semantic information, which hinders further improvement of retrieval performance. Thus, in this paper, unlike previous works, we propose a novel cross-modal hashing method that does not define similarities between datapoints, called Deep Cross-modal Proxy Hashing (DCPH). Specifically, DCPH first trains a proxy hashing network to transform each category of a dataset into a semantically discriminative hash code, called a proxy hash code; each proxy hash code preserves the semantic information of its corresponding category well. Next, instead of defining similarities between datapoints to supervise the training of the modality-specific hashing networks, we propose a novel margin-dynamic-softmax loss that directly uses the proxy hash codes as supervision. Finally, by minimizing this margin-dynamic-softmax loss, the modality-specific hashing networks are trained to generate hash codes that simultaneously preserve the cross-modal similarity and abundant semantic information. Extensive experiments on three benchmark datasets show that the proposed method outperforms state-of-the-art baselines on cross-modal retrieval tasks.
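The abstract fixes the two-stage recipe but not the exact architectures or the precise form of the margin-dynamic-softmax loss, so the PyTorch sketch below is only an illustration under stated assumptions: a small MLP (`ProxyHashNet`) maps a multi-hot category vector to a relaxed proxy hash code, and a hypothetical `margin_dynamic_softmax_loss` scores each datapoint's code against all proxy codes with a cosine-similarity softmax, subtracting a margin on the true categories. All names, layer sizes, and the margin scheme are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ProxyHashNet(nn.Module):
    """Maps a multi-hot category vector to a relaxed proxy hash code in (-1, 1)^K.

    The architecture is an assumption; the abstract only says the proxy network
    turns category information into a semantically discriminative hash code.
    """

    def __init__(self, num_categories: int, code_len: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_categories, 512),
            nn.ReLU(),
            nn.Linear(512, code_len),
            nn.Tanh(),  # relaxed codes; take sign() for binary codes at retrieval time
        )

    def forward(self, labels: torch.Tensor) -> torch.Tensor:
        return self.net(labels)


def margin_dynamic_softmax_loss(
    codes: torch.Tensor,        # (B, K) relaxed codes from a modality-specific network
    proxy_codes: torch.Tensor,  # (C, K) one proxy hash code per category
    labels: torch.Tensor,       # (B, C) multi-hot ground-truth category indicators
    margin: float = 0.2,
    scale: float = 10.0,
) -> torch.Tensor:
    """Hypothetical margin-based softmax over proxy codes (an assumed form)."""
    # Cosine similarity between every datapoint code and every proxy code.
    sim = F.normalize(codes, dim=1) @ F.normalize(proxy_codes, dim=1).t()  # (B, C)
    # Subtract a margin from the true categories' similarities so the network
    # must beat the negative categories by at least `margin`.
    logits = scale * (sim - margin * labels)
    log_prob = F.log_softmax(logits, dim=1)
    # Multi-label cross-entropy against the normalized label distribution.
    target = labels / labels.sum(dim=1, keepdim=True).clamp(min=1.0)
    return -(target * log_prob).sum(dim=1).mean()


# Toy usage with random tensors, just to show the shapes involved.
if __name__ == "__main__":
    num_categories, code_len, batch = 24, 64, 8
    proxy_net = ProxyHashNet(num_categories, code_len)
    proxies = proxy_net(torch.eye(num_categories))          # (C, K) proxy codes
    image_codes = torch.tanh(torch.randn(batch, code_len))  # stand-in for an image network
    labels = torch.randint(0, 2, (batch, num_categories)).float()
    print(margin_dynamic_softmax_loss(image_codes, proxies, labels).item())
```

In this reading, stage one trains `ProxyHashNet` so that the proxy codes are discriminative across categories, after which the proxy codes are fixed and stage two trains the image and text hashing networks by minimizing the loss above against them; binary codes are obtained with `sign()` at retrieval time.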