Keywords
Computer science
Hash function
Leverage (statistics)
Image retrieval
Hamming space
Pairwise comparison
Artificial intelligence
Similarity (geometry)
Locality-sensitive hashing
Scalability
Pattern recognition (psychology)
Computation
Machine learning
Data mining
Image (mathematics)
Hamming code
Hash table
Algorithm
Database
Computer security
Block code
Decoding method
Authors
Hongjia Zhai, Hai Li, Hanzhi Zhang, Hujun Bao, Guofeng Zhang
Identifier
DOI: 10.1109/icassp49357.2023.10095251
Abstract
Deep hashing-based approaches have become a preferred solution for large-scale image retrieval due to their high computational efficiency and low storage burden. Some methods leverage a large teacher network to improve the retrieval performance of a small student network through knowledge distillation, which incurs high computational and time costs. In this paper, we propose Self-Distillation Hashing (SeDH), which improves image retrieval performance without introducing a complex teacher model and significantly reduces overall computation costs. Specifically, we generate soft targets by ensembling the logits of other similar images within the mini-batch. These ensembled soft targets model the relations between different image samples and serve as additional supervision for classification. In addition, to learn more compact features and more accurate inter-sample similarities, we propose a similarity-preserving loss on the learned hashing features, which aligns the softened similarity distribution with the pairwise soft similarity. Extensive experiments demonstrate that our approach yields state-of-the-art performance on deep supervised hashing retrieval.
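The abstract describes its two components only at a high level: ensembling in-batch logits into soft targets, and a similarity-preserving loss over the learned hashing features. The PyTorch sketch below is a minimal illustration under stated assumptions, not the authors' implementation: the cosine-similarity weighting, the temperature values, and the KL form of the similarity-preserving loss are all guesses, and the function names (`ensemble_soft_targets`, `similarity_preserving_loss`) are invented for this example.

```python
# Hypothetical sketch of the two ideas in the abstract; weighting scheme,
# temperatures, and loss form are assumptions, not the paper's formulation.
import torch
import torch.nn.functional as F

def ensemble_soft_targets(logits, features, temperature=4.0):
    """Build a soft target for each sample by averaging the softened logits
    of the other samples in the mini-batch, weighted by feature similarity
    (cosine similarity assumed here)."""
    # Pairwise cosine similarity between all samples in the batch: (B, B)
    sim = F.cosine_similarity(features.unsqueeze(1), features.unsqueeze(0), dim=-1)
    # Exclude each sample from its own ensemble by masking the diagonal
    mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    weights = F.softmax(sim.masked_fill(mask, float('-inf')), dim=1)
    # Soften the logits, then average them under the similarity weights
    soft = F.softmax(logits / temperature, dim=1)
    return weights @ soft  # (B, num_classes) ensembled soft targets

def similarity_preserving_loss(hash_features, soft_targets, temperature=1.0):
    """Align the row-wise similarity distribution of the hashing features
    with that of the soft targets (KL divergence assumed here)."""
    f = F.normalize(hash_features, dim=1)
    t = F.normalize(soft_targets, dim=1)
    log_p = F.log_softmax((f @ f.t()) / temperature, dim=1)
    q = F.softmax((t @ t.t()) / temperature, dim=1)
    # KL pulls the hashing-feature similarities toward the target similarities
    return F.kl_div(log_p, q, reduction='batchmean')

# Toy usage: batch of 8 images, 10 classes, 64-dimensional hash features
logits = torch.randn(8, 10)
features = torch.randn(8, 64, requires_grad=True)
targets = ensemble_soft_targets(logits, features).detach()  # fixed supervision
loss = similarity_preserving_loss(features, targets)
loss.backward()
```

Detaching the ensembled targets treats them as fixed supervision, the usual convention in distillation. The abstract also states that the soft targets supervise classification; that would add a standard distillation term (e.g., KL between the softened student logits and the targets) which is not shown in this sketch.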