Keywords
Computer science
Hash function
Artificial intelligence
Pattern recognition (psychology)
Quantization (signal processing)
Feature (linguistics)
Entropy (arrow of time)
Binary code
Binary number
Computer vision
Mathematics
Quantum mechanics
Arithmetic
Linguistics
Physics
Philosophy
Computer security
Authors
Qinkang Gong,Liangdao Wang,Hanjiang Lai,Yan Pan,Jian Yin
Source
Journal: Cornell University - arXiv
Date: 2022-01-14
Citations: 1
Identifier
DOI:10.48550/arxiv.2201.05541
Abstract
Unsupervised image hashing, which maps images into binary codes without supervision, is a compressor with a high compression rate. Hence, how to preserve the meaningful information of the original data is a critical problem. Inspired by the large-scale vision pre-training model known as ViT, which has shown significant progress in learning visual representations, in this paper we propose a simple information-preserving compressor to fine-tune the ViT model for the target unsupervised hashing task. Specifically, from pixels to continuous features, we first propose a feature-preserving module, which takes a corrupted image as input and reconstructs the feature that the pre-trained ViT model produces on the complete image, so that the feature extractor focuses on preserving the meaningful information of the original data. Secondly, from continuous features to hash codes, we propose a hashing-preserving module, which aims to keep the semantic information of the pre-trained ViT model by using the proposed Kullback-Leibler divergence loss. In addition, a quantization loss and a similarity loss are added to minimize the quantization error. Our method is very simple and achieves significantly higher MAP on three benchmark image datasets.
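The abstract names three training signals placed on top of the pre-trained ViT features: a Kullback-Leibler divergence term that preserves the teacher's semantics, a quantization loss, and a similarity loss. The paper's exact formulations are not reproduced here, so the PyTorch sketch below is only one plausible reading of those terms; the function names, the batch-wise cosine-similarity distributions, and the loss weights are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of the three loss terms mentioned in the abstract.
# All formulations below are assumptions; they are not the paper's code.
import torch
import torch.nn.functional as F


def kl_preserving_loss(teacher_feat, hash_logits, tau=1.0):
    """Hashing-preserving term (assumed form): match batch-wise similarity
    distributions of the frozen ViT features and the continuous hash codes
    with a KL divergence."""
    t = F.normalize(teacher_feat.detach(), dim=1)   # teacher features, no gradient
    s = F.normalize(hash_logits, dim=1)             # continuous hash outputs
    t_dist = F.softmax((t @ t.t()) / tau, dim=1)    # teacher similarity distribution
    s_logp = F.log_softmax((s @ s.t()) / tau, dim=1)
    return F.kl_div(s_logp, t_dist, reduction="batchmean")


def quantization_loss(hash_logits):
    """Push continuous codes toward binary values in {-1, +1}."""
    return torch.mean((torch.abs(torch.tanh(hash_logits)) - 1.0) ** 2)


def similarity_loss(hash_logits, teacher_feat):
    """Keep pairwise similarities of the codes close to those of the
    teacher features (assumed L2 formulation)."""
    s = F.normalize(torch.tanh(hash_logits), dim=1)
    t = F.normalize(teacher_feat.detach(), dim=1)
    return torch.mean((s @ s.t() - t @ t.t()) ** 2)


# Combined objective; the weights a and b are placeholders.
def total_loss(teacher_feat, hash_logits, a=0.1, b=1.0):
    return (kl_preserving_loss(teacher_feat, hash_logits)
            + a * quantization_loss(hash_logits)
            + b * similarity_loss(hash_logits, teacher_feat))
```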