Codebook
Computer science
Quantization (signal processing)
Deep learning
Vector quantization
Artificial intelligence
Hash function
Stochastic gradient descent
Artificial neural network
Image retrieval
Learning vector quantization
Pattern recognition (psychology)
Algorithm
Image (mathematics)
Computer security
Authors
Meihan Liu, Yongxing Dai, Yan Bai, Ling-Yu Duan
Identifier
DOI:10.1109/icassp40776.2020.9054175
Abstract
Product Quantization (PQ) is one of the most popular Approximate Nearest Neighbor (ANN) methods for large-scale image retrieval, offering better performance than hashing-based methods. In recent years, several works have extended hard quantization to soft quantization with specially designed deep neural architectures. We propose a simple but effective deep Product Quantization Module (PQM) to jointly learn a discriminative codebook and precise hard assignments in an end-to-end manner. In this work, we use the straight-through estimator to make it feasible to directly optimize discrete binary representations in deep neural networks with stochastic gradient descent. Different from previous deep vector quantization methods, PQM is a plug-and-play module that can adapt to various base networks in image search or compression scenarios. Besides, we propose a reconstruction loss to minimize the domain gap between the original embedding features and the codebook. Experimental results show that PQM outperforms state-of-the-art deep supervised hashing and quantization methods on several image retrieval benchmarks.
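The abstract gives no implementation details, so the following is only a minimal, illustrative PyTorch-style sketch of the general idea it describes: a product-quantization layer with per-subspace codebooks, hard nearest-codeword assignment trained through a straight-through estimator, and a reconstruction-style loss tying embeddings to codewords. All class names, tensor shapes, hyperparameters, and the exact form of the loss are assumptions for demonstration, not the authors' PQM code.

```python
# Minimal sketch (assumed, not the paper's implementation) of product
# quantization with hard assignment and a straight-through estimator.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ProductQuantization(nn.Module):
    def __init__(self, feat_dim=256, num_subspaces=4, codewords_per_subspace=256):
        super().__init__()
        assert feat_dim % num_subspaces == 0
        self.m = num_subspaces
        self.d = feat_dim // num_subspaces          # sub-vector dimension
        # One learnable codebook per subspace: (M, K, D/M)
        self.codebooks = nn.Parameter(
            torch.randn(self.m, codewords_per_subspace, self.d) * 0.1
        )

    def forward(self, x):
        # x: (B, feat_dim) -> split into M sub-vectors of size D/M
        b = x.size(0)
        sub = x.view(b, self.m, self.d)              # (B, M, D/M)
        # Squared distance to every codeword in each subspace: (B, M, K)
        dist = ((sub.unsqueeze(2) - self.codebooks.unsqueeze(0)) ** 2).sum(-1)
        # Hard assignment: index of the nearest codeword per subspace
        idx = dist.argmin(dim=2)                     # (B, M)
        hard = torch.stack(
            [self.codebooks[m][idx[:, m]] for m in range(self.m)], dim=1
        )                                            # (B, M, D/M)
        quantized = hard.view(b, -1)                 # (B, feat_dim)
        # Straight-through estimator: forward pass uses the hard codewords,
        # backward pass copies gradients onto the continuous embedding x.
        quantized_st = x + (quantized - x).detach()
        # A reconstruction-style loss (one plausible form, VQ-VAE-like) that
        # pulls the embeddings and their assigned codewords together.
        recon_loss = (
            F.mse_loss(quantized, x.detach()) + F.mse_loss(x, quantized.detach())
        )
        return quantized_st, idx, recon_loss


if __name__ == "__main__":
    pq = ProductQuantization()
    feats = torch.randn(8, 256)          # dummy base-network embeddings
    q, codes, loss = pq(feats)
    print(q.shape, codes.shape, loss.item())
```

Because the module only consumes a feature tensor and returns a quantized tensor plus compact codes, it can be dropped after any base network's embedding layer, which matches the plug-and-play claim in the abstract; the retrieval loss used on top of it is not specified here.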