Hash function
Computer science
Adversarial system
Distillation
Image (mathematics)
Artificial intelligence
Image retrieval
Pattern recognition (psychology)
Computer vision
Computer security
Chemistry
Chromatography
Authors
Ping Feng, Hanyun Zhang
Identifier
DOI: 10.1109/cacml55074.2022.00127
Abstract
Hash algorithms have become the mainstream approach to large-scale similarity-based image retrieval because of their high storage and search efficiency. Deep learning-based hashing greatly improves retrieval performance when supervision is available, but self-supervised deep hashing struggles to achieve satisfactory performance in the absence of reliable supervisory signals. To address this, and to overcome the poor robustness and large parameter counts of conventional neural networks, this paper proposes a lightweight, robust deep hash retrieval algorithm. Inspired by the observations that adversarial training is currently the most effective way to improve model robustness and that knowledge distillation can compress a network while preserving its performance, the method is a self-supervised image hash retrieval algorithm based on adversarial distillation: a robust teacher network is first obtained through self-supervised adversarial training, a student network is then trained with an optimized distillation loss and immune injection, and image hash codes are finally extracted with a convolution-based attention mechanism. The method was evaluated on three public datasets and compared with other hash algorithms, showing satisfactory results in all cases. Specifically, the mAP of the proposed adversarial distillation algorithm is 3% and 2% higher than that of the next best method, SGH, at 64-bit and 128-bit hash lengths, respectively. The experiments show that the hash retrieval model constructed in this paper achieves good performance while remaining lightweight and robust.
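For illustration only, the following is a minimal sketch of the adversarial-distillation idea described in the abstract, assuming a PyTorch implementation. The HashNet backbone, the FGSM-style perturbation, and the MSE-based distillation loss are placeholder assumptions, not the authors' actual design; the paper's immune injection and convolution-based attention module are not reproduced here.

import torch
import torch.nn as nn
import torch.nn.functional as F

class HashNet(nn.Module):
    # Placeholder backbone with a hash head; the paper's lightweight
    # network and convolution-based attention module are not reproduced.
    def __init__(self, hash_bits=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.hash_head = nn.Linear(64, hash_bits)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return torch.tanh(self.hash_head(h))  # relaxed codes in (-1, 1)

def fgsm_perturb(model, x, eps=8 / 255):
    # Single-step adversarial example against the model's own clean codes,
    # used here as a self-supervised surrogate for adversarial training.
    x_adv = x.clone().detach().requires_grad_(True)
    with torch.no_grad():
        target = model(x)
    loss = F.mse_loss(model(x_adv), target)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

def distill_step(teacher, student, x, optimizer):
    # One training step: the student mimics the adversarially trained
    # teacher's relaxed hash codes on both clean and perturbed inputs.
    teacher.eval()
    x_adv = fgsm_perturb(teacher, x)
    with torch.no_grad():
        t_clean, t_adv = teacher(x), teacher(x_adv)
    loss = F.mse_loss(student(x), t_clean) + F.mse_loss(student(x_adv), t_adv)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# At retrieval time the relaxed codes are binarized with the sign function:
# binary_codes = torch.sign(student(images))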