Computer science
Image retrieval
Artificial intelligence
Hash function
Pattern recognition (psychology)
Image (mathematics)
Computer vision
Computer security
Authors
Lingtao Meng, Qiuyu Zhang, Rui Yang, Yibo Huang
Identifier
DOI:10.1109/lsp.2024.3404350
Abstract
Deep hashing enhances image retrieval accuracy by integrating hash encoding with deep neural networks. However, existing unsupervised deep hashing methods rely primarily on the rotational invariance of images to construct triplets, yielding triplets that are unsatisfactory in both reliability and quantity. In addition, some methods fail to adequately consider the relative similarity information between samples. To overcome these limitations, we propose a novel unsupervised deep triplet hashing method for image retrieval, abbreviated as UDTrHash. UDTrHash uses the extremal cosine similarities of deep image features to construct more reliable triplets of the first type, and expands these triplets through data augmentation strategies to introduce a larger number of triplets. Furthermore, we design a new triplet loss function to enhance the discriminative ability of the generated hash codes. Extensive experiments demonstrate that UDTrHash outperforms existing state-of-the-art hashing methods on three public benchmark datasets, including MIRFlickr25K.
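The abstract describes two core ideas: mining triplets from the extremal cosine similarities of deep features (most similar sample as positive, least similar as negative) and training with a margin-based triplet loss. The sketch below illustrates that general pattern in NumPy; it is not the authors' UDTrHash implementation, and the function names, the margin value, and the use of squared Euclidean distance over relaxed codes are all assumptions for illustration.

```python
import numpy as np

def cosine_similarity_matrix(features):
    # L2-normalize rows; pairwise dot products then give cosine similarities.
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    normalized = features / np.clip(norms, 1e-12, None)
    return normalized @ normalized.T

def mine_triplets(features):
    """For each anchor, pick the most similar other sample as the positive
    and the least similar sample as the negative (extremal cosine similarity)."""
    sim = cosine_similarity_matrix(features)
    np.fill_diagonal(sim, -np.inf)      # a sample cannot be its own positive
    positives = np.argmax(sim, axis=1)
    np.fill_diagonal(sim, np.inf)       # nor its own negative
    negatives = np.argmin(sim, axis=1)
    return [(a, int(positives[a]), int(negatives[a]))
            for a in range(len(features))]

def triplet_loss(codes, triplets, margin=0.5):
    # Standard margin-based triplet loss on relaxed (real-valued) codes:
    # push the anchor-positive distance below the anchor-negative
    # distance by at least `margin`.
    total = 0.0
    for a, p, n in triplets:
        d_ap = np.sum((codes[a] - codes[p]) ** 2)
        d_an = np.sum((codes[a] - codes[n]) ** 2)
        total += max(0.0, d_ap - d_an + margin)
    return total / len(triplets)
```

In practice the features would come from a pretrained deep network, the triplet set would be enlarged with augmented views of each image, and the loss would be minimized over a hash network whose outputs are binarized at retrieval time.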