Hash function
Computer science
Image retrieval
Artificial intelligence
Proportion (ratio)
Relational database
Image (mathematics)
Pattern recognition (psychology)
Natural language processing
Information retrieval
Programming language
Quantum mechanics
Physics
Authors
Z.-M. Lu, Lu Jin, Zechao Li, Jinhui Tang
Identifier
DOI: 10.1109/tmm.2023.3310333
Abstract
Supervised deep hashing aims to learn hash functions using label information. Existing methods learn hash functions by employing either a pairwise/triplet loss to explore the point-to-point relation or a center loss to explore the point-to-class relation. However, these methods overlook both the collaboration between these two kinds of relations and the hardness of pairs. In this work, we propose a novel Self-Paced Relational Contrastive Hashing (SPRCH) method with a single learning objective that captures valuable discriminative information from hard pairs using both the point-to-point and point-to-class relations. To exploit these two kinds of relations, the Relational Contrastive Hash (RCH) loss is proposed, which ensures that each data anchor is closer in the Hamming space to all similar data points and the corresponding class centers than to dissimilar ones. Moreover, the proposed RCH loss reduces the drastic imbalance between point-to-point pairs and point-to-class pairs by rebalancing their weights. To prioritize hard pairs, a self-paced learning schedule is proposed that assigns higher weights to these pairs in the RCH loss. The self-paced learning schedule assigns dynamic weights to pairs according to their similarities and the training progress. In this way, the deep hash model can initially learn universal patterns from the entire set of pairs and then gradually acquire more valuable discriminative information from hard pairs. Experimental results on four widely used image retrieval datasets demonstrate that our proposed SPRCH method significantly outperforms state-of-the-art supervised deep hashing methods.
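The mechanism the abstract describes can be made concrete with a minimal sketch. The code below is NOT the authors' exact formulation (the paper's loss and schedule are not given here); it is an illustrative NumPy approximation of a contrastive hash loss that (a) treats both similar points and the anchor's class center as positives in a single softmax-style objective, and (b) up-weights hard (low-similarity) positive pairs as training progresses. The function name `sprch_loss_sketch`, the temperature `tau`, and the linear progress schedule are all assumptions made for illustration.

```python
import numpy as np

def sprch_loss_sketch(codes, labels, centers, epoch, total_epochs, tau=0.5):
    """Illustrative (hypothetical) relational contrastive hash loss.

    codes   : (n, b) relaxed binary codes in [-1, 1]
    labels  : (n,)   integer class labels
    centers : (c, b) per-class code centers
    The inner product of b-bit codes is used as the usual proxy for
    (negative) Hamming distance.
    """
    n, b = codes.shape
    # Training progress in [0, 1] drives the self-paced schedule:
    # early on all pairs count roughly equally, later hard pairs dominate.
    progress = epoch / float(total_epochs)
    total = 0.0
    for i in range(n):
        sims = codes @ codes[i] / b                  # point-to-point, in [-1, 1]
        center_sim = centers[labels[i]] @ codes[i] / b  # point-to-class
        pos_mask = labels == labels[i]
        pos_mask[i] = False                          # exclude the anchor itself
        neg_mask = labels != labels[i]
        # Self-paced weight: harder positives (lower similarity) get
        # larger weights as training progresses.
        w_pos = 1.0 + progress * (1.0 - sims[pos_mask])
        pos_terms = np.concatenate([
            w_pos * np.exp(sims[pos_mask] / tau),    # point-to-point positives
            [np.exp(center_sim / tau)],              # point-to-class positive
        ])
        neg_terms = np.exp(sims[neg_mask] / tau)
        # InfoNCE-style: positives (points and center) vs. negatives.
        total += -np.log(pos_terms.sum() / (pos_terms.sum() + neg_terms.sum()))
    return total / n

# Toy usage with 8 samples, 16-bit codes, 2 classes.
rng = np.random.default_rng(0)
codes = np.sign(rng.standard_normal((8, 16)))
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
centers = np.stack([np.sign(codes[labels == c].sum(axis=0)) for c in (0, 1)])
loss = sprch_loss_sketch(codes, labels, centers, epoch=5, total_epochs=50)
```

Note that in this sketch the class-center term sits inside the same log-ratio as the point-to-point terms, which mirrors the paper's stated goal of a single objective coupling both relations, rather than summing two separate losses.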