Authors
Hengkui Dong,Xianzhong Long,Yun Li
Identifier
DOI:10.1007/s11063-024-11522-2
Abstract
Contrastive learning has emerged as an essential approach in self-supervised visual representation learning. Its main goal is to maximize the similarity between augmented versions of the same image (positive pairs) while minimizing the similarity between different images (negative pairs). Recent studies have demonstrated that harder negative samples, i.e., those that are more difficult to distinguish from the anchor sample, play a more crucial role in contrastive learning. However, many existing contrastive learning methods ignore the role of hard negative samples. To provide harder negative samples to the network model more efficiently, this paper proposes a novel feature-level sampling method: sampling synthetic hard negative samples for contrastive learning (SSCL). Specifically, we generate more, and harder, negative samples by mixing them through linear combination, ensure their reliability by debiasing, and finally perform weighted sampling over these negative samples. Compared with state-of-the-art methods, our method provides more high-quality negative samples. Experiments show that SSCL improves classification performance on different image datasets and can be readily integrated into existing methods.
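The pipeline the abstract describes (rank negatives by difficulty, synthesize harder ones by linear combination, then sample them with weights) can be sketched as follows. This is an illustrative reconstruction, not the authors' SSCL implementation: the function name, the parameters `k` and `n_synth`, and the exponential weighting are assumptions made for the sketch, and the debiasing step is omitted for brevity.

```python
import numpy as np

def synthesize_hard_negatives(anchor, negatives, k=4, n_synth=8, seed=None):
    """Illustrative feature-level hard-negative synthesis:
    1. rank negatives by cosine similarity to the anchor (harder = more similar),
    2. linearly mix random pairs of the top-k hardest to create new negatives,
    3. draw a weighted sample favouring the most anchor-similar synthetics.
    Parameter names and weighting scheme are hypothetical, not from the paper."""
    rng = np.random.default_rng(seed)
    # L2-normalize so dot products are cosine similarities
    a = anchor / np.linalg.norm(anchor)
    negs = negatives / np.linalg.norm(negatives, axis=1, keepdims=True)
    sims = negs @ a
    hard = negs[np.argsort(sims)[-k:]]  # the k hardest negatives
    # synthesize new negatives as convex combinations of random hard pairs
    i, j = rng.integers(0, k, size=(2, n_synth))
    alpha = rng.uniform(0.0, 1.0, size=(n_synth, 1))
    synth = alpha * hard[i] + (1.0 - alpha) * hard[j]
    synth /= np.linalg.norm(synth, axis=1, keepdims=True)
    # weighted sampling: probability proportional to exp(similarity to anchor)
    w = np.exp(synth @ a)
    w /= w.sum()
    idx = rng.choice(n_synth, size=n_synth, replace=True, p=w)
    return synth[idx]

rng = np.random.default_rng(0)
anchor = rng.normal(size=16)
negatives = rng.normal(size=(32, 16))
out = synthesize_hard_negatives(anchor, negatives, seed=0)
print(out.shape)  # (8, 16): n_synth unit-norm synthetic negatives
```

In a real contrastive loss (e.g., InfoNCE), the returned vectors would be appended to the negative set for the current anchor; the paper additionally debiases the synthetics to reduce the chance that a mixed sample is actually a false negative.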