Keywords
Cluster analysis, Artificial intelligence, Computer science, Sample (material), Pattern recognition (psychology), Variance (accounting), Construct (Python library), Feature (linguistics), False positive paradox, k-nearest neighbors algorithm, Deep learning, Natural language processing, Machine learning, Business, Programming language, Chemistry, Philosophy, Accounting, Chromatography, Linguistics
Authors
Jun Yin, Haowei Wu, Shiliang Sun
Identifier
DOI:10.1016/j.inffus.2023.101899
Abstract
As an indispensable branch of unsupervised learning, deep clustering is rapidly emerging along with the growth of deep neural networks. Recently, the contrastive learning paradigm has been combined with deep clustering to achieve more competitive performance. However, previous works mostly employ random augmentations to construct sample pairs for contrastive clustering. Different augmentations of a sample are treated as positive sample pairs, which may result in false positives and ignores the semantic variation across different samples. To address these limitations, we present a novel end-to-end contrastive clustering framework termed Contrastive Clustering with Effective Sample pairs construction (CCES), which obtains more semantic information by jointly leveraging an effective data augmentation method, ContrastiveCrop, and constructing positive sample pairs based on nearest-neighbor mining. Specifically, we augment original samples by adopting ContrastiveCrop, which explicitly reduces false positives and enlarges the variance of samples. Further, with the extracted feature representations, we provide a strategy to construct positive sample pairs from a sample and its nearest neighbor for instance-wise and cluster-wise contrastive learning. Experimental results on four challenging datasets demonstrate the effectiveness of CCES for clustering, surpassing state-of-the-art deep clustering methods.
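The abstract's core idea of pairing each sample with its nearest neighbor in feature space, rather than only with its own augmentation, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function names, the cosine-similarity neighbor search, and the InfoNCE-style loss with a single nearest-neighbor positive per sample are all illustrative assumptions about how such a strategy could look.

```python
import numpy as np

def l2_normalize(z, eps=1e-12):
    # Project embeddings onto the unit sphere so dot products are cosine similarities.
    return z / (np.linalg.norm(z, axis=1, keepdims=True) + eps)

def nearest_neighbor_positives(z):
    """For each embedding, return the index of its nearest neighbor
    under cosine similarity, excluding the sample itself (hypothetical helper)."""
    z = l2_normalize(z)
    sim = z @ z.T
    np.fill_diagonal(sim, -np.inf)  # a sample must not be its own positive
    return sim.argmax(axis=1)

def info_nce_with_nn_positives(z, temperature=0.5):
    """InfoNCE-style contrastive loss where each sample's positive is its
    mined nearest neighbor and all other samples act as negatives (a sketch)."""
    z = l2_normalize(z)
    n = z.shape[0]
    pos = nearest_neighbor_positives(z)
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)  # drop self-similarity from the softmax
    # Row-wise log-softmax; the loss is the negative log-probability of the positive.
    logits = sim - sim.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(n), pos].mean()
```

With two well-separated clusters of embeddings, every mined positive falls inside the sample's own cluster, so minimizing this loss pulls semantically similar samples together instead of only aligning two augmentations of one image.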