Cluster analysis
Computer science
Inference
Discriminative model
Graph
Data mining
Hierarchical clustering
Cluster (spacecraft)
Disjoint sets
Single-linkage clustering
Clustering coefficient
Artificial intelligence
Pattern recognition (psychology)
Correlation clustering
Theoretical computer science
Mathematics
CURE data clustering algorithm
Combinatorics
Programming language
Authors
Han Zhi-gang, Xu Yang, Cheng Deng
Abstract
Deep graph clustering, which efficiently divides nodes into multiple disjoint clusters in an unsupervised manner, has become a crucial tool for analyzing ubiquitous graph data. Existing methods have achieved impressive clustering performance by optimizing the clustering network under a parametric condition: the true number of clusters (K_tr) is predefined. However, K_tr is inaccessible in purely unsupervised scenarios, in which existing methods are incapable of inferring the number of clusters (K), which limits their feasibility. This paper proposes the first Parameter-Agnostic Deep Graph Clustering method (PADGC), which consists of two core modules, K-guidance clustering and topological-hierarchical inference, to infer K efficiently and produce strong clustering predictions. Specifically, K-guidance clustering optimizes the cluster assignments and discriminative embeddings in a mutually promoting manner under the most recently updated K, even when K deviates from K_tr. In turn, the optimized cluster assignments are used in the topological-hierarchical inference to infer a more accurate K by splitting dispersive clusters and merging coupled ones. In this way, the two modules are complementarily optimized until they yield a final convergent K and discriminative cluster assignments. Extensive experiments on several benchmarks, including graphs and images, demonstrate the superiority of our method: in 11 out of 12 datasets, the mean value of our inferred K deviates from K_tr by less than 1. Our method also achieves clustering performance competitive with existing parametric deep graph clustering methods.
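The abstract describes an alternating loop: cluster under the current K, then split dispersive clusters and merge coupled ones to update K, repeating until K converges. The sketch below illustrates only that loop structure under simplified assumptions (k-means on fixed embeddings, hypothetical split/merge thresholds); it is not PADGC itself, which jointly trains a deep graph clustering network to produce the embeddings and assignments.

```python
# Conceptual sketch of the alternating "cluster -> split/merge -> update K" loop
# described in the abstract. The function name, thresholds, and the use of
# k-means on fixed embeddings are illustrative assumptions, not the paper's method.
import numpy as np
from sklearn.cluster import KMeans

def infer_k(embeddings, k_init=2, split_thresh=2.0, merge_thresh=0.5, max_iter=20):
    k = k_init
    for _ in range(max_iter):
        # Stand-in for K-guidance clustering: assign nodes under the current K.
        km = KMeans(n_clusters=k, n_init=10).fit(embeddings)
        labels, centers = km.labels_, km.cluster_centers_

        # Per-cluster dispersion: mean distance of members to their centroid.
        dispersions = np.array([
            np.linalg.norm(embeddings[labels == c] - centers[c], axis=1).mean()
            for c in range(k)
        ])

        # Stand-in for topological-hierarchical inference: split clusters that are
        # far more dispersive than average, merge clusters whose centroids are
        # closer than a fraction of the average dispersion.
        n_split = int(np.sum(dispersions > split_thresh * dispersions.mean()))
        center_dists = np.linalg.norm(centers[:, None] - centers[None, :], axis=-1)
        close = (center_dists < merge_thresh * dispersions.mean()) & ~np.eye(k, dtype=bool)
        n_merge = int(np.triu(close).sum())

        new_k = max(2, k + n_split - n_merge)
        if new_k == k:  # K has converged
            return k, labels
        k = new_k
    return k, labels
```

In the actual method, the embeddings are not fixed: the clustering network and the cluster assignments are refined together under each intermediate K, so the two modules reinforce each other rather than running on static features as in this sketch.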