Authors
Yugen Yi,Ningyi Zhang,Zehui Zhang,Yijian Fu,Lei Chen,Jianzhong Wang
DOI
10.1109/tnnls.2025.3551159
Abstract
Multiview clustering (MVC) with contrastive learning (CL) has attracted considerable interest. Nevertheless, existing methods enforce coherence between views only at the feature representation level or only at the cluster representation level, and some of them perform poorly and lack robustness when handling noisy data. This article introduces an efficient multilevel fusion CL framework for MVC, called EMLFCL. The EMLFCL model seamlessly incorporates a shared multilayer perceptron (MLP) network (MNet) and a fusion network (FNet) to capture and merge common representation information, which effectively eliminates the impact of view-specific private information during clustering. Specifically, we establish an efficient multilevel CL strategy at both the feature representation level and the cluster representation level. Rather than relying on pairwise comparisons between views, the proposed CL strategy compares each view with an anchor view. Because the anchor view contains abundant shared information, this strategy effectively mitigates the influence of view-specific and noisy information on model performance. Extensive experiments on eleven challenging multiview datasets show that the proposed method outperforms numerous advanced approaches; in particular, it achieves 66.4%, 74.7%, 82.3%, and 86.4% clustering accuracy on the four Caltech datasets with different numbers of views, respectively.
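To make the anchor-based idea concrete, the sketch below illustrates one plausible reading of the abstract in PyTorch: a shared MLP (here called MNet) maps each view into a common feature space, a fusion module (FNet) produces an anchor representation, and an InfoNCE-style loss contrasts every view against that anchor instead of against all other views pairwise. The layer sizes, the mean-based fusion, the temperature, and the assumption that views are already projected to a common input dimension are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of anchor-based multiview contrastive alignment (PyTorch).
# Architecture details are assumptions; only the anchor-vs-pairwise idea and the
# MNet/FNet naming come from the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MNet(nn.Module):
    """Shared MLP mapping each (pre-projected) view to a normalized common space."""
    def __init__(self, in_dim: int, feat_dim: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, feat_dim),
        )

    def forward(self, x):
        return F.normalize(self.mlp(x), dim=1)


class FNet(nn.Module):
    """Fuses per-view representations into an anchor view (here: mean + linear)."""
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.proj = nn.Linear(feat_dim, feat_dim)

    def forward(self, view_feats):
        fused = torch.stack(view_feats, dim=0).mean(dim=0)  # average over views
        return F.normalize(self.proj(fused), dim=1)


def anchor_contrastive_loss(view_feats, anchor, temperature: float = 0.5):
    """InfoNCE-style loss contrasting each view against the anchor only,
    rather than against every other view pairwise."""
    loss = 0.0
    for z in view_feats:
        logits = z @ anchor.t() / temperature                 # (N, N) similarities
        targets = torch.arange(z.size(0), device=z.device)    # positives on the diagonal
        loss = loss + F.cross_entropy(logits, targets)
    return loss / len(view_feats)


# Toy usage: three views of 64 samples, each already projected to 256 dimensions.
views = [torch.randn(64, 256) for _ in range(3)]
mnet, fnet = MNet(256), FNet()
feats = [mnet(v) for v in views]
anchor = fnet(feats)
print(anchor_contrastive_loss(feats, anchor))
```

The same contrast-against-anchor pattern could be applied a second time to soft cluster assignments to obtain the cluster-level part of the multilevel strategy; that step is omitted here for brevity.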