Discriminant
Feature selection
Norm (philosophy)
Artificial intelligence
Cluster analysis
Pattern recognition (psychology)
Computer science
Optimization problem
Mathematics
Feature (linguistics)
Mathematical optimization
Political science
Law
Linguistics
Philosophy
Authors
Xia Dong,Feiping Nie,Lai Tian,Rong Wang,Xuelong Li
Identifiers
DOI:10.1109/tpami.2025.3580669
Abstract
Feature selection plays an important role in a wide range of applications. Most sparsity-based feature selection methods solve a relaxed $\ell_{2,p}$-norm ($0 < p \leq 1$) regularized problem, which often results in a sub-optimal feature subset and requires extensive effort to tune regularization parameters. Optimizing the non-convex $\ell_{2,0}$-norm constrained problem remains an open challenge. Existing optimization algorithms for solving the $\ell_{2,0}$-norm constrained problem often rely on specific data distribution assumptions and cannot guarantee global convergence. In this article, we propose an unsupervised discriminative feature selection method using $\ell_{2,0}$-norm constrained sparse projection (SPDFS) to address these challenges. Specifically, building on the principle of supervised linear discriminant analysis, fuzzy membership learning and $\ell_{2,0}$-norm constrained projection learning are jointly performed to learn a feature-wise sparse projection for unsupervised discriminative feature selection. More importantly, we follow two optimization strategies to address the NP-hard nature of the problem: a non-iterative algorithm with a globally optimal solution is derived for a special case, and an iterative algorithm with both ascent property and approximation guarantee is employed for the general case. Additionally, we explore the relationship between our model and its potential variants. Experimental results on both synthetic and real-world datasets demonstrate the superiority of the proposed method over several state-of-the-art methods in data clustering and text classification tasks. The code is available at: https://github.com/xiadongcs/SPDFS.
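To make the central constraint concrete: for a projection matrix $W \in \mathbb{R}^{d \times m}$, the $\ell_{2,0}$-norm counts the number of nonzero rows, so the constraint $\|W\|_{2,0} \leq k$ forces at most $k$ of the $d$ features to participate in the projection. The sketch below is a generic illustration of this row-sparsity idea (keeping the $k$ rows with the largest $\ell_2$ norms), not a reproduction of the SPDFS algorithm itself; the function name `project_l20` and the toy data are assumptions for illustration.

```python
import numpy as np

def project_l20(W, k):
    """Enforce ||W||_{2,0} <= k by keeping the k rows of W with the
    largest l2 norms and zeroing out all other rows (hard thresholding).
    Returns the row-sparse matrix and the sorted indices of kept rows."""
    row_norms = np.linalg.norm(W, axis=1)       # one l2 norm per feature/row
    keep = np.sort(np.argsort(row_norms)[-k:])  # indices of the top-k rows
    W_sparse = np.zeros_like(W)
    W_sparse[keep] = W[keep]
    return W_sparse, keep

# Toy example: 6 candidate features, project to 3 dimensions, keep 2 features.
rng = np.random.default_rng(0)
W = rng.standard_normal((6, 3))
W_sparse, selected = project_l20(W, k=2)
print("selected feature indices:", selected)
print("nonzero rows:", int((np.linalg.norm(W_sparse, axis=1) > 0).sum()))  # 2
```

The indices of the surviving rows directly give the selected feature subset, which is why a feature-wise sparse projection doubles as a feature selector.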