Robust principal component analysis
Principal component analysis
Reduction
Feature selection
Computer science
Artificial intelligence
Pattern recognition (psychology)
Selection (genetic algorithm)
Component (thermodynamics)
Feature (linguistics)
Mathematical optimization
Machine learning
Algorithm
Mathematics
Linguistics
Thermodynamics
Physics
Philosophy
Authors
Jintang Bian, Dandan Zhao, Feiping Nie, Rong Wang, Xuelong Li
Identifier
DOI: 10.1109/tnnls.2022.3194896
Abstract
Principal component analysis (PCA) is one of the most successful unsupervised subspace learning methods and has been used in many practical applications. To deal with outliers in real-world data, robust PCA models based on various measures have been proposed. However, conventional PCA models can only transform features into an unknown subspace for dimensionality reduction and cannot perform the feature selection task. In this article, we propose a novel robust PCA (RPCA) model that mitigates the impact of outliers and conducts feature selection simultaneously. First, we adopt the σ-norm as the reconstruction error (RE), which plays an important role in robust reconstruction. Second, to conduct the feature selection task, we impose an ℓ2,0-norm constraint on the subspace projection. Furthermore, an efficient iterative optimization algorithm is proposed to solve the objective function under this nonconvex and nonsmooth constraint. Extensive experiments conducted on several real-world datasets demonstrate the effectiveness and superiority of the proposed feature selection model.
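To make the idea of an ℓ2,0-constrained projection concrete, the sketch below shows a toy row-sparse PCA: a power-iteration-style update of the projection matrix W alternated with a hard truncation that keeps only the rows of W with the largest ℓ2 norms, so the surviving rows index the selected features. The function name `row_sparse_pca`, the truncated power-iteration scheme, and all parameter choices are illustrative assumptions; this is not the paper's σ-norm model or its optimization algorithm.

```python
import numpy as np

def row_sparse_pca(X, n_components=2, n_selected=5, n_iter=50, seed=0):
    """Illustrative sketch only: feature selection via a row-sparse PCA projection.

    Alternates a power-iteration-style update of W with a hard l2,0-style
    truncation that zeroes all but the `n_selected` rows of W with the largest
    l2 norms, so the surviving rows index the selected features. This is a
    simplified stand-in, not the paper's sigma-norm model or its solver.
    """
    X = np.asarray(X, dtype=float)
    X = X - X.mean(axis=0)                  # center each feature
    C = X.T @ X                             # d x d scatter matrix
    d = C.shape[0]
    assert n_selected >= n_components       # needed for the QR re-orthonormalization
    rng = np.random.default_rng(seed)
    W, _ = np.linalg.qr(rng.standard_normal((d, n_components)))
    for _ in range(n_iter):
        W = C @ W                           # move W toward the leading eigenspace
        row_norms = np.linalg.norm(W, axis=1)
        keep = np.argsort(row_norms)[-n_selected:]
        mask = np.zeros(d, dtype=bool)
        mask[keep] = True
        W[~mask] = 0.0                      # enforce ||W||_{2,0} <= n_selected
        W[mask], _ = np.linalg.qr(W[mask])  # re-orthonormalize the kept block
    selected = np.flatnonzero(np.linalg.norm(W, axis=1) > 0)
    return W, selected

# Toy usage: pick 5 of 20 features from random data
X = np.random.default_rng(1).standard_normal((100, 20))
W, selected = row_sparse_pca(X, n_components=3, n_selected=5)
print("selected feature indices:", selected)
```

The hard truncation step is what makes the projection directly usable for feature selection: features whose rows of W are zeroed contribute nothing to the learned subspace, so the nonzero rows give an explicit feature subset rather than an opaque linear mixture of all inputs.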