Subspace topology
Linear subspace
Embedding
Computer science
Pattern recognition (psychology)
Projection (relational algebra)
Robustness (evolution)
Artificial intelligence
Gaussian distribution
Gaussian noise
Algorithm
Random subspace method
Random projection
Principal component analysis
Mathematics
Biochemistry
Quantum mechanics
Gene
Physics
Geometry
Chemistry
Authors
Xi Peng, Jiwen Lu, Yi Zhang, Rui Yan
Identifier
DOI: 10.1109/tcyb.2016.2572306
Abstract
In this paper, we address two challenging problems in unsupervised subspace learning: 1) how to automatically identify the feature dimension of the learned subspace (i.e., automatic subspace learning) and 2) how to learn the underlying subspace in the presence of Gaussian noise (i.e., robust subspace learning). We show that these two problems can be simultaneously solved by a new method called principal coefficients embedding (PCE). For a given data set X, PCE recovers a clean data set X0 from X and simultaneously learns a global reconstruction relation C of X0. By preserving C into an m'-dimensional space, the proposed method obtains a projection matrix that can capture the latent manifold structure of X0, where m' is automatically determined by the rank of C with theoretical guarantees. PCE has three advantages: 1) it can automatically determine the feature dimension even though the data are sampled from a union of multiple linear subspaces in the presence of Gaussian noise; 2) although the objective function of PCE only considers Gaussian noise, experimental results show that it is robust to non-Gaussian noise (e.g., random pixel corruption) and real disguises; and 3) our method has a closed-form solution and can be computed very fast. Extensive experimental results show the superiority of PCE on a range of databases with respect to classification accuracy, robustness, and efficiency.
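The abstract sketches a three-step pipeline: recover a clean X0 from noisy X, learn reconstruction coefficients C in closed form, and embed into a space whose dimension is set by rank(C). Below is a minimal numpy sketch of such a pipeline under stated assumptions; the function name pce_sketch, the energy-ratio rank rule, the shape-interaction-style choice C = Vr Vr^T, and the NPE-style projection step are illustrative stand-ins, not the authors' exact closed-form solution.

```python
import numpy as np

def pce_sketch(X, sv_ratio=0.99):
    """Hypothetical PCE-style pipeline (illustrative, not the paper's exact algorithm).

    X        : (m, n) data matrix, one sample per column.
    sv_ratio : fraction of spectral energy kept when estimating the rank;
               a stand-in for PCE's theoretically grounded rank selection.
    """
    # 1) SVD of the noisy data.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)

    # 2) Estimate the rank r from the singular-value spectrum (assumption:
    #    the paper derives r from rank(C) with guarantees; we approximate
    #    with an energy-ratio threshold).
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = min(int(np.searchsorted(energy, sv_ratio)) + 1, len(s))

    # 3) Rank-r reconstruction as the "clean" data X0 (Gaussian denoising).
    X0 = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

    # 4) Closed-form global reconstruction coefficients: with X0 = X0 @ C,
    #    a standard choice is the shape-interaction-style matrix C = Vr Vr^T,
    #    which satisfies X0 @ C == X0 exactly and has rank r.
    Vr = Vt[:r, :].T            # (n, r)
    C = Vr @ Vr.T               # (n, n)

    # 5) Learn a projection P that preserves C, NPE-style: minimise
    #    ||P^T X - P^T X C||_F^2 over orthonormal P, i.e. take the bottom
    #    eigenvectors of M = X (I - C)(I - C)^T X^T. The embedding
    #    dimension equals r = rank(C), so no target dimension is supplied.
    n = X.shape[1]
    R = X @ (np.eye(n) - C)
    M = R @ R.T
    w, V = np.linalg.eigh(M)    # eigenvalues in ascending order
    P = V[:, :r]                # (m, r) projection matrix

    return P, X0, C, r

# Example usage on synthetic data (100 features, 200 samples):
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 200))
P, X0, C, r = pce_sketch(X)
```

Note that the caller never specifies a target dimension: r falls out of the spectrum of the data, mirroring the automatic dimension selection the abstract claims; the projection step here is a simplified orthonormal variant rather than the paper's exact formulation.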