Subspace topology
Artificial intelligence
Constraint (computer-aided design)
Feature learning
Representation (politics)
Tensor (intrinsic definition)
Norm (philosophy)
Mathematics
Linear subspace
Rank (graph theory)
Computer science
Algorithm
Mathematical optimization
Theoretical computer science
Combinatorics
Politics
Law
Pure mathematics
Political science
Geometry
Authors
Changqing Zhang,Huazhu Fu,Jing Wang,Li Wen,Xiaochun Cao,Qinghua Hu
Identifier
DOI:10.1007/s11263-020-01307-0
Abstract
Self-representation based subspace learning has shown its effectiveness in many applications. In this paper, we advance traditional subspace representation learning by simultaneously taking advantage of multiple views and a prior constraint. Accordingly, we establish a novel algorithm termed Tensorized Multi-view Subspace Representation Learning. To exploit different views, the subspace representation matrices of the individual views are stacked into a low-rank tensor, which effectively models the high-order correlations of multi-view data. To incorporate prior information, a constraint matrix is devised to guide the subspace representation learning within a unified framework. The subspace representation tensor, equipped with a low-rank constraint, elegantly models the complementary information among different views, reduces the redundancy of the subspace representations, and thereby improves the accuracy of subsequent tasks. We formulate the model as a tensor nuclear norm minimization problem with an $\ell_{2,1}$-norm regularizer and linear equality constraints. The minimization problem is solved efficiently with an Augmented Lagrangian Alternating Direction Minimization method. Extensive experimental results on diverse multi-view datasets demonstrate the effectiveness of our algorithm.
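The two penalties named in the abstract can be illustrated concretely. Below is a minimal NumPy sketch, not the authors' implementation: it stacks per-view subspace representation matrices into a third-order tensor and evaluates a common t-SVD-based tensor nuclear norm (mean of the nuclear norms of the Fourier-domain frontal slices) together with the $\ell_{2,1}$-norm (sum of column-wise $\ell_2$ norms); exact normalization conventions vary across papers, and the singular value thresholding helper stands in for the kind of proximal step an ADMM solver would use.

```python
import numpy as np

def tensor_nuclear_norm(Z):
    """t-SVD-based tensor nuclear norm of an (n, n, V) tensor Z:
    FFT along the view mode, then the mean over frontal slices of
    the sum of singular values. (Some papers omit the 1/V factor.)"""
    Zf = np.fft.fft(Z, axis=2)
    V = Z.shape[2]
    return sum(np.linalg.svd(Zf[:, :, v], compute_uv=False).sum()
               for v in range(V)) / V

def l21_norm(E):
    """l_{2,1} norm: sum of the l2 norms of the columns of E,
    which encourages column-wise (sample-wise) sparsity of the error."""
    return np.linalg.norm(E, axis=0).sum()

def svt(M, tau):
    """Singular value thresholding: proximal operator of the matrix
    nuclear norm, a typical building block of an ADMM subproblem."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Toy usage: three identical 2x2 identity "views" stacked as a tensor.
Z = np.stack([np.eye(2)] * 3, axis=2)   # shape (2, 2, 3)
print(tensor_nuclear_norm(Z))            # 2.0 for this constant tensor
print(l21_norm(np.eye(3)))               # 3.0
```

The FFT-domain formulation is what makes the tensor nuclear norm tractable: each frontal slice is handled with an ordinary matrix SVD, so the low-rank tensor constraint on the stacked views reduces to slice-wise matrix computations.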