Keywords
Autoencoder
Jacobian matrix and determinant
Constraint (computer-aided design)
Mathematics
Manifold (fluid mechanics)
Dimensionality reduction
Rank (graph theory)
Dimension (graph theory)
Curse of dimensionality
Nonlinear dimensionality reduction
Intrinsic dimension
Operator (biology)
Artificial intelligence
Algorithm
Pattern recognition (psychology)
Applied mathematics
Deep learning
Computer science
Combinatorics
Geometry
Mechanical engineering
Biochemistry
Chemistry
Repressor
Transcription factor
Engineering
Gene
Authors
Rustem Takhanov,Sultan Abylkairov,Maxat Tezekbayev
Identifier
DOI:10.1016/j.patcog.2023.109777
Abstract
We formulate the manifold learning problem as the problem of finding an operator that maps any point to a close neighbor lying on a "hidden" k-dimensional manifold. We call this operator the correcting function. Under this formulation, autoencoders can be viewed as a tool for approximating the correcting function. Given an autoencoder whose Jacobian has rank k, we deduce from the classical Constant Rank Theorem that its range has the structure of a k-dimensional manifold. The k-dimensionality of the range can be forced by the architecture of the autoencoder (by fixing the dimension of the code space) or, alternatively, by an additional constraint that the rank of the autoencoder mapping be at most k. This constraint is included in the objective function as a new term, namely the squared Ky-Fan k-antinorm of the Jacobian function. We claim that this constraint effectively reduces the dimension of the range of the autoencoder, beyond the reduction already imposed by the architecture. We also add a new curvature term to the objective. Finally, we experimentally compare our approach with the CAE+H method on synthetic and real-world datasets.
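As a rough illustration of the rank penalty described in the abstract, the sketch below assumes the squared Ky-Fan k-antinorm of a Jacobian J is the sum of squared singular values of J beyond the k largest, which vanishes exactly when rank(J) ≤ k. The function name and the toy linear autoencoder are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def jacobian_rank_penalty(J, k):
    """Assumed squared Ky-Fan k-antinorm: sum of squared singular
    values of J beyond the k largest (zero iff rank(J) <= k)."""
    s = np.linalg.svd(J, compute_uv=False)  # singular values, descending
    return float(np.sum(s[k:] ** 2))

# Toy linear autoencoder f(x) = D @ (E @ x); its Jacobian is the
# constant matrix D @ E, which has rank <= 2 by construction.
rng = np.random.default_rng(0)
E = rng.standard_normal((2, 5))  # encoder: code dimension k = 2
D = rng.standard_normal((5, 2))  # decoder
J = D @ E

print(jacobian_rank_penalty(J, 2))        # effectively 0: rank already <= k
print(jacobian_rank_penalty(np.eye(5), 2))  # 3.0: three unit singular values penalized
```

In a training objective, a term like this would be evaluated on the Jacobian of the (generally nonlinear) autoencoder at sample points and added to the reconstruction loss, pushing the mapping toward rank at most k.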