Isomap
Nonlinear dimensionality reduction
Dimensionality reduction
Diffusion map
Multidimensional scaling
Artificial intelligence
Principal component analysis
Exploratory data analysis
Data point
Computer science
Pattern recognition (psychology)
Curse of dimensionality
Machine learning
Mathematics
Manifold (mathematics)
Manifold alignment
Data mining
Engineering
Mechanical engineering
Abstract
A popular research area today in statistics and machine learning is that of manifold learning, which is related to the algorithmic techniques of dimensionality reduction. Manifold learning can be divided into linear and nonlinear methods. Linear methods, which have long been part of the statistician's toolbox for analyzing multivariate data, include principal component analysis (PCA) and multidimensional scaling (MDS). Recently, there has been a flurry of research activity on nonlinear manifold learning, which includes Isomap, local linear embedding, Laplacian eigenmaps, Hessian eigenmaps, and diffusion maps. Some of these techniques are nonlinear generalizations of the linear methods. The algorithmic process of most of these techniques consists of three steps: a nearest-neighbor search, a definition of distances or affinities between points (a key ingredient for the success of these methods), and an eigenproblem for embedding high-dimensional points into a lower-dimensional space. This article gives a brief survey of these new methods and indicates their strengths and weaknesses. WIREs Comput Stat 2012. doi: 10.1002/wics.1222

This article is categorized under:
Statistical and Graphical Methods of Data Analysis > Dimension Reduction
Statistical Learning and Exploratory Methods of the Data Sciences > Manifold Learning
Statistical and Graphical Methods of Data Analysis > Multivariate Analysis
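To make the three-step recipe in the abstract concrete, here is a minimal sketch using Laplacian eigenmaps as the running example: a nearest-neighbor search, Gaussian affinities between neighboring points, and a generalized eigenproblem whose bottom nontrivial eigenvectors give the low-dimensional embedding. The function name and the parameter choices (k, sigma) are illustrative assumptions, not taken from the surveyed paper.

import numpy as np
from scipy.spatial import cKDTree
from scipy.linalg import eigh

def laplacian_eigenmaps(X, n_components=2, k=10, sigma=1.0):
    # Hypothetical helper illustrating the generic three-step scheme.
    n = X.shape[0]

    # Step 1: nearest-neighbor search.
    tree = cKDTree(X)
    dist, idx = tree.query(X, k=k + 1)  # first neighbor is the point itself

    # Step 2: affinities between neighboring points (Gaussian kernel),
    # symmetrized so the neighborhood graph is undirected.
    W = np.zeros((n, n))
    for i in range(n):
        for d, j in zip(dist[i, 1:], idx[i, 1:]):
            w = np.exp(-d**2 / (2.0 * sigma**2))
            W[i, j] = max(W[i, j], w)
            W[j, i] = W[i, j]

    # Step 3: eigenproblem. Solve L y = lambda D y with L = D - W and
    # keep the eigenvectors of the smallest nonzero eigenvalues.
    D = np.diag(W.sum(axis=1))
    L = D - W
    vals, vecs = eigh(L, D)  # eigenvalues in ascending order
    return vecs[:, 1:n_components + 1]  # skip the constant eigenvector

# Usage: embed a noisy 3-D "Swiss roll" into 2 dimensions.
rng = np.random.default_rng(0)
t = 3 * np.pi * (1 + 2 * rng.random(500)) / 2
X = np.column_stack([t * np.cos(t), 20 * rng.random(500), t * np.sin(t)])
Y = laplacian_eigenmaps(X, n_components=2, k=10, sigma=5.0)
print(Y.shape)  # (500, 2)

Isomap and diffusion maps follow the same skeleton but differ in step 2 (graph shortest-path distances versus a normalized diffusion kernel) and in which end of the spectrum step 3 retains.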