Autoencoder
Dimensionality reduction
Pattern recognition (psychology)
Artificial intelligence
Computer science
Sparse approximation
Feature learning
Curse of dimensionality
Feature (linguistics)
Feature vector
Reduction (mathematics)
Unsupervised learning
Machine learning
Deep learning
Mathematics
Philosophy
Linguistics
Geometry
Authors
Jianran Liu,Chan Li,Wenyuan Yang
Source
Journal: IEEE Access
[Institute of Electrical and Electronics Engineers]
Date: 2018-12-04
Volume/Issue: 6: 73802-73814
Citations: 10
Identifier
DOI: 10.1109/access.2018.2884697
Abstract
Dimensionality reduction is commonly used to preprocess high-dimensional data and is an essential step in machine learning and data mining. A good low-dimensional feature can improve the efficiency of subsequent learning tasks. However, existing dimensionality reduction methods mostly target datasets with sufficient labels and fail to produce effective feature vectors for datasets with insufficient labels. In this paper, an unsupervised multilayer sparse autoencoder model is studied. Its advantage is that it takes minimizing the reconstruction error as its optimization goal, so the resulting low-dimensional feature reconstructs the original data as faithfully as possible. The reduction of high-dimensional datasets to low-dimensional ones is therefore effective. First, the relationship among the reconstructed data, the number of iterations, and the number of hidden variables is explored. Second, the dimensionality reduction ability of the sparse autoencoder is demonstrated: several classical feature representation methods are compared with the sparse autoencoder on publicly available datasets, the corresponding low-dimensional representations are fed into different supervised classifiers, and the classification performances are reported. Finally, by adjusting the parameters that might influence classification performance, the parametric sensitivity of the sparse autoencoder is shown. Extensive low-dimensional feature classification experiments demonstrate that the sparse autoencoder is more efficient and reliable than the other selected classical dimensionality reduction algorithms.
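The core idea the abstract describes, training an unsupervised autoencoder whose hidden layer is smaller than the input and whose loss is the reconstruction error plus a sparsity penalty, then reusing the hidden activations as low-dimensional features for downstream classifiers, can be sketched in plain NumPy. This is a minimal single-hidden-layer illustration, not the authors' multilayer model; the toy data, learning rate, and L1 sparsity weight are all assumed for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy high-dimensional data: 200 samples in 20 dimensions that lie
# near a 3-dimensional subspace (so a 3-unit code can reconstruct them).
basis = rng.normal(size=(3, 20))
X = rng.normal(size=(200, 3)) @ basis + 0.01 * rng.normal(size=(200, 20))

n_in, n_hidden = X.shape[1], 3          # 20 -> 3 dimensionality reduction
W1 = rng.normal(scale=0.1, size=(n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, n_in)); b2 = np.zeros(n_in)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr, lam = 0.2, 1e-4                      # assumed step size and sparsity weight
losses = []
n = X.shape[0]
for epoch in range(2000):
    H = sigmoid(X @ W1 + b1)             # low-dimensional code (encoder)
    Xhat = H @ W2 + b2                   # linear reconstruction (decoder)
    err = Xhat - X
    # Objective: mean squared reconstruction error + L1 penalty on the code.
    losses.append((err ** 2).mean() + lam * np.abs(H).mean())
    # Backpropagation of both loss terms.
    dXhat = 2.0 * err / (n * n_in)
    dW2 = H.T @ dXhat
    db2 = dXhat.sum(axis=0)
    dH = dXhat @ W2.T + lam * np.sign(H) / (n * n_hidden)
    dZ1 = dH * H * (1.0 - H)             # sigmoid derivative
    dW1 = X.T @ dZ1
    db1 = dZ1.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# The hidden activations are the low-dimensional features that would be
# passed to a supervised classifier in the paper's evaluation protocol.
codes = sigmoid(X @ W1 + b1)
```

After training, `codes` is a 200x3 matrix replacing the original 200x20 data, and `losses` should decrease over epochs, which mirrors the paper's point that reconstruction error is the optimization target.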