Computer science
Unsupervised learning
Feature selection
Artificial intelligence
Embedding
Feature learning
Machine learning
Feature (linguistics)
Selection (genetic algorithm)
Pattern recognition (psychology)
Philosophy
Linguistics
Authors
Zebiao Hu,Jian Wang,Jacek Mańdziuk,Z. Y. Ren,Nikhil R. Pal
Identifier
DOI:10.1109/tcyb.2025.3546658
Abstract
Most unsupervised feature selection methods explore only the first-order similarity of the data while ignoring the high-order similarity of the instances, which easily leads to a suboptimal similarity graph. Furthermore, such methods are often unsuitable for feature selection because of their high computational complexity, especially when the dimensionality of the data is high. To address these issues, a novel method, termed unsupervised feature selection with high-order embedding learning and sparse learning (UFSHS), is proposed to select useful features. More concretely, UFSHS first exploits the high-order similarity of the original input to construct an optimal similarity graph that accurately reveals the essential geometric structure of high-dimensional data. It then builds a unified framework, integrating high-order embedding learning and sparse learning, to learn an appropriate projection matrix with row sparsity, which helps to select an optimal subset of features. Moreover, we design a novel alternating optimization method that applies different optimization strategies according to the relationship between the number of instances and the dimensionality, which significantly reduces the computational complexity of the model. Notably, the proposed optimization strategy is also shown to be applicable to ridge regression, broad learning systems, and fuzzy systems. Extensive experiments on nine public datasets illustrate the superiority and efficiency of UFSHS.
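The abstract does not disclose the exact objective function or update rules of UFSHS. The sketch below is only a minimal illustration of the two ingredients it mentions: building a high-order similarity graph (here approximated by adding normalized powers of a first-order kNN graph) and learning a row-sparse projection whose row norms score features (here via a standard iteratively reweighted l2,1 surrogate). All function names, parameters, and update rules are assumptions for illustration, not the authors' method.

```python
import numpy as np

def high_order_similarity(X, k=5, order=2):
    """Build a kNN similarity graph and augment it with higher-order terms
    (normalized powers of the first-order graph). Illustrative only."""
    n = X.shape[0]
    sq = np.sum(X**2, axis=1)
    D = sq[:, None] + sq[None, :] - 2 * X @ X.T      # pairwise squared distances
    sigma = np.median(D[D > 0])
    W = np.exp(-D / sigma)                           # Gaussian kernel similarity
    idx = np.argsort(-W, axis=1)[:, 1:k + 1]         # k nearest neighbours, excluding self
    S1 = np.zeros((n, n))
    rows = np.repeat(np.arange(n), k)
    S1[rows, idx.ravel()] = W[rows, idx.ravel()]
    S1 = (S1 + S1.T) / 2                             # symmetric first-order graph
    S, P = S1.copy(), S1.copy()
    for _ in range(order - 1):                       # accumulate 2nd-, 3rd-, ... order terms
        P = P @ S1
        P = P / (P.max() + 1e-12)                    # rescale to a comparable magnitude
        S = S + P
    return S / order

def l21_row_sparse_selection(X, S, n_select, dim=10, reg=1.0, iters=30):
    """Learn a projection matrix M (d x dim) that preserves the graph S while
    being row-sparse via an l2,1 penalty, then rank features by row norms of M."""
    d = X.shape[1]
    L = np.diag(S.sum(axis=1)) - S                   # graph Laplacian
    A = X.T @ L @ X
    M = np.random.default_rng(0).standard_normal((d, dim)) * 0.01
    for _ in range(iters):
        row_norms = np.sqrt(np.sum(M**2, axis=1)) + 1e-8
        Q = np.diag(0.5 / row_norms)                 # reweighting matrix for the l2,1 term
        # eigenvectors of the smallest eigenvalues give the structure-preserving,
        # row-sparse projection under this simplified surrogate objective
        _, vecs = np.linalg.eigh(A + reg * Q)
        M = vecs[:, :dim]
    scores = np.sqrt(np.sum(M**2, axis=1))           # feature importance = row norms of M
    return np.argsort(-scores)[:n_select]

# toy usage on synthetic data
X = np.random.default_rng(1).standard_normal((100, 40))
S = high_order_similarity(X, k=5, order=2)
print("selected feature indices:", l21_row_sparse_selection(X, S, n_select=10))
```

The l2,1 reweighting trick shown here is a common surrogate for row-sparse learning; the paper's actual alternating optimization, which switches strategies depending on whether the number of instances or the dimensionality is larger, is not reproduced here.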