Selection (genetic algorithm)
Computer science
Feature selection
Artificial intelligence
Feature (linguistics)
Pattern recognition (psychology)
Linguistics
Philosophy
Authors
Yu‐Feng Wang, Yanyan Yin, Hang Zhao, Jinxuan Liu, Chunyu Xu, Wenyong Dong
Identifier
DOI:10.1038/s41598-025-97224-8
Abstract
Feature selection is one of the most critical steps in big data analysis. Accurately extracting the correct features from massive data can effectively improve the accuracy of big data processing algorithms. However, the traditional grey wolf optimizer (GWO) often suffers from slow convergence and a tendency to fall into local optima, limiting its effectiveness in high-dimensional feature selection tasks. To address these limitations, we propose a novel feature selection algorithm called the grey wolf optimizer with self-repulsion strategy (GWO-SRS). In GWO-SRS, the hierarchical structure of the wolf pack is flattened so that commands propagate rapidly from the alpha wolf to every member, thereby accelerating convergence. Additionally, two distinct learning strategies are employed: a self-repulsion learning strategy for the alpha wolf, and a pack learning strategy based on the alpha wolf's predatory behavior, enabling rapid self-learning for both the alpha wolf and the pack. These improvements effectively mitigate the weaknesses of traditional GWO, such as premature convergence and limited exploration capability. Finally, we conduct a comparative experimental analysis against five related feature selection algorithms on UCI benchmark datasets. The results demonstrate that GWO-SRS reduces the average classification error by approximately 15% compared to the related algorithms while using 20% fewer features. This work highlights the need to address the inherent limitations of GWO and provides a robust solution to complex feature selection problems.
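To make the abstract's ideas concrete, below is a minimal sketch of a grey-wolf-style optimizer with the two modifications the abstract describes: a flattened hierarchy (every wolf learns directly from the alpha, rather than from the alpha/beta/delta trio of standard GWO) and a self-repulsion step in which the alpha perturbs its own position to escape local optima. This is an illustrative interpretation only; the function name `gwo_srs_sketch`, the Gaussian self-repulsion rule, and the improvement-only acceptance criterion are assumptions, not the authors' exact update equations.

```python
import numpy as np

def gwo_srs_sketch(objective, dim, n_wolves=20, iters=200,
                   bounds=(-5.0, 5.0), seed=0):
    """Illustrative GWO variant: flattened hierarchy + self-repulsion.

    NOTE: a sketch of the ideas in the abstract, not the paper's
    actual GWO-SRS update rules.
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    wolves = rng.uniform(lo, hi, (n_wolves, dim))
    fitness = np.array([objective(w) for w in wolves])
    alpha = wolves[np.argmin(fitness)].copy()   # best wolf so far
    alpha_fit = float(fitness.min())

    for t in range(iters):
        a = 2.0 * (1.0 - t / iters)  # exploration coefficient decays to 0

        # Flattened hierarchy (assumption): each wolf encircles the alpha
        # directly, using the standard GWO encircling equations.
        r1 = rng.random((n_wolves, dim))
        r2 = rng.random((n_wolves, dim))
        A = 2.0 * a * r1 - a
        C = 2.0 * r2
        D = np.abs(C * alpha - wolves)
        wolves = np.clip(alpha - A * D, lo, hi)

        # Hypothetical self-repulsion: the alpha jumps away from its own
        # position; the jump is kept only if it improves fitness.
        candidate = np.clip(alpha + a * rng.normal(size=dim), lo, hi)
        cand_fit = float(objective(candidate))
        if cand_fit < alpha_fit:
            alpha, alpha_fit = candidate, cand_fit

        # Promote the best pack member to alpha if it has improved on it.
        fitness = np.array([objective(w) for w in wolves])
        if fitness.min() < alpha_fit:
            alpha_fit = float(fitness.min())
            alpha = wolves[np.argmin(fitness)].copy()

    return alpha, alpha_fit

# Usage: minimize the 5-dimensional sphere function (optimum 0 at the origin).
best, best_fit = gwo_srs_sketch(lambda x: float(np.sum(x ** 2)), dim=5)
```

For the feature selection task in the paper, the continuous positions would additionally be mapped to binary feature masks (e.g. via a transfer function) and the objective would combine classification error with a feature-count penalty; those details are omitted here.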