Keywords
Redundancy (engineering), minimum-redundancy feature selection, feature selection, computer science, artificial intelligence, data mining, feature, machine learning, pattern recognition, data redundancy, filter (signal processing), relevance, selection (genetic algorithm), computer vision
Authors
Mei Wang, Xinrong Tao, Fei Han
Identifier
DOI: 10.1145/3446132.3446153
Abstract
Feature selection has become an important research topic in pattern recognition, data mining, and machine learning. On high-dimensional data, traditional machine learning algorithms may fail to achieve satisfactory results; feature selection filters the features of such data before model training, reducing their number and thereby mitigating the problems caused by high dimensionality. Feature selection can simultaneously eliminate features that are weakly correlated with the class labels or redundant with already selected features, improving both classification accuracy and training efficiency on high-dimensional tasks. However, existing methods may remove redundancy either inadequately or excessively in some cases. This paper therefore proposes a criterion for feature redundancy and, based on this criterion, designs an effective feature selection algorithm that removes redundant features while ensuring maximum relevance to the target variable. The effectiveness and efficiency of the proposed algorithm are verified by experimental comparison with other algorithms that can remove redundant features.
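The paper's own redundancy criterion is defined only in the full text, so the sketch below illustrates just the general max-relevance / min-redundancy idea the abstract describes, in the spirit of the classic mRMR method. The function name greedy_mrmr, the use of mutual information as the relevance and redundancy measure, and the binning scheme are illustrative assumptions, not the authors' algorithm.

```python
# A minimal sketch of greedy max-relevance / min-redundancy feature selection
# (in the spirit of mRMR). It is NOT the criterion proposed in the paper; it
# only illustrates the trade-off the abstract describes: prefer features that
# are relevant to the class labels while penalizing redundancy with features
# that have already been selected.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import mutual_info_score


def greedy_mrmr(X, y, k):
    """Return indices of k features chosen from X (n_samples x n_features)."""
    n_features = X.shape[1]
    k = min(k, n_features)
    # Relevance of each feature to the target variable.
    relevance = mutual_info_classif(X, y, random_state=0)
    selected = [int(np.argmax(relevance))]  # start from the most relevant feature

    def binned(col):
        # mutual_info_score expects discrete labels, so continuous features are binned.
        return np.digitize(col, np.histogram_bin_edges(col, bins=10))

    while len(selected) < k:
        best_j, best_score = None, -np.inf
        for j in range(n_features):
            if j in selected:
                continue
            # Redundancy: mean mutual information with the already-selected features.
            redundancy = np.mean(
                [mutual_info_score(binned(X[:, j]), binned(X[:, s])) for s in selected]
            )
            score = relevance[j] - redundancy  # mRMR "difference" criterion
            if score > best_score:
                best_j, best_score = j, score
        selected.append(best_j)
    return selected


# Example usage on synthetic data (hypothetical, for illustration only):
if __name__ == "__main__":
    from sklearn.datasets import make_classification
    X, y = make_classification(n_samples=500, n_features=50, n_informative=5, random_state=0)
    print(greedy_mrmr(X, y, k=5))
```

The greedy loop scores each candidate as relevance minus mean redundancy with the current selection; the paper instead evaluates its own redundancy criterion experimentally against other redundancy-removing methods.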