Keywords
Rough set
Computer science
Dimensionality reduction
Data mining
Measure (data warehouse)
Boundary (topology)
Reduction (mathematics)
Dependency (UML)
Metric (unit)
Pattern recognition (psychology)
Feature selection
Artificial intelligence
Algorithm
Mathematics
Mathematical analysis
Geometry
Operations management
Economics
Authors
Neil Mac Parthaláin,Qiang Shen,Richard Jensen
Identifier
DOI:10.1109/tkde.2009.119
Abstract
Feature selection (FS), or attribute reduction, techniques are employed for dimensionality reduction and aim to select a subset of the original features of a data set that is rich in the most useful information. The benefits of employing FS techniques include improved data visualization and transparency, reduced training and utilization times, and potentially improved prediction performance. Many approaches based on rough set theory have, up to now, employed the dependency function, which is based on lower approximations, as an evaluation step in the FS process. However, by examining only the information considered to be certain and ignoring the boundary region, or region of uncertainty, much useful information is lost. This paper examines a rough set FS technique that uses information gathered both from the lower-approximation dependency value and from a distance metric which considers the number of objects in the boundary region and the distance of those objects from the lower approximation. The use of this measure in rough set feature selection can result in smaller subset sizes than those obtained using the dependency function alone, demonstrating that there is much valuable information to be extracted from the boundary region. Experimental results are presented for both crisp and real-valued data and compared with two other FS techniques in terms of subset size, runtime, and classification accuracy.
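The dependency function and boundary region referred to in the abstract can be illustrated for crisp data. The following Python sketch (the helper names and the toy decision table are illustrative assumptions, not taken from the paper) computes equivalence classes, the dependency degree gamma_P(D) (the fraction of objects in the positive region), and the boundary region for a subset of attributes. The paper's distance-based measure over boundary objects is more involved and is not reproduced here.

```python
from collections import defaultdict

def partition(objects, attrs):
    """Group object indices into equivalence classes by their values on attrs."""
    blocks = defaultdict(list)
    for i, obj in enumerate(objects):
        blocks[tuple(obj[a] for a in attrs)].append(i)
    return list(blocks.values())

def dependency(objects, decisions, attrs):
    """gamma_P(D): fraction of objects whose equivalence class under attrs
    is consistent on the decision (i.e. lies in some lower approximation)."""
    pos = 0
    for block in partition(objects, attrs):
        if len({decisions[i] for i in block}) == 1:
            pos += len(block)
    return pos / len(objects)

def boundary(objects, decisions, attrs):
    """Boundary region: objects whose equivalence class mixes decision values
    (upper minus lower approximation, pooled over all decision classes)."""
    bnd = []
    for block in partition(objects, attrs):
        if len({decisions[i] for i in block}) > 1:
            bnd.extend(block)
    return sorted(bnd)

# Illustrative decision table: two condition attributes, one decision.
objects = [
    {'a': 0, 'b': 0},
    {'a': 0, 'b': 1},
    {'a': 1, 'b': 0},
    {'a': 1, 'b': 1},
]
decisions = [0, 0, 1, 0]

gamma_full = dependency(objects, decisions, ['a', 'b'])  # 1.0: every class is pure
gamma_a = dependency(objects, decisions, ['a'])          # 0.5: class a=1 is mixed
bnd_a = boundary(objects, decisions, ['a'])              # objects 2 and 3
```

Under the dependency function alone, attribute `a` looks half as informative as the full set; a boundary-aware measure of the kind the paper studies would additionally exploit how the objects in `bnd_a` relate to the lower approximation.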