Random Forests
Categorical variable
Computer science
Outcome (game theory)
Data mining
Machine learning
Feature selection
Artificial intelligence
Statistics
Mathematics
Mathematical economics
Authors
Andreas Ziegler, Inke R. König
Abstract
Random Forests are fast, flexible, and represent a robust approach to mining high-dimensional data. They are an extension of classification and regression trees (CART). They perform well even in the presence of a large number of features and a small number of observations. In analogy to CART, random forests can deal with continuous, categorical, and censored time-to-event outcomes. The tree-building process of random forests implicitly allows for interactions between features and high correlation between features. Approaches are available for measuring variable importance and for reducing the number of features. Although random forests perform well in many applications, their theoretical properties are not fully understood. Recently, several articles have provided a better understanding of random forests, and we summarize these findings. We survey different versions of random forests, including random forests for classification, random forests for probability estimation, and random forests for survival data. We discuss the consequences of (1) no selection, (2) random selection, and (3) a combination of deterministic and random selection of features for random forests. We also review a backward elimination procedure and a forward procedure, the determination of trees representing a forest, and the identification of important variables in a random forest. Finally, we provide a brief overview of different areas of application of random forests.
WIREs Data Mining Knowl Discov 2014, 4:55–63. doi: 10.1002/widm.1114
This article is categorized under:
Algorithmic Development > Statistics
Application Areas > Data Mining Software Tools
Technologies > Classification
Technologies > Machine Learning
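As a rough illustration of the techniques the abstract surveys, the following is a minimal Python sketch of fitting a random forest in the high-dimensional, few-observations regime and ranking features by importance. It assumes scikit-learn is available; the synthetic dataset and all parameter choices (n_estimators, max_features, n_repeats) are illustrative assumptions, not the authors' setup.

```python
# Illustrative sketch (not from the article): random forest classification
# with variable-importance ranking, using scikit-learn. The data and
# parameter values below are assumptions made for demonstration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Toy data in the regime the abstract highlights: many features (p = 100),
# few observations (n = 200), only a handful of informative features.
X, y = make_classification(n_samples=200, n_features=100,
                           n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Each tree is grown on a bootstrap sample, and each split considers a
# random subset of features (max_features), i.e., the "random selection"
# of features discussed in the abstract.
forest = RandomForestClassifier(n_estimators=500, max_features="sqrt",
                                random_state=0)
forest.fit(X_train, y_train)

# Impurity-based importances come free with the fit; permutation importance
# on held-out data is a common alternative.
impurity_top = forest.feature_importances_.argsort()[::-1][:5]
perm = permutation_importance(forest, X_test, y_test,
                              n_repeats=10, random_state=0)
perm_top = perm.importances_mean.argsort()[::-1][:5]

print("top features (impurity):   ", impurity_top)
print("top features (permutation):", perm_top)
print("test accuracy:", forest.score(X_test, y_test))
```

Permutation importance on held-out data is shown alongside the built-in impurity-based importances because the latter are known to be biased toward high-cardinality features; either ranking can serve as the starting point for feature-reduction procedures such as the backward elimination the abstract mentions.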