Covariate
Mathematics
Ordinary least squares
Covariance
Statistics
Variance (accounting)
Contrast (vision)
Randomness
Analysis of covariance
Linear regression
Applied mathematics
Computer science
Accounting
Business
Artificial intelligence
Authors
Saharon Rosset, Ryan J. Tibshirani
Identifier
DOI:10.1080/01621459.2018.1424632
Abstract
In statistical prediction, classical approaches for model selection and model evaluation based on covariance penalties are still widely used. Most of the literature on this topic is based on what we call the "Fixed-X" assumption, where covariate values are assumed to be nonrandom. By contrast, it is often more reasonable to take a "Random-X" view, where the covariate values are independently drawn for both training and prediction. To study the applicability of covariance penalties in this setting, we propose a decomposition of Random-X prediction error in which the randomness in the covariates contributes to both the bias and variance components. This decomposition is general, but we concentrate on the fundamental case of ordinary least-squares (OLS) regression. We prove that in this setting the move from Fixed-X to Random-X prediction results in an increase in both bias and variance. When the covariates are normally distributed and the linear model is unbiased, all terms in this decomposition are explicitly computable, which yields an extension of Mallows' Cp that we call RCp. RCp also holds asymptotically for certain classes of nonnormal covariates. When the noise variance is unknown, plugging in the usual unbiased estimate leads to an approach that we call RCp̂, which is closely related to Sp and to generalized cross-validation (GCV). For excess bias, we propose an estimate based on the "shortcut formula" for ordinary cross-validation (OCV), resulting in an approach we call RCp+. Theoretical arguments and numerical simulations suggest that RCp+ is typically superior to OCV, though the difference is small. We further examine the Random-X error of other popular estimators. The surprising result we get for ridge regression is that, in the heavily regularized regime, Random-X variance is smaller than Fixed-X variance, which can lead to smaller overall Random-X error. Supplementary materials for this article are available online.
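The abstract builds on several classical prediction-error estimates for OLS: Mallows' Cp (a covariance penalty), generalized cross-validation (GCV), and ordinary cross-validation computed via the leverage "shortcut formula". As a reference point, the following is a minimal sketch of those classical Fixed-X quantities under standard textbook definitions; it does not reproduce the paper's RCp, RCp̂, or RCp+ estimators, and the function and variable names (ols_error_estimates, cp_fixed_x, and the synthetic data) are hypothetical choices for illustration only.

```python
import numpy as np

def ols_error_estimates(X, y, sigma2=None):
    """Classical prediction-error estimates for OLS:
    Mallows' Cp, GCV, and leave-one-out CV via the leverage
    'shortcut formula'. These are the standard textbook
    quantities, not the paper's RCp / RCp+ estimators."""
    n, p = X.shape
    # Hat matrix H = X (X'X)^{-1} X', fitted values, residuals
    H = X @ np.linalg.inv(X.T @ X) @ X.T
    resid = y - H @ y
    rss = float(resid @ resid)

    if sigma2 is None:
        sigma2 = rss / (n - p)  # usual unbiased plug-in noise-variance estimate

    out = {}
    # Mallows' Cp as a Fixed-X prediction-error estimate: RSS/n + 2*sigma^2*p/n
    out["cp_fixed_x"] = rss / n + 2.0 * sigma2 * p / n
    # Generalized cross-validation: (RSS/n) / (1 - p/n)^2
    out["gcv"] = (rss / n) / (1.0 - p / n) ** 2
    # Ordinary (leave-one-out) CV via the shortcut formula:
    # (1/n) * sum_i ( e_i / (1 - h_ii) )^2
    h = np.diag(H)
    out["ocv_shortcut"] = float(np.mean((resid / (1.0 - h)) ** 2))
    return out

# Small synthetic check (hypothetical data, just to exercise the formulas)
rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))  # "Random-X": covariates drawn i.i.d.
y = X @ rng.normal(size=p) + rng.normal(size=n)
print(ols_error_estimates(X, y))
```

Passing a known noise variance via sigma2 corresponds to the known-variance Cp case; leaving the default plug-in estimate mirrors the unknown-variance setting the abstract associates with RCp̂, Sp, and GCV.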