Thursday, 15 January 2004: 8:30 AM
Cross-validation for selection of statistical models
Room 602/603
Cross-validation is used to estimate the predictive ability of statistical models and can also be used for model selection, in which predictors are chosen. The simplest cross-validation method is leave-one-out cross-validation (LOOCV). As the number of observations becomes large, LOOCV is asymptotically equivalent to the Akaike information criterion (AIC), Mallows' Cp, the jackknife, and bootstrap model selection methods. However, under certain assumptions these methods are asymptotically biased, in the sense that they tend to select models with too many predictors. Selection of an incorrect model is associated with over-estimation of model skill, because incorrect models are typically selected when, by chance, they appear to have more skill than the correct model; on future data, the skill of the incorrect model will therefore be poorer than estimated. The Bayesian information criterion (BIC) is similar to the AIC but, because it imposes a stronger penalty on large models, tends to select models with fewer predictors than the AIC and avoids this bias. Leave-k-out cross-validation (LKOCV) is asymptotically equivalent to the BIC for appropriately chosen k. LKOCV gives more weight to the component of prediction error related to model size, i.e. error arising from the estimation of model parameters, and so selects simpler models. In the limit of large sample size, the probability of selecting the correct model is an increasing function of the number of observations left out and depends only on the number of superfluous predictors. However, there are caveats to the use of LKOCV. LKOCV is a negatively biased estimator of prediction error, and the bias is an increasing function of the number left out. When all candidate models have the same dimension, LKOCV yields a larger prediction-error variance and a reduced probability of selecting the correct model. These issues are illustrated with simulated and climate examples.
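The following Python sketch illustrates the comparison described above on simulated data; it is not the authors' code. The data-generating model, the candidate predictor subsets, and the choice k = n/2 are illustrative assumptions only, and leave-k-out is approximated by random train/test splits rather than exhaustive enumeration.

    # Minimal sketch: LOOCV vs. leave-k-out CV vs. BIC for choosing
    # predictors in linear regression (assumptions noted above).
    import itertools
    import numpy as np

    rng = np.random.default_rng(0)

    n, p = 60, 5                                  # observations, candidate predictors
    X = rng.normal(size=(n, p))
    beta = np.array([1.0, 0.5, 0.0, 0.0, 0.0])    # only the first two predictors matter
    y = X @ beta + rng.normal(scale=1.0, size=n)

    def fit_predict(Xtr, ytr, Xte):
        """Ordinary least squares fit on the training rows, predictions for the test rows."""
        coef, *_ = np.linalg.lstsq(Xtr, ytr, rcond=None)
        return Xte @ coef

    def cv_error(cols, k, n_splits=200):
        """Mean squared prediction error from leave-k-out cross-validation.
        k = 1 gives exhaustive LOOCV; larger k uses random train/test splits."""
        Xs = X[:, cols]
        if k == 1:
            splits = [(np.delete(np.arange(n), i), np.array([i])) for i in range(n)]
        else:
            splits = []
            for _ in range(n_splits):
                test = rng.choice(n, size=k, replace=False)
                train = np.setdiff1d(np.arange(n), test)
                splits.append((train, test))
        errs = [np.mean((y[te] - fit_predict(Xs[tr], y[tr], Xs[te])) ** 2)
                for tr, te in splits]
        return np.mean(errs)

    def bic(cols):
        """Gaussian BIC (up to a constant): n*log(RSS/n) + (number of predictors)*log(n)."""
        Xs = X[:, cols]
        rss = np.sum((y - fit_predict(Xs, y, Xs)) ** 2)
        return n * np.log(rss / n) + len(cols) * np.log(n)

    # Candidate models: all non-empty subsets of the five predictors.
    candidates = [list(cols) for r in range(1, p + 1)
                  for cols in itertools.combinations(range(p), r)]

    for name, score in [("LOOCV", lambda c: cv_error(c, k=1)),
                        ("LKOCV (k=n/2)", lambda c: cv_error(c, k=n // 2)),
                        ("BIC", bic)]:
        best = min(candidates, key=score)
        print(f"{name:15s} selects predictors {best}")

With these settings, LOOCV will more often retain one of the superfluous predictors, while the larger-k cross-validation and BIC tend to recover the two-predictor model, consistent with the asymptotic equivalences discussed above.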