Wednesday, 10 May 2000: 1:30 PM
Dynamic statistical models (DSM) of a variety of weather elements are used as guidance by forecasters in several countries. For a categorical predictand such as sky cover, a classification algorithm is required. A well-known method is Multiple Discriminant Analysis (MDA), a parametric additive algorithm that finds linear combinations of the predictors (discriminant functions) that separate the predictand by its categories. Recent advances in non-parametric, non-linear multivariate data-modeling methods offer the potential for greater accuracy when the predictor-predictand relationships are complex. In the course of developing new statistical models for categorical predictands for implementation at the Canadian Meteorological Centre (CMC), we had to choose a modeling method. We present comparison results of a stepwise MDA procedure against a recent non-parametric algorithm, Classification and Regression Trees (CART) (Breiman et al., 1984), for modeling sky cover.
Our test data consisted of sky cover in 4 categories matched with 175 predictors generated from the CMC's operational GEM NWP model forecasts interpolated to several stations. Several hundred days of data were used, drawn from warm and cold periods during which the NWP model was unchanged. Runs were made every 3 hours out to 48 hours when persistence was not a predictor, and out to 24 hours when it was.

CART is a powerful method that builds an optimal decision-tree partitioning of the data, minimizing the residual variance of the predictand by clustering the cases into a set of "terminal nodes". The CART "prediction" for a node is the mean of the predictand values of the cases that fell into that node.

To compare with CART, we developed a stepwise multi-pass MDA procedure. In the first pass, predictors are offered one at a time; when the pass completes, the predictor giving the maximum Mahalanobis distance for each of the 6 possible category separations is saved, leaving a surviving set of 1-6 predictors. In the second pass, each predictor is offered sequentially together with the surviving set from the first pass, and the combination giving the maximum Mahalanobis distance for each of the 6 separations is saved, yielding a new, enhanced predictor set. Passes are repeated until the number of predictors equals the number used by CART; usually only 2 or 3 passes were needed. To our surprise, rank-probability scores and probability-of-detection scores on independent data consistently showed our MDA procedure to be slightly more accurate than CART for this problem.
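The variance-minimizing split at the heart of CART can be sketched in a few lines. This is an illustrative single-predictor, single-level split, not the authors' implementation: it scans candidate thresholds, scores each by the combined residual sum of squares of the two child nodes, and reports the node means, which serve as the "predictions" described above. All function names here are hypothetical.

```python
from statistics import mean

def sse(ys):
    """Sum of squared deviations of the predictand from the node mean."""
    m = mean(ys)
    return sum((y - m) ** 2 for y in ys)

def best_split(xs, ys):
    """Find the threshold on a single predictor that minimizes the
    combined residual sum of squares of the two child nodes.

    Returns (threshold, left_mean, right_mean); each node's
    'prediction' is the mean of the predictand values in that node.
    """
    best = None
    for t in sorted(set(xs))[:-1]:  # candidate thresholds between cases
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        score = sse(left) + sse(right)
        if best is None or score < best[0]:
            best = (score, t, mean(left), mean(right))
    return best[1:]
```

A full CART build would apply this search recursively over all predictors, stopping at the terminal nodes; the sketch shows only the residual-variance criterion the abstract describes.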
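The first pass of the stepwise MDA selection can likewise be sketched. This is a simplified, hypothetical version (not the CMC code): it uses the one-dimensional squared Mahalanobis distance with a pooled variance estimate, and for each pair of categories keeps the single predictor giving the largest separation, yielding the surviving set of 1-6 predictors for 4 categories.

```python
from itertools import combinations
from statistics import mean, variance

def mahalanobis_1d(a, b):
    """Squared Mahalanobis distance between two 1-D samples,
    using a pooled (bias-corrected) variance estimate."""
    na, nb = len(a), len(b)
    pooled = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) ** 2 / pooled

def first_pass(cases):
    """cases: list of (predictor_vector, category).

    For each pair of categories (6 pairs when there are 4 categories),
    find the single predictor with the largest Mahalanobis separation;
    return the set of surviving predictor indices.
    """
    categories = sorted({c for _, c in cases})
    n_pred = len(cases[0][0])
    survivors = set()
    for ca, cb in combinations(categories, 2):
        best = max(
            range(n_pred),
            key=lambda j: mahalanobis_1d(
                [x[j] for x, c in cases if c == ca],
                [x[j] for x, c in cases if c == cb],
            ),
        )
        survivors.add(best)
    return survivors
```

Later passes would offer each remaining predictor together with the surviving set and score the multivariate Mahalanobis distance; the sketch shows only the first, univariate pass.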