Operational Scoring of Forecasts of Convective Weather Impacts in the Route Availability Planning Tool (RAPT)
An evaluation of forecast accuracy addresses two crucial and related issues. The first issue is the sensitivity of RAPT to small details in the spatial organization and intensity of the weather forecast. If RAPT is overly sensitive to small changes in the weather, RAPT guidance may show excessive volatility as forecasts are updated, greatly increasing uncertainty about route blockage and reducing the value of RAPT guidance. The second issue is the predictability of RAPT performance. For RAPT, forecast validation requires a significant time lag (75 minutes) to collect the observed weather needed to calculate “true” forecast accuracy scores. Can such time-lagged scores be used to predict future performance? If not, what can be done to create a reasonable score approximation that does not require the time lag needed for “true” forecast validation?
Starting in 2009, RAPT began providing a heuristic measurement of forecast accuracy to help traffic managers assess the quality of RAPT route blockage forecasts. The algorithm generates a Modified Critical Success Index (mod-CSI) score by comparing “predictive” and “true” grids of convective weather blockage. These grids are separated by a 40-minute lag and consist of segmented blockage relative to the route direction. In the New York airspace, scores are calculated for the North, West, and South departure gate orientations. Multiple variations on the mod-CSI algorithm exist and could potentially serve as an operational forecast score approximation.
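The mod-CSI builds on the standard Critical Success Index, CSI = hits / (hits + misses + false alarms). As a point of reference, a minimal sketch of a CSI-style comparison between a predictive blockage grid and a true blockage grid is given below; binary blockage cells are assumed, and the segmentation by route direction and any mod-CSI weighting used in RAPT are not shown.

    # Sketch of a CSI-style score over two binary blockage grids (assumed
    # representation; the actual RAPT mod-CSI segmentation and weighting differ).
    import numpy as np

    def csi_score(predicted, observed):
        """Critical Success Index: hits / (hits + misses + false alarms)."""
        predicted = np.asarray(predicted, dtype=bool)
        observed = np.asarray(observed, dtype=bool)
        hits = np.sum(predicted & observed)          # blocked in both grids
        misses = np.sum(~predicted & observed)       # blocked only in the true grid
        false_alarms = np.sum(predicted & ~observed) # blocked only in the forecast
        denom = hits + misses + false_alarms
        return 1.0 if denom == 0 else hits / denom

    # Example: comparing a "predictive" grid against the "true" grid observed
    # after the 40-minute lag for one departure gate orientation.
    predictive_grid = np.array([[1, 0, 1], [0, 1, 0]])
    true_grid       = np.array([[1, 0, 0], [0, 1, 1]])
    print(csi_score(predictive_grid, true_grid))  # -> 0.5

A perfect forecast yields a score of 1, while a forecast with no correct blockage cells yields 0; variations on the mod-CSI differ in how the blockage grids are segmented and weighted before this comparison.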
In this paper, we analyze RAPT blockage data from the 2009 convective season to answer these questions. A “true” RAPT blockage dataset will be created to determine the robustness of the RAPT blockage algorithm. Using this baseline, questions of RAPT stability and predictability relative to forecast accuracy will be addressed. Variations on the mod-CSI algorithm will be tested as a means of providing an alternative to “true” forecast validation. Results of the analysis will be presented, and potential operational usage will be discussed.