J43.5 Using Evolutionary Programming to Generate Improved Tropical Cyclone Intensity Forecasts

Wednesday, 15 January 2020: 11:30 AM
156BC (Boston Convention and Exhibition Center)
Jesse Schaffer, Univ. of Wisconsin-Milwaukee, Milwaukee, WI; and P. Roebber and C. Evans

In this study, the machine-learning technique of evolutionary programming (EP) is applied to Statistical Hurricane Intensity Prediction Scheme (SHIPS) predictors to develop two classes of models for both the North Atlantic and eastern/central North Pacific basins: a deterministic tropical cyclone (TC) intensity model out to 120 h and a probabilistic model for rapid intensification (RI) and rapid weakening (RW) out to 72 h. Through the application of selective pressure via a specified performance criterion, EP mimics the evolutionary principles of genetic information, reproduction, and mutation to develop a population of algorithms containing skillful predictor combinations.

The EP process starts with 10,000 randomly initialized algorithms that forecast a 12-h intensity change (0-12 h, 12-24 h, etc.) from a persistence forecast using eight chosen predictors from the SHIPS perfect-prog developmental database. These algorithms then forecast across a set of training cases to determine their skill, and the 2,000 worst-performing algorithms, as measured by root-mean-square error (RMSE), are eliminated. Noise (here, random perturbations that are small in magnitude relative to the average difference between analysis and predictor variables in the SHIPS datasets) is added to the predictor values during the testing phase to prevent overfitting to the reanalysis variables used in this perfect-prognostic approach. Next, cloning, mutation, and genetic exchange are used to generate the next generation of algorithms, which then forecast across a set of cross-validation cases. Performance on these cases determines whether an algorithm makes it onto the 100-best-algorithms list, which is continually updated as this process iterates through a total of five populations of 300 generations each. In this way, the best algorithms are retained no matter when they emerge during the training process.
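The training loop described above can be sketched in simplified form. This is a toy illustration only: the population size, elimination count, generation count, noise magnitude, and the representation of an "algorithm" as a linear weighting of predictors are stand-in assumptions, scaled down from the actual configuration (10,000 algorithms, 2,000 eliminated per generation, five populations of 300 generations, and richer EP algorithm structures).

```python
import math
import random

N_PREDICTORS = 8
POP_SIZE = 100      # scaled down from 10,000 for the sketch
N_ELIMINATED = 20   # scaled down from 2,000
N_GENERATIONS = 30  # scaled down from 300
NOISE_SD = 0.05     # small perturbation to deter overfitting

def random_algorithm():
    # One "algorithm" here = a weight per predictor; the real EP
    # algorithms are more general expression structures.
    return [random.uniform(-1.0, 1.0) for _ in range(N_PREDICTORS)]

def forecast(algo, predictors):
    # Forecast a 12-h intensity change relative to persistence.
    return sum(w * x for w, x in zip(algo, predictors))

def rmse(algo, cases):
    # Noise is added to predictor values during evaluation,
    # mirroring the perfect-prog overfitting guard in the text.
    errs = []
    for predictors, observed_change in cases:
        noisy = [x + random.gauss(0.0, NOISE_SD) for x in predictors]
        errs.append((forecast(algo, noisy) - observed_change) ** 2)
    return math.sqrt(sum(errs) / len(errs))

def mutate(algo):
    return [w + random.gauss(0.0, 0.1) for w in algo]

def genetic_exchange(a, b):
    # Swap predictor weights between two parent algorithms.
    return [random.choice(pair) for pair in zip(a, b)]

def evolve(train_cases, cv_cases, n_best=10):
    population = [random_algorithm() for _ in range(POP_SIZE)]
    best = []  # running (cv_rmse, algo) list, analogous to the 100-best list
    for _ in range(N_GENERATIONS):
        # Selective pressure: drop the worst performers on training RMSE.
        population.sort(key=lambda a: rmse(a, train_cases))
        survivors = population[:POP_SIZE - N_ELIMINATED]
        # Refill via cloning of survivors plus mutation and exchange.
        children = []
        while len(survivors) + len(children) < POP_SIZE:
            p1, p2 = random.sample(survivors, 2)
            children.append(mutate(genetic_exchange(p1, p2)))
        population = survivors + children
        # Cross-validation performance decides entry to the best list,
        # so strong algorithms are kept no matter when they appear.
        for algo in population:
            best.append((rmse(algo, cv_cases), algo))
        best = sorted(best, key=lambda t: t[0])[:n_best]
    return best
```

Keeping the best-algorithms list separate from the evolving population is the key design point: an algorithm that peaks in an early generation is not lost to later selection.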

Several processing steps are then applied to the algorithms to obtain the final deterministic and probabilistic forecast models. After bias-correcting each algorithm, Bayesian model combination is used to assign weights to a subset of ten algorithms from the 100-best-algorithms list. This subset is chosen to minimize mean absolute error (i.e., to be skillful) and maximize mean absolute difference (i.e., to be diverse) among the selected algorithms. The resulting weighted combination of the ten bias-corrected algorithms in each basin becomes the final form of the deterministic intensity model, which is linked to an inland wind-decay model for operational application to landfalling TCs. For the probabilistic forecasts, a normal curve is centered on the forecast from each individual algorithm, and their weighted sum is used to construct a probability density function from which probabilities of exceeding the RI and RW thresholds are computed.
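The probabilistic construction amounts to a weighted mixture of normals, one per algorithm, whose tails give the RI and RW probabilities. A minimal sketch follows; the ±30 kt thresholds and the 10-kt spread of each normal are illustrative assumptions, not values from the study.

```python
import math

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

def normal_cdf(x, mu, sd):
    return 0.5 * (1.0 + math.erf((x - mu) / (sd * math.sqrt(2.0))))

def mixture_pdf(x, forecasts, weights, sd=10.0):
    # Weighted sum of normals, each centered on one bias-corrected
    # algorithm's intensity-change forecast.
    return sum(w * normal_pdf(x, f, sd) for f, w in zip(forecasts, weights))

def ri_rw_probabilities(forecasts, weights, ri=30.0, rw=-30.0, sd=10.0):
    # Tail mass above the RI threshold and below the RW threshold
    # (thresholds in kt of intensity change; assumed values).
    p_ri = sum(w * (1.0 - normal_cdf(ri, f, sd)) for f, w in zip(forecasts, weights))
    p_rw = sum(w * normal_cdf(rw, f, sd) for f, w in zip(forecasts, weights))
    return p_ri, p_rw
```

Because each component is a normal, the tail integrals reduce to weighted sums of error-function evaluations, so no numerical integration of the density is needed.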

The performance of the deterministic EP models for each basin is evaluated using independent testing data from the 2010-18 seasons. The best performance is in the North Atlantic at all lead times prior to 96 h, where the deterministic EP model is 5-19% more skillful than the “no skill” Decay Statistical Hurricane Intensity Forecast climatology-and-persistence model (OCD5). Conversely, in the eastern/central North Pacific basin, the deterministic EP model is more skillful than OCD5 only at the 12-h lead time (+13%), with comparable (+1%) to lesser (-1 to -15%) forecast skill at all remaining lead times. Probabilistic performance for RI/RW cases is currently being evaluated, and the presentation will include these results. Lastly, specific case studies will be presented to give insight into model behavior and a greater understanding of its strengths and weaknesses.
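The percent-skill figures quoted above can be read as relative error reduction against the baseline. A hypothetical helper, assuming "X% more skillful" means the percent reduction in forecast error relative to OCD5 (the abstract does not state the exact metric):

```python
def percent_skill(err_model, err_baseline):
    # Positive values: the model beats the OCD5-style baseline.
    # Negative values: the model is less skillful than the baseline.
    return 100.0 * (err_baseline - err_model) / err_baseline
```

Under this convention, a model error of 8.1 kt against a 10.0 kt baseline error corresponds to the +19% end of the quoted Atlantic range.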
