13.2
Recent Advances in High-Resolution Operational NWP, Utilizing WRF-ARW

Thursday, 6 February 2014: 11:15 AM
Room C202 (The Georgia World Congress Center)
James P. Cipriani, IBM Thomas J. Watson Research Center, Yorktown Heights, NY; and L. A. Treinish and A. P. Praino

Numerical Weather Prediction (NWP) is essentially an initial and boundary value problem, solving the fundamental equations of the atmosphere on a three-dimensional grid. The quality of the gridded and observational data ingested at model start time, along with the choices of physics options and geographical extent, can significantly affect the quality of the overall solution. In high-resolution modeling, the typical ("default") datasets are often too coarse, too sparsely populated, or of too poor quality to benefit the model outcome. Operational, high-resolution modeling has further constraints (e.g., more stringent CFL criteria for horizontal and vertical stability, and tradeoffs among computational performance, scalability, and precision) that must be optimized in a production environment.
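To make the time-step constraint concrete, recall the textbook one-dimensional advective CFL condition (stated generically here, not as a Deep Thunder-specific criterion): the Courant number must stay below a scheme-dependent limit,

\[ C \;=\; \frac{u\,\Delta t}{\Delta x} \;\le\; C_{\max} \quad\Longrightarrow\quad \Delta t \;\le\; C_{\max}\,\frac{\Delta x}{u}, \]

so the largest stable time step shrinks linearly with the grid spacing. Halving the horizontal spacing from 2 km to 1 km halves the stable time step for a given wind speed and quadruples the horizontal grid-point count, raising the cost per forecast hour roughly eightfold; the common WRF rule of thumb of about 6 s of time step per km of grid spacing gives a step near 12 s at 2-km resolution.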

IBM has developed a state-of-the-art, high spatial (~1-2 km horizontally) and temporal (~10 minute) resolution weather impact forecasting capability, known as Deep Thunder, which is customized for particular geographies and client requirements. It is deployed operationally on IBM HPC systems (e.g., clusters of POWER7 SMP or x86 nodes, or Blue Gene systems), typically utilizing the community WRF-ARW model as the underlying NWP component in a nested approach. This component often drives one or more coupled models to support forecast dissemination and integration into a business decision process, which uses predictions of the impacts of specific weather events to reduce client response time and optimize restoration efforts (e.g., probabilistic outage prediction, flooding, etc.).
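For context on the nested approach, WRF-ARW telescopes from a coarse outer domain down to the high-resolution inner grid through the &domains section of namelist.input. The excerpt below is a minimal illustrative sketch; the grid sizes, nest placement, and ratios are invented for the example and are not the actual Deep Thunder configuration:

    &domains
     max_dom                = 3,
     dx                     = 18000,  6000,  2000,  ! m: 18-km parent down to a 2-km inner nest
     dy                     = 18000,  6000,  2000,
     e_we                   = 100,    121,   151,   ! west-east grid points per domain
     e_sn                   = 100,    121,   151,   ! south-north grid points per domain
     e_vert                 = 42,     42,    42,    ! vertical levels, identical across domains
     grid_id                = 1,      2,     3,
     parent_id              = 1,      1,     2,
     i_parent_start         = 1,      30,    40,    ! nest placement within its parent
     j_parent_start         = 1,      30,    40,
     parent_grid_ratio      = 1,      3,     3,     ! 3:1 horizontal refinement per nest
     parent_time_step_ratio = 1,      3,     3,     ! time step refined with the grid (CFL)
     feedback               = 1,                    ! two-way nesting
    /

With feedback enabled, the convection-resolving inner grid feeds its solution back to the coarser parents, which is the usual choice for nested operational runs of this kind.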

For example, Deep Thunder has been running 84-hour forecasts operationally at 2-km horizontal resolution (updated twice daily) since early 2009, covering the New York City metropolitan area, Westchester County, and lower Dutchess (NY) and Fairfield (CT) counties. Other production capabilities include the city of Rio de Janeiro (1-km resolution), the country of Brunei (1.5-km resolution), specific business applications in the northeastern and southeastern United States (1.5-km resolution), the Detroit metropolitan region (1-km resolution), and the Canary Islands (668-m resolution). In addition, there have been a number of experimental deployments in Europe, Asia, and Australia.

Our initial operational use of WRF-ARW was with version 3.1.1, although we carried out many experimental deployments with earlier versions, complementary to our use of other NWP codes such as RAMS and MM5. For applications in North America, we employed initial and boundary conditions from NCEP models (output from NAM or GFS, as well as daily SSTs). In addition, we used the "default" data for topography, land use, etc. that are provided to the WRF community. Given both advances in the community and our own customization, we have refined our approach and can readily leverage new, high-quality data sets as input to the model for a variety of deployments. Currently, these efforts are built upon WRF-ARW version 3.4.1. A typical, newly updated configuration for a U.S.-based geography includes the following: (a) blending of initial and boundary conditions from different regional NOAA models; (b) assimilation of thousands of surface and upper-air observations for all domains, available from WeatherBug (Earth Networks, an IBM partner), from NOAA and other government agencies, and in some cases from clients; (c) high-resolution sea surface temperature analyses (NASA) used as input for the initial conditions; and (d) static high-resolution topography (30-90 m), land use (1 km), and green vegetation fraction (1 km) data for better surface representation. For geographies outside the U.S., the input data can be somewhat more limited: (a) assimilation of observations, depending on availability from the client; (b) high-resolution sea surface temperatures; and (c) high-resolution topography (90 m) and green vegetation fraction (1 km).
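On the static-data side (item d), the WPS geogrid program is the step where default fields are replaced with higher-resolution sources: a custom dataset is registered in GEOGRID.TBL and then selected per domain through geog_data_res in namelist.wps, with '+' separating fallbacks in priority order. The following is a minimal sketch of that mechanism only; the resolution keywords and path are illustrative placeholders, not the actual Deep Thunder data tags:

    &geogrid
     parent_id         = 1,   1,   2,
     parent_grid_ratio = 1,   3,   3,
     e_we              = 100, 121, 151,
     e_sn              = 100, 121, 151,
     ! Each keyword must match an entry in GEOGRID.TBL; 'topo_30m',
     ! 'landuse_1km', and 'greenfrac_1km' stand in for custom
     ! high-resolution datasets, with 'default' as the fallback.
     geog_data_res     = 'default',
                         'topo_30m+landuse_1km+default',
                         'topo_30m+landuse_1km+greenfrac_1km+default',
     geog_data_path    = '/path/to/geog',
    /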

We also employ advanced, custom pre- and post-processing techniques and verification measures, all tailored to the client's applications. These range from highly customized visualizations that work directly with the native or post-processed WRF output, to specialized metrics that evaluate the value of the forecasts, to diagnostic data that improve the utilization of forecasts for business applications. The verification work leverages some of the capabilities of the Model Evaluation Tools (MET v4.0) package. The choices of physics options also differ for each domain, based on the applications. Typically, to converge on an optimal model configuration for consistent, year-round operations that remain computationally tractable, retrospective analysis is done on key historical weather events defined by the client. Numerical experiments are performed as hindcasts to evaluate the model, and the results are compared with both conventional observations and the weather impacts that the client identifies. In general, we attempt to apply the most sophisticated options relevant to the client's weather sensitivity. Since these choices affect model execution time, we apply various optimization practices, ranging from parallelization of all steps of the forecast production process to adaptive time-stepping techniques, an example of which is sketched below.
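Of these optimization practices, adaptive time stepping is the most self-contained to illustrate: rather than running a fixed, worst-case time step, WRF-ARW can grow or shrink the step at run time to stay near a target Courant number. A minimal sketch of the relevant &domains settings in namelist.input, with illustrative values rather than our production configuration:

    &domains
     use_adaptive_time_step = .true.,
     step_to_output_time    = .true.,         ! adjust steps to land exactly on output times
     target_cfl             = 1.2, 1.2, 1.2,  ! per-domain Courant-number target
     max_step_increase_pct  = 5,   51,  51,   ! conservative growth on d01, faster on nests
     starting_time_step     = -1,  -1,  -1,   ! -1 lets WRF pick defaults scaled by dx
     max_time_step          = -1,  -1,  -1,
     min_time_step          = -1,  -1,  -1,
    /

This can shorten wall-clock time considerably on days with weak flow, at the cost of slightly less reproducible step sequences, a worthwhile trade for twice-daily 84-hour production cycles.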

From a modeling standpoint, the inclusion of these new features has yielded significant improvements in both the lead time on impactful events and the reduction of forecast error. We will discuss the ongoing work, previous and current methodologies, and the lessons learned through this research effort.