2.4 Running Numerous WRF Simulations Simultaneously with High Throughput Computing

Tuesday, 8 January 2019: 9:15 AM
North 123 (Phoenix Convention Center - West and North Buildings)
Ryan Clare, Univ. of Wisconsin–Madison, Madison, WI; and A. R. Desai

High-throughput computing allows large numbers of jobs to be allocated to separate servers and executed simultaneously. Here we demonstrate the value of running the Weather Research and Forecasting (WRF) model in the high-throughput computing framework HTCondor, provided by the University of Wisconsin Center for High Throughput Computing (CHTC). This approach completes hundreds of simulations in only as much time as the longest single run requires (roughly ten hours for this study). For large ensemble modeling projects, high-throughput computing offers numerous potential uses and is highly economical.
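As a concrete illustration, a minimal HTCondor submit description for such an ensemble might look like the sketch below. The file names, resource requests, and ensemble size of 100 are assumptions for illustration, not values from this study; HTCondor's $(Process) macro numbers the queued jobs 0 through 99, giving each simulation a distinct index.

    # ensemble.sub -- hypothetical submit description for a 100-member WRF ensemble
    universe                = vanilla
    executable              = run_wrf.sh
    arguments               = $(Process)
    transfer_input_files    = wrf_serial.tar.gz, ncl.tar.gz
    should_transfer_files   = YES
    when_to_transfer_output = ON_EXIT
    request_cpus            = 1
    request_memory          = 8GB
    request_disk            = 20GB
    log                     = logs/wrf_$(Process).log
    output                  = logs/wrf_$(Process).out
    error                   = logs/wrf_$(Process).err
    queue 100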

To execute WRF model runs on HTCondor, WRF must be transferred to each execute node, decompressed, and run in serial mode. To adjust the files that initialize each WRF run, NCAR Command Language (NCL) must also be transferred and decompressed. The script executing the job on each server must determine which job it is part of and use that index to alter the initialization files accordingly, so that every job runs a distinct simulation. The number of simulations run at once is limited only by available space on the submit node. We provide a protocol for executing general-purpose WRF model runs on the HTCondor system so that other researchers can easily carry out large ensemble modeling projects. We demonstrate the scientific value of large ensemble approaches in our work on altering land surface conditions in extratropical cyclone cases.
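The wrapper script below is a sketch of this protocol; the bundle and script names (run_wrf.sh, wrf_serial.tar.gz, ncl.tar.gz, perturb_input.ncl) are hypothetical stand-ins, not the files used in the study. It receives the HTCondor job index as its argument and uses that index to alter the initialization files before running WRF in serial mode.

    #!/bin/bash
    # run_wrf.sh -- executed on each HTCondor node; $1 is the $(Process) index.
    JOB=$1

    # Unpack the pre-built serial WRF and the NCL distribution shipped with the job.
    tar -xzf wrf_serial.tar.gz
    tar -xzf ncl.tar.gz
    export NCARG_ROOT=$PWD/ncl
    export PATH=$NCARG_ROOT/bin:$PATH

    # Alter this member's initialization files; NCL accepts variable
    # assignments on the command line (perturb_input.ncl is hypothetical).
    ncl job=$JOB perturb_input.ncl

    # Run the serial WRF executable for this ensemble member.
    cd WRF/run && ./wrf.exe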
