8.1 Assessing Ensemble Forecast Performance for Select Members Available in the Community Leveraged Unified Ensemble (CLUE) during 2017 Hazardous Weather Testbed Spring Forecasting Experiment (HWT-SFE)

Tuesday, 25 July 2017: 3:30 PM
Coral Reef Harbor (Crowne Plaza San Diego)
Jamie K. Wolff, NCAR, Boulder, CO; and M. Harrold, I. Jankov, and J. Beck

During the annual NOAA Hazardous Weather Testbed (HWT), a large volume of experimental model data is produced to support the five-week experiment conducted each spring. Beginning with the 2016 HWT Spring Forecasting Experiment (SFE) and continuing in 2017, participating modeling groups coordinated their contributed model output around a unified setup (e.g., WRF version, domain size, and vertical levels and spacing) to create a super-ensemble of more than 60 members called the Community Leveraged Unified Ensemble (CLUE).

While these data are subjectively assessed daily during the experiments, extensive objective verification is often lacking afterward, leaving the strengths and weaknesses of the contributed model configurations insufficiently investigated. The large datasets produced during the HWT-SFE provide an excellent opportunity to identify and begin to answer the most pressing scientific questions. In particular, many questions remain regarding the best approach to constructing a convection-allowing model (CAM) ensemble system. For example, should model uncertainty be addressed through multiple dynamical cores, multiple physics parameterizations, stochastic physics, or some combination of these? The careful coordination and construction of CLUE will provide the datasets necessary to begin to explore this question.

The forecast methods targeted for this presentation include comparisons of single-physics/single-core and multi-physics and/or multi-core approaches. Ultimately, the probabilistic forecast performance of each targeted ensemble subset will be examined. Individual deterministic forecasts from select members will also be assessed to understand their contribution to the overall ensemble spread. The objective evaluation will be conducted using the Model Evaluation Tools (MET) software system. The metrics used for probabilistic and deterministic evaluation will range from traditional metrics widely used in the community (spread, skill, error, reliability, etc.) to newer methods that provide additional diagnostic information, such as the Method for Object-based Diagnostic Evaluation (MODE), neighborhood methods applied to deterministic and probabilistic output (e.g., the Fractions Skill Score), and a new method available in MET that helps evaluate forecast consistency among CLUE members and the resulting products.
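As an illustration of the neighborhood methods referenced above, the sketch below computes the Fractions Skill Score (FSS) for a single neighborhood size. It is a minimal Python example, not the MET implementation; the binary fields and toy grid are hypothetical stand-ins for thresholded forecast and observed fields (e.g., precipitation exceeding a given threshold).

    import numpy as np
    from scipy.ndimage import uniform_filter

    def fractions_skill_score(fcst_binary, obs_binary, neighborhood):
        """Compute the FSS for one neighborhood size.

        fcst_binary, obs_binary: 2-D arrays of 0/1 event occurrence.
        neighborhood: box width in grid points (typically odd).
        """
        # Fractional event coverage within each neighborhood.
        f_frac = uniform_filter(fcst_binary.astype(float), size=neighborhood)
        o_frac = uniform_filter(obs_binary.astype(float), size=neighborhood)

        # Mean squared difference between forecast and observed fractions.
        mse = np.mean((f_frac - o_frac) ** 2)
        # Reference MSE: the largest value attainable for these two fields.
        mse_ref = np.mean(f_frac ** 2) + np.mean(o_frac ** 2)

        # FSS = 1 is a perfect match; values near 0 indicate no skill.
        return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan

    # Toy example on a random 100 x 100 grid with sparse events.
    rng = np.random.default_rng(0)
    fcst = (rng.random((100, 100)) > 0.95).astype(int)
    obs = (rng.random((100, 100)) > 0.95).astype(int)
    print(fractions_skill_score(fcst, obs, neighborhood=9))

Computing the FSS across a range of neighborhood sizes indicates the spatial scale at which a forecast becomes skillful, which is why neighborhood methods are well suited to evaluating convection-allowing forecasts where small displacement errors are expected.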
