While these data are subjectively assessed daily during the experiments, there is often a lack of extensive objective verification afterward to thoroughly investigate the strengths and weaknesses of the contributed model configurations. The large datasets produced during the HWT-SFE provide an excellent opportunity to identify and begin to answer the most pressing scientific questions that need to be addressed. In particular, many questions remain regarding the best approach to constructing a convection-allowing model (CAM) ensemble system. For example, should model uncertainty be addressed through multiple dynamic cores, multiple physics parameterizations, stochastic physics, or some combination of these? The careful coordination and construction of CLUE will provide the datasets necessary to begin to explore this question.
The forecast methods targeted for this presentation will include comparing single-physics/single-core versus multi-physics and/or multi-core approaches. Ultimately, the probabilistic forecast performance of each targeted ensemble subset will be examined. Individual deterministic forecasts from select members will also be assessed to understand their contribution to the overall ensemble spread. The objective evaluation will be conducted using the Model Evaluation Tools (MET) software system. The metrics used for probabilistic and deterministic evaluation will range from traditional metrics widely used in the community (spread, skill, error, reliability, etc.) to newer methods that provide additional diagnostic information, such as the Method for Object-based Diagnostic Evaluation (MODE), neighborhood methods applied to deterministic and probabilistic output (e.g., the Fractions Skill Score), and a new method available in MET that helps evaluate forecast consistency among CLUE members and the resulting products.
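To illustrate the neighborhood approach mentioned above, the following is a minimal sketch of the Fractions Skill Score for one threshold and one (odd) neighborhood width. This is an illustrative NumPy implementation of the standard FSS definition, not MET's own code; the function names and zero-padded edge treatment are assumptions of this sketch.

```python
import numpy as np

def neighborhood_fractions(binary_field, n):
    """Fraction of event grid points within an n x n neighborhood (n odd).

    Edges are zero-padded in this sketch; operational tools may
    handle boundaries differently.
    """
    pad = n // 2
    padded = np.pad(binary_field.astype(float), pad)
    # Moving-window mean over every n x n neighborhood.
    windows = np.lib.stride_tricks.sliding_window_view(padded, (n, n))
    return windows.mean(axis=(2, 3))

def fss(forecast, observed, threshold, n):
    """Fractions Skill Score: 1 is perfect, 0 is no skill."""
    pf = neighborhood_fractions(forecast >= threshold, n)
    po = neighborhood_fractions(observed >= threshold, n)
    mse = np.mean((pf - po) ** 2)
    mse_ref = np.mean(pf ** 2) + np.mean(po ** 2)  # worst-case reference MSE
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan
```

A perfect forecast scores 1, while a spatially displaced forecast of the same event scores between 0 and 1, improving as the neighborhood widens; this tolerance for small displacement errors is what makes neighborhood metrics attractive for convection-allowing scales.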