The 2016 Spring Forecasting Experiment (SFE2016) was conducted 2 May – 3 June with participation from more than 80 forecasters, researchers, and model developers from around the world. Building upon successful experiments of previous years, a main emphasis of SFE2016 was the generation of probabilistic forecasts of severe weather valid over shorter time periods than current SPC operational products. This is an important step toward addressing a National Weather Service strategy of providing nearly continuous probabilistic hazard forecasts on increasingly fine spatial and temporal scales, consistent with the NOAA Forecasting a Continuum of Environmental Threats (FACETs) vision.

As in previous experiments, a suite of new and improved experimental convection-allowing model (CAM) guidance contributed by our large group of collaborators was central to the generation of these forecasts. However, this year a major effort was made to coordinate CAM-based ensemble configurations much more closely than in previous years. Specifically, instead of each group providing a separate, independently designed CAM-based ensemble, all groups agreed on a set of model specifications (e.g., grid spacing, vertical levels, domain size, physics) so that the simulations contributed by each group could be used in carefully designed controlled experiments. This design allowed us to conduct several experiments geared toward identifying optimal configuration strategies for CAM-based ensembles, and it is especially well timed to help inform the design of the first operational CAM-based ensemble for the US, which NOAA's NCEP/Environmental Modeling Center (EMC) plans to implement in the coming years. This coordinated ensemble framework, termed the Community Leveraged Unified Ensemble (CLUE), comprised 65 members using 3-km grid spacing and supported a set of eight unique experiments, which are listed below:
1) ARW vs. NMMB: A direct comparison of the subjective and objective skill of the ARW and NMMB dynamic cores was conducted.
2) Multi-core vs. single-core ensemble design: Three ensembles were compared to test the effectiveness of a single-core vs. a multi-core configuration. The first ensemble used 5 ARW and 5 NMMB members, the second used 10 ARW members, and the third used 10 NMMB members.
3) Single Physics vs. Multi-Physics: An ensemble with perturbed initial and lateral boundary conditions tested whether there is a noticeable advantage to using multiple PBL and microphysics parameterizations vs. common physics in all members.
4) Ensemble Radar vs. Ensemble No Radar: A single-physics ensemble tested the influence of assimilating radar data. An important question is whether the influence of radar data assimilation extends to longer forecast lead times in an ensemble framework than it does in deterministic forecasts.
5) 3DVAR vs. EnKF: These two methods for data assimilation were compared in three different subsets of the CLUE.
6) GSD Radar vs. CAPS Radar Assimilation: This experiment tested two methods for assimilating radar data.
7) Microphysics Sensitivities: The impact of different microphysical parameterizations on the resulting convective storm forecasts was examined.
8) Ensemble Size Experiment: Mixed-core ensembles with equal contributions from NMMB and ARW members, using 6, 10, and 20 total members, were compared to examine the impact of ensemble size.
In this talk, the design of the CLUE will be described and results from the eight different experiments will be presented, with a focus on the NMMB vs. ARW comparisons.
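For readers who find a concrete illustration helpful, the sketch below shows one way the CLUE membership could be partitioned into the controlled-comparison subsets described above. All member names, attribute values, and counts in the sketch are illustrative assumptions and do not reflect the actual CLUE configuration.

```python
# Illustrative sketch only: member labels, attributes, and counts are
# hypothetical; they are NOT the actual CLUE member specifications.
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class Member:
    name: str     # hypothetical label, e.g., "arw_00"
    core: str     # "ARW" or "NMMB"
    physics: str  # "common" or "mixed" PBL/microphysics suite
    da: str       # "3DVAR" or "EnKF"
    radar: bool   # whether radar data were assimilated

def subset(members: List[Member], **criteria) -> List[Member]:
    """Return the members matching all attribute criteria."""
    return [m for m in members
            if all(getattr(m, k) == v for k, v in criteria.items())]

# Hypothetical member pool standing in for the 65 3-km CLUE members.
clue = (
    [Member(f"arw_{i:02d}", "ARW", "common", "3DVAR", True) for i in range(10)]
    + [Member(f"nmmb_{i:02d}", "NMMB", "common", "3DVAR", True) for i in range(10)]
)

# Experiment 2: single-core vs. multi-core ensembles of equal size.
arw_only  = subset(clue, core="ARW")[:10]
nmmb_only = subset(clue, core="NMMB")[:10]
mixed     = subset(clue, core="ARW")[:5] + subset(clue, core="NMMB")[:5]

# Experiment 8: mixed-core ensembles of 6, 10, and 20 total members.
sizes = {n: subset(clue, core="ARW")[: n // 2] + subset(clue, core="NMMB")[: n // 2]
         for n in (6, 10, 20)}
```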