This mesoscale analysis quality issue directly affects downstream systems such as the National Blend of Models (NBM), which relies heavily on URMA as its calibration and validation data source. In fact, the quality of the NBM depends greatly on which observation/analysis dataset is used, which makes NBM evaluation challenging and ad hoc in the absence of a reliable reference. Moreover, observations from networks such as Mesonet, while abundant, have serious quality issues at some stations because of ad hoc quality-control methods, such as flagging of bad station data based on episodic reports over complex terrain. To systematically reduce these biases, the quality-control and bias-correction algorithms need to be built on science-based approaches that physically account for the highly variable characteristics of flow over complex terrain.
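As a purely illustrative sketch of the distinction drawn above, the Python snippet below contrasts an ad hoc station blacklist with a simple physically based check that adjusts the model first guess from the grid-cell elevation to the station elevation using a standard lapse rate before flagging an observation. The function names, the 6.5 K/km lapse rate, the 3 K tolerance, and the example numbers are assumptions chosen for illustration only and do not represent the operational URMA/NBM quality-control code.

```python
# Illustrative sketch only: an ad hoc blacklist versus a terrain-aware
# consistency check. All names, thresholds, and numbers are assumptions.

STANDARD_LAPSE_RATE = 0.0065  # K per meter (standard-atmosphere assumption)

def adhoc_flag(station_id, blacklist):
    """Ad hoc approach: reject a station because it was reported bad before."""
    return station_id in blacklist

def terrain_aware_flag(obs_temp_k, bg_temp_k, station_elev_m, grid_elev_m,
                       tolerance_k=3.0):
    """Physically based approach: move the background (first-guess) temperature
    from the model grid-cell elevation to the station elevation with a standard
    lapse rate before comparing, so a valid mountain station is not flagged
    merely because it sits well above the smoothed model terrain."""
    elev_diff_m = grid_elev_m - station_elev_m
    bg_at_station_k = bg_temp_k + STANDARD_LAPSE_RATE * elev_diff_m
    return abs(obs_temp_k - bg_at_station_k) > tolerance_k

# Example: a ridge-top station 600 m above the model terrain.
# Raw innovation is 270.0 - 274.5 = -4.5 K and would fail a 3 K gross-error
# check, but after the lapse-rate adjustment (274.5 - 0.0065 * 600 = 270.6 K)
# the observation is retained.
print(terrain_aware_flag(obs_temp_k=270.0, bg_temp_k=274.5,
                         station_elev_m=2400.0, grid_elev_m=1800.0))  # False
```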
A good example of how the degraded quality of URMA affects the performance of the NBM can be found in the March 2018 atmospheric river event. NBM (v3.0 and v3.1) quantitative precipitation forecasts (QPF) over steep terrain were clearly deficient compared with National Digital Forecast Database (NDFD) and Weather Prediction Center (WPC) forecast products, which is believed to stem from the reduced accuracy of URMA over steep terrain.
The degraded quality of URMA can originate from biases in two kinds of upstream systems: the forecast model and the observation-data quality-control / bias-correction system. To systematically alleviate the biases in URMA, the systematic biases present in these upstream systems must be fundamentally reduced using science-based methods rather than ad hoc ones.
In this work, we present our rationale and efforts for scientifically developing modeling requirements based on needs collected from field forecasters, using a recent atmospheric river event. This talk serves as an introduction to the Analysis and Nowcast (a.k.a. 0-18 Hour Forecasting) sessions of the Ninth Conference on the Transition of Research to Operations.