Evaluating the AWIPS-II Tracking Meteogram Tool at the Operations Proving Ground

Thursday, 8 January 2015: 2:00 PM
232A-C (Phoenix Convention Center - West and North Buildings)
Kim J. Runk, NOAA/NWS, Kansas City, MO; and C. M. Gravelle

The National Weather Service (NWS) created an Operations Proving Ground (OPG) as part of the Weather-Ready Nation (WRN) initiative. The OPG was conceived as a system that would complement the NOAA family of testbeds and ultimately play an integral part in achieving desired improvements in the Research-to-Operations (R2O) process. The value of this concept was reinforced in a congressionally commissioned report by the National Academy of Public Administration (NAPA) in 2013.

The primary role of the OPG is to validate the “last mile” of NWS R2O via Operational Readiness Evaluations (OREs). These sessions are designed to ensure that promising new capabilities, such as those emerging from NOAA-affiliated laboratories, testbeds, and development proving grounds, are evaluated by NWS forecasters in a realistic, but controlled, operational setting. To earn endorsement for field implementation, a proposed capability must demonstrate unique value to the forecast process, such as improving warning decisions, enhancing risk communication, or augmenting situational awareness, while posing minimal adverse impact on human factors such as workflow, workload, and cognitive assimilation. The first formal ORE was conducted at the OPG in May 2014. The weeklong experiment focused on testing the usability and usefulness of the Tracking Meteogram (TM), an AWIPS-II application developed collaboratively by the NASA Short-Term Prediction Research and Transition (SPoRT) Center and the NWS Meteorological Development Lab (MDL). NWS operational meteorologists from four NWS Regions were invited to participate alongside subject matter experts, trainers, developers from both SPoRT and MDL, and OPG technical support staff.

During the week, forecasters were placed in a variety of decision-making scenarios that became increasingly complex as the week progressed. The scenarios consisted of a mixture of archived cases and live weather data. Functionality issues raised by forecasters, either during the scenarios or in feedback sessions, were entered directly into the Virtual Laboratory Development Environment. In some cases, code modification was conducted “on the fly” by on-site developers to address minor bugs or to incorporate suggestions for new functionality. These changes were then installed on the OPG AWIPS-II system during breaks and integrated into the ensuing scenario for the next phase of operational testing.

This presentation will discuss the developmental path that led to the TM being accepted for evaluation, the process by which the validation experiments were conducted, the major outcomes of the ORE session, and lessons learned for incorporation into future evaluations.