Wednesday, 13 January 2016
The role of the NOAA/NESDIS/STAR Algorithm Scientific Software Integration and System Transition Team (ASSISTT) is to receive offline, research-grade code from science teams and convert it into maintainable, operational-quality code that will ultimately run in operations at NOAA/NESDIS/OSPO. Depending on the project, the ASSISTT delivers a final Delivered Algorithm Package (DAP) either directly to OSPO or to a third party for integration into a larger processing system that will later run at OSPO. Unfortunately, this approach means that most long-term algorithm system testing with live data occurs only after delivery to the system integrators. A major problem with limited system testing prior to DAP delivery is that the algorithms are not fully exercised over the possible range of operational data variability, which may include variations in data quality, latency, granularity, data gaps, changing instrument and platform modes, and corrupt data. In addition, offline testing is often performed with synthetic or test data sets whose format and content are not quite identical to those available in operations. As a result, the delivered algorithms are not as robust to data-related issues as they could be, and their products are not easily compared to those generated in system tests after delivery. Furthermore, system testing after DAP delivery is typically conducted by integrators in a test environment to which the algorithm developers have no access. If software problems arise during system testing, diagnosis and update can be slow because data and software must be passed back to the algorithm teams. To reduce these post-delivery testing issues and to smooth the R2O process, the STAR ASSISTT has developed a methodology for full system algorithm testing prior to delivery.
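As a purely illustrative sketch (the function, names, and fault probabilities below are invented for this example and are not part of any NOAA/NESDIS system), the kinds of operational variability listed above — data gaps, corrupt granules, and variable latency — can be injected into a granule stream before it reaches an algorithm under test:

```python
import random

def perturb_stream(granules, gap_prob=0.05, corrupt_prob=0.02,
                   max_delay_s=30, rng=None):
    """Yield (granule, simulated_delay_s) pairs with injected faults.

    Hypothetical test-harness helper: drops some granules to simulate
    data gaps, truncates some payloads to simulate corrupt files, and
    attaches a random delay to simulate variable data latency.
    """
    rng = rng or random.Random(0)  # fixed seed for repeatable tests
    for g in granules:
        if rng.random() < gap_prob:          # simulate a missing granule
            continue
        delay = rng.uniform(0, max_delay_s)  # simulate variable latency
        if rng.random() < corrupt_prob:      # simulate a truncated file
            g = {**g, "payload": g["payload"][: len(g["payload"]) // 2],
                 "corrupt": True}
        yield g, delay

# Feed 100 synthetic granules through the perturbed stream.
granules = [{"id": i, "payload": b"x" * 100} for i in range(100)]
out = list(perturb_stream(granules))
```

Running an algorithm against such a perturbed stream, rather than only against clean test data, is one way to exercise the robustness issues described above before delivery.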
This approach involves building and operating near real-time processing systems in the development environment that mimic, as closely as possible, the interfaces, functionality, product precedence, hardware, data handling, data formats, and data flow volumes of the target operational systems and environment. This requires a full understanding of the operational data and of the algorithm-to-system interfaces. Each system is uniquely designed to match the testing requirements of its project, but all utilize a common tool set and design methodology. This capability enables both the ASSISTT and the science algorithm developers to work within the same system environment to debug any algorithm software issues found during implementation and testing. These systems also allow the algorithm teams to provide more accurate estimates of algorithm resource usage and latency, and they provide science teams and end users with data to assist with product validation and user readiness. This presentation discusses several projects to which this methodology was applied and the crucial role it played in facilitating the R2O process.
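To make the mimic-system idea concrete, the following minimal sketch (all names, file suffixes, and the polling design are assumptions for illustration, not the ASSISTT's actual tool set) shows a near real-time driver that polls an ingest directory, runs a downstream algorithm only once its upstream precedence product exists, and records wall-clock latency per granule — the kind of measurement that supports resource-usage and latency estimates:

```python
import time
from pathlib import Path

def run_mimic_system(ingest_dir, run_algorithm, precedence_suffix=".l1b",
                     product_suffix=".l2", poll_s=1.0, max_polls=None):
    """Hypothetical near real-time driver loop.

    Polls ingest_dir for upstream files (precedence_suffix), invokes the
    algorithm under test on each new one to produce product_suffix, and
    records per-granule processing latency.
    """
    ingest = Path(ingest_dir)
    processed = set()
    latencies = {}
    polls = 0
    while max_polls is None or polls < max_polls:
        for upstream in sorted(ingest.glob(f"*{precedence_suffix}")):
            if upstream in processed:
                continue  # product precedence already satisfied and run
            start = time.monotonic()
            product = upstream.with_suffix(product_suffix)
            run_algorithm(upstream, product)  # algorithm under test
            latencies[upstream.name] = time.monotonic() - start
            processed.add(upstream)
        polls += 1
        time.sleep(poll_s)
    return latencies

# Usage example with a trivial stand-in algorithm that copies bytes.
import tempfile
tmp = tempfile.mkdtemp()
for name in ("a.l1b", "b.l1b"):
    (Path(tmp) / name).write_bytes(b"granule")
lat = run_mimic_system(tmp, lambda src, dst: dst.write_bytes(src.read_bytes()),
                       poll_s=0.0, max_polls=1)
```

Because the driver, not the algorithm, owns file discovery and sequencing, the same algorithm code can later be handed to system integrators unchanged — which is the point of testing it inside a system that mimics the operational environment.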