Wednesday, 25 January 2017: 2:00 PM
Conference Center: Chelan 2 (Washington State Convention Center)
In recent years there have been important strides in the use of accelerators such as Nvidia's GPUs and Intel's Xeon Phi for weather, ocean, and climate modeling. This work is on the verge of moving out of the research world and into operations; in fact, at least one meteorological service has already implemented an operational forecast model on GPUs. This paper provides a historical perspective by describing work done in 1979 that examined the potential of "attached array processors" connected to mainframe computers to run the then-current National Meteorological Center (NMC, a predecessor organization to NCEP) model, the 9-Layer Primitive Equation Model. As today, that work was motivated by the desire to increase performance at lower cost than legacy approaches allowed; today the use of accelerators is additionally driven by the need to obtain more performance with less power. The issues identified in that work, including data transfer, memory requirements, and software environments, still resonate, which raises two questions: What has changed to enable the progress we are now seeing? And which challenges visible then still challenge us today? The authors will address these questions by comparing today's accelerator-based systems to those available over 35 years ago, and will also show just how much more complex today's models are than the state of the art in 1979.