Thursday, 14 January 2016: 2:30 PM
Room 344 (New Orleans Ernest N. Morial Convention Center)
At the Intel® Parallel Computing Center located at the University of Oklahoma, our objective is to improve the performance of the community-based shallow water model ADCIRC (http://adcirc.org) by utilizing the Intel® Xeon Phi co-processors. ADCIRC is a 2D/3D coastal circulation and storm surge model. Motivation for this work stems from many recent real-world applications, one of which is a software superstructure called the ADCIRC Surge Guidance System (ASGS), which produces real-time predictions of flooding extents due to tropical and extra-tropical storms (http://nc-cera.renci.org). To produce these high-resolution, real-time simulations, ADCIRC must be computationally efficient. The current ADCIRC model runs in parallel through MPI and, on most computer architectures, achieves an MPI parallel efficiency of 90%. Beyond that point, however, ADCIRC's parallel efficiency degrades due to MPI communication overhead. Because MPI ranks have separate memory address spaces, input data cannot be shared within a node, which raises the memory requirement per compute unit and increases last-level cache displacement. Thus, to further increase scalability, MPI communication must be reduced. Shared memory within a node could be exploited to reduce the number of MPI messages passed, potentially by a factor equal to the number of cores per node. In this presentation, we will discuss the implementation of a hybrid MPI-OpenMP model and the utilization of the Intel® Xeon Phi co-processors.
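As a minimal illustration of the hybrid approach described above (not ADCIRC code, and with placeholder work standing in for the actual solver), the sketch below uses one MPI rank per node with OpenMP threads across the node's cores, so the node's threads share one address space and only one message per node participates in communication:

```c
/* Hypothetical hybrid MPI+OpenMP sketch: one rank per node, threads per core.
 * Illustrative only; the loop body is a stand-in, not the ADCIRC solver. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank;

    /* Request funneled threading: only the master thread makes MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 1000000;          /* local mesh nodes owned by this rank */
    double local_sum = 0.0;

    /* Threads share the rank's address space, so input data and the mesh
     * are stored once per node rather than once per core. */
    #pragma omp parallel for reduction(+:local_sum)
    for (int i = 0; i < n; ++i) {
        double eta = (double)i / n; /* stand-in for a water-surface value */
        local_sum += eta;
    }

    /* One MPI message per node joins the reduction, instead of one per core
     * as under a pure-MPI domain decomposition. */
    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0,
               MPI_COMM_WORLD);

    if (rank == 0)
        printf("threads per rank: %d, global sum: %f\n",
               omp_get_max_threads(), global_sum);

    MPI_Finalize();
    return 0;
}
```

Built with an MPI compiler wrapper and OpenMP enabled (e.g., `mpicc -fopenmp`), launching one rank per node and setting the thread count to the cores per node reduces the message count by roughly that same factor, which is the effect the hybrid model aims to exploit.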