Thursday, 13 September 2007: 3:30 PM
Kon Tiki Ballroom (Catamaran Resort Hotel)
Eric R. Pardyjak, University of Utah, Salt Lake City, UT; and B. Singh, A. Norgren, and P. Willemsen
As a result of the demand for high-performance graphics driven by the computer video game industry, the processing performance of video cards is evolving rapidly. Recent trends in computing have shifted toward multi-core processors and programmable graphics processors with highly parallel data paths for processing geometry and pixels. Multi-core machines are now readily available with 2 cores, and machines with 4, 8, and even 16 cores are projected for the near future. Data parallelism in modern graphics cards is also increasing, with the raw performance of graphics processing units (GPUs) surpassing that of CPUs. While initially specialized for computer graphics, GPUs can be programmed for general-purpose computations. As a result, GPUs have become useful computational tools, providing inexpensive, highly parallel data paths that can accelerate a wide range of scientific and simulation applications.
One area of simulation that could greatly benefit from inexpensive parallelization is emergency-response transport and dispersion modeling in urban areas. In a previous paper, we implemented a simple Lagrangian dispersion model, based on the Quick Urban and Industrial Complex (QUIC) dispersion modeling system, on the GPU for a continuous point-source release in a uniform flow. The GPU simulations (which utilize upwards of 128 pixel-processing components on the GPU) outperformed the CPU simulations by three orders of magnitude. Another important benefit of GPU-based dispersion simulation is real-time visualization of the dispersion field, since all data necessary for visualization are already on the GPU. For the present paper, we use the QUIC dispersion modeling framework to extend our GPU simulations to a fully urbanized domain with multiple explicitly resolved buildings. We compare GPU simulation results to the standard QUIC CPU results, highlight performance gains, and discuss challenges associated with implementing Lagrangian dispersion models on the GPU.
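The abstract does not give implementation details of the GPU dispersion code; the mention of pixel-processing components suggests a shader-based approach. Purely as a rough illustration of the per-particle work a Lagrangian dispersion step involves, the hypothetical CUDA-style kernel below advances one particle per thread using a mean wind plus a simple random-walk turbulence term. All names (advect_particles, Params, the single sigma velocity scale) are illustrative assumptions, not taken from QUIC or the authors' implementation.

    // Hypothetical sketch: one time step of a Lagrangian particle dispersion
    // model on the GPU. Each thread advances one particle using a uniform mean
    // wind plus a random turbulent velocity fluctuation (random-walk closure).
    // The curandState array is assumed to be initialized elsewhere.
    #include <cuda_runtime.h>
    #include <curand_kernel.h>

    struct Params {
        float dt;          // time step [s]
        float sigma;       // turbulent velocity scale [m/s]
        float u, v, w;     // uniform mean wind components [m/s]
        int   n;           // number of particles
    };

    __global__ void advect_particles(float3 *pos, curandState *rng, Params p)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= p.n) return;

        // Draw Gaussian velocity fluctuations for this particle.
        float up = p.sigma * curand_normal(&rng[i]);
        float vp = p.sigma * curand_normal(&rng[i]);
        float wp = p.sigma * curand_normal(&rng[i]);

        // Advance the particle with mean wind plus fluctuation.
        pos[i].x += (p.u + up) * p.dt;
        pos[i].y += (p.v + vp) * p.dt;
        pos[i].z += (p.w + wp) * p.dt;

        // A building-aware version would also test the new position against
        // the explicitly resolved geometry and reflect particles off solid
        // cells; here only a simple ground reflection is shown.
        if (pos[i].z < 0.0f) pos[i].z = -pos[i].z;
    }

In such a layout the particle positions and random-number states stay resident in GPU memory for the whole run, which is also what makes the real-time visualization noted above possible without transferring the dispersion field back to the CPU.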