2002 Annual

Thursday, 17 January 2002: 1:44 PM
Breaking the Billion Zone Barrier—Simulation, Data Handling and Visualization: An Example
Robert Wilhelmson, Univ. of Illinois and National Center for Supercomputing Applications, Champaign, IL; and P. Woodward, S. Anderson, D. Porter, S. Peckham, and C. Shaw
Poster PDF (24.2 kB)
Comments to Organizer: My complete comments follow - I could not get them into the space provided.

The objective of this contribution is to demonstrate that the national computational grid can be used effectively to carry out research involving distributed access to grid facilities, such as those at NCSA and LCSE, for running simulations, analyzing the resulting data, and displaying it visually. As such, it crosses several IIPS topical areas, including advancements/applications in hydrology, …, data and information handling, distributed data and metadata access, internet/Web opportunities and challenges, and visualization. I believe it is important for IIPS to include presentations on high-end use of the national computational grid for projects such as this. The grid will increasingly enable new research collaborations, the ability to integrate modeling and observations, and the opportunity to carry out distributed modeling, analysis, and visualization tasks. Further, it is important that the ever-increasing repository of data from laboratories, models, and observations be available over the grid for new research endeavors. I have also submitted another abstract, to the IT Forecast topical area, that deals with plans for the Teragrid, which will significantly improve the infrastructure for generating and processing large volumes of data in a distributed environment with very high-speed networking.

Now for the Abstract:

It is now possible to carry out simulations of fluid flow in which the integration domain consists of a billion or more zones. This has been done for turbulent flow by the co-authors at the LCSE (Laboratory for Computational Science and Engineering). The use of a billion zones per field is a full order of magnitude beyond the largest mesoscale meteorology computations of which we are aware and poses new challenges in handling and displaying huge volumes of model results. Currently an atmospheric model, NCOMMAS, is being adapted to carry out such computations on parallel cluster computers composed of commodity processors and high-speed interconnects. The strategy for accomplishing such calculations was developed at the LCSE and has been characterized as cluster programming with shared memory on disk. The model transformation will be complete in several months; the model can then be run on a single computing platform or adapted to use multiple platforms on the national computational grid for a single simulation.
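The "shared memory on disk" idea can be illustrated with a minimal sketch: rather than exchanging boundary (ghost) zones via message passing, each worker publishes its boundary data as files on a shared filesystem, which its neighbors then read before the next time step. This is only a toy under stated assumptions; all function and file names below are hypothetical and do not reflect the actual LCSE implementation.

```python
import os
import tempfile

def write_boundary(shared_dir, rank, step, values):
    """Publish this worker's boundary zones for a given time step
    as a file on the shared disk."""
    path = os.path.join(shared_dir, f"rank{rank}_step{step}.txt")
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        f.write(" ".join(str(v) for v in values))
    os.rename(tmp, path)  # atomic publish: readers never see partial data

def read_boundary(shared_dir, rank, step):
    """Read a neighbor's boundary zones, or None if not yet published."""
    path = os.path.join(shared_dir, f"rank{rank}_step{step}.txt")
    if not os.path.exists(path):
        return None
    with open(path) as f:
        return [float(v) for v in f.read().split()]

if __name__ == "__main__":
    shared = tempfile.mkdtemp()               # stands in for the shared disk
    write_boundary(shared, 0, 1, [1.0, 2.0])  # worker 0 publishes its halo
    print(read_boundary(shared, 0, 1))        # a neighbor reads it back
```

The write-to-temporary-then-rename step matters: on a POSIX filesystem the rename is atomic, so a neighbor polling for the file either sees nothing or sees the complete boundary data, never a half-written file.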

Our focus is not only on the algorithm and model data-flow changes needed to carry out such NCOMMAS simulations but also on the processing of the massive amounts of data produced. Various ways of processing these data will be discussed, including the use of parallel algorithms for both analysis and visualization. Further, the presentation will provide a perspective on the use of the national computational grid for interactive exploration of the model data. This will include the viability of streaming data from the compute site(s) to the LCSE, where a full range of LCSE data analysis and visualization tools will be used. One such tool is the new and powerful volume renderer developed by one of the co-authors, David Porter. Another is the effective use of the high-resolution PowerWall at the LCSE and the DisplayWall at NCSA (National Center for Supercomputing Applications).
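The parallel-analysis pattern for data sets too large to hold in memory can be sketched as a map-reduce over "bricks" of the domain: each brick is reduced independently (in parallel), and the partial results are then combined. This is a hypothetical illustration, not the actual LCSE analysis pipeline; the brick contents and statistics are made up for the example.

```python
from multiprocessing import Pool

def brick_stats(brick):
    """Reduce one brick of zone values to a small partial result."""
    return (min(brick), max(brick), sum(brick), len(brick))

def combine(parts):
    """Merge partial results from all bricks into global statistics."""
    mins, maxs, sums, counts = zip(*parts)
    return min(mins), max(maxs), sum(sums) / sum(counts)

if __name__ == "__main__":
    # Each brick stands in for one chunk of a huge field read from disk.
    bricks = [[1.0, 5.0], [2.0, -3.0], [0.5]]
    with Pool(2) as pool:
        parts = pool.map(brick_stats, bricks)   # reduce bricks in parallel
    print(combine(parts))                       # global (min, max, mean)
```

Because each brick's reduction is independent, the same structure works whether the bricks are processed by local worker processes, as here, or by analysis jobs distributed across grid sites.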

Supplementary URL: http://redrock.ncsa.uiuc.edu/AOS/home_pubs.html