12B.2
Software Engineering for Computational Science: Observations from Experience

Thursday, 8 January 2015: 8:45 AM
130 (Phoenix Convention Center - West and North Buildings)
Jeffrey C. Carver, University of Alabama, Tuscaloosa, AL

The increasing impact of computational modeling on the study of Weather, Water and Climate motivates the need to ensure that the output of such software models is correct. It is also important for scientists to be able to develop and maintain modeling software productively, that is, with as little effort as possible. The discipline of software engineering (SE) strives to develop and evaluate tools and practices that enable developers to accomplish these goals. Because of the unique aspects of developing computational science software, many existing SE tools and practices, developed initially for the business/IT community, may not be effective or efficient without tailoring. Appropriate SE solutions must account for the salient characteristics of the computational science development environment. To identify these solutions, members of the SE community must collaborate with members of the computational science community. After providing background from the results of two community surveys, this presentation will discuss results from three types of such collaborations.

First, in collaboration with other researchers, I have conducted a series of retrospective case studies of computational science projects as part of the DARPA High Productivity Computing Systems (HPCS) program, including one case study of a weather forecasting project. The main goal of these studies was to understand the software development process followed in these projects. We identified a number of difficulties faced by computational science developers and documented how SE principles were and were not being applied in the subject projects. These case studies yielded a set of lessons learned about the development of computational science software that are important to consider moving forward. This presentation will briefly discuss some of the key lessons from these studies, including “The difficulty of validation and verification” and “Agile vs. traditional development models”.
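To make the “validation and verification” lesson concrete, the following is a minimal sketch of the kind of verification test that scientific teams often lack. It is written in Python with pytest as an illustrative choice (neither tool is named in the case studies): the test checks a numerical routine against a problem with a known analytical answer and against its expected convergence rate.

import math
import numpy as np

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule for f on [a, b] with n subintervals."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    return (b - a) / n * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

def test_trapezoid_matches_analytic_solution():
    # Verification: the integral of sin(x) over [0, pi] is exactly 2.
    approx = trapezoid(np.sin, 0.0, math.pi, 1000)
    assert abs(approx - 2.0) < 1e-5

def test_trapezoid_converges_at_second_order():
    # Halving the step size should cut the error by roughly a factor of 4.
    errors = [abs(trapezoid(np.sin, 0.0, math.pi, n) - 2.0) for n in (100, 200)]
    assert errors[0] / errors[1] > 3.5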

Second, I have been the primary organizer, along with different sets of colleagues, of a workshop series on Software Engineering for Computational Science & Engineering. This workshop series, conducted under slight variations of that title depending on the venue, brings together software engineers and computational scientists to discuss issues of interest to both groups. Over the years, researchers have presented papers on a number of topics that have led to important and fruitful discussions among the workshop attendees. The results of these discussions suggest some interesting future trends and indicate ongoing research needs. The presentation will highlight some of the most important and interesting discussion topics emerging from the workshop series, including “Unique aspects of research software”, “Communication issues”, “Measuring scientific productivity”, and “Scientific software quality goals”.

Third, to gather more direct insight and have more direct impact on computational science projects, we have established ongoing relationships with computational science project teams. The goal of these interactions is to understand the specific software development issues faced by each team, to develop and deploy a proposed solution (e.g., an SE tool or practice), and to monitor the effectiveness of the new approach. The primary benefit of these collaborations is that they provide real, concrete examples of the use of SE tools and practices in the development of computational software. To make such collaborations successful, both parties must commit to a longer-term interaction. Thus far, we have formed three such collaborations, which employed the following SE practices in computational science projects: Test-Driven Development, Peer Code Review, and Design Patterns. The presentation will briefly describe each collaboration along with the lessons learned that may be of interest to other teams.
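As a hypothetical illustration of what Test-Driven Development looks like in a scientific code, the sketch below (Python with pytest; the routine, constants, and reference values are illustrative and are not taken from any of the three collaborations) writes the test first, against known reference points, and then adds the simplest implementation that satisfies it.

import math
import pytest

# Step 1 (written first): the test encodes the expected behavior and initially fails.
def test_saturation_vapor_pressure_at_reference_points():
    # Reference values in hPa for a Magnus-type approximation at 0 C and 20 C.
    assert saturation_vapor_pressure(0.0) == pytest.approx(6.112, rel=1e-3)
    assert saturation_vapor_pressure(20.0) == pytest.approx(23.4, rel=1e-2)

# Step 2: the simplest implementation that makes the test pass.
def saturation_vapor_pressure(temp_c):
    """Saturation vapor pressure over water in hPa (Magnus approximation)."""
    return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))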

Because these findings are drawn from a number of scientific domains, including weather forecasting, the results should be of interest and value to scientists in the Weather, Water and Climate domains. In this presentation I seek not only to describe findings from previous studies, but also to begin a discussion within the community about potential solutions. I will lay out a road map for future, mutually beneficial collaborations with code development teams.