Software Engineering for Computational Science: Observations from Experience
First, in collaboration with other researchers, I conducted a series of retrospective case studies of computational science projects as part of the DARPA High Productivity Computing Systems (HPCS) program, including one case study of a weather forecasting project. The main goal of these studies was to understand the software development processes followed in these projects. We identified a number of difficulties faced by computational science developers and documented how SE principles were, and were not, being applied in the subject projects. These case studies yielded a set of lessons learned about the development of computational science software that are important to consider moving forward. This presentation will briefly discuss some of the key lessons from these studies, including: “The difficulty of validation and verification” and “Agile vs. traditional development models”.
Second, I have been the primary organizer, along with different sets of colleagues, of a workshop series on Software Engineering for Computational Science & Engineering. This workshop series, conducted under slight variations of that title depending on the venue, brings together software engineers and computational scientists to discuss issues of interest to both groups. Over the years, researchers have presented papers on a number of topics that have led to important and fruitful discussions among the workshop attendees. The results of these discussions suggest interesting future trends and indicate ongoing research needs. The presentation will highlight some of the most important discussion topics emerging from the workshop series, including: “Unique aspects of research software”, “Communication issues”, “Measuring scientific productivity”, and “Scientific software quality goals”.
Third, to gather more direct insight and have a more direct impact on computational science projects, we have begun ongoing relationships with computational science project teams. The goal of these interactions is to understand the specific software development issues faced by each team, to develop and deploy a proposed solution (e.g., an SE tool or practice), and to monitor the effectiveness of the new approach. The primary benefit of these collaborations is that they provide real, concrete examples of the use of SE tools and practices in the development of computational software. To make such collaborations successful, both parties must commit to a longer-term interaction. Thus far, we have successfully formed three such collaborations, which employed the following SE practices in computational science projects: Test-Driven Development, Peer Code Review, and Design Patterns. The presentation will briefly describe each collaboration along with the lessons learned that may be of interest to other teams.
As these findings were drawn from a number of scientific domains, including a weather code, the results should be of interest and value to scientists in the Weather, Water and Climate domains. In this presentation I seek not only to describe findings from previous studies, but also to begin a discussion within the community about potential solutions. I will lay out a road map for future, mutually beneficial collaborations with code development teams.