3.2 Testing Framework in JEDI

Tuesday, 14 January 2020: 10:45 AM
254B (Boston Convention and Exhibition Center)
Maryam Abdi-Oskouei, UCAR, Boulder, CO; and Y. Trémolet

Automated testing is a key component of the continuous integration and continuous delivery (CI/CD) pipeline. The testing framework in the Joint Effort for Data assimilation Integration (JEDI) project is designed to automatically run different suites of tests triggered by changes in the code repository, such as new pull requests or new commits to an existing pull request. These tests examine different aspects of the code: that the repository builds successfully, that model output matches a known reference output, and even that coding conventions are followed. Passing all tests assures developers that a new feature is compatible with all the JEDI components and can be merged into the repository. With automated testing, errors and incompatibilities in new code are caught at an early stage of development, making the development pipeline more efficient. Automated testing also shortens the review process, so new features are added to the repository more quickly.
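As a minimal sketch of the reference-output comparison described above, the function below checks new model output against stored reference values within a numerical tolerance. The function name, tolerances, and data are illustrative assumptions, not JEDI's actual test implementation.

```python
# Hypothetical sketch of a reference-output check: compare new model
# output to a stored reference within a numerical tolerance.
import math

def compare_to_reference(output, reference, rel_tol=1e-10, abs_tol=1e-12):
    """Return True if every output value matches its reference within tolerance."""
    if len(output) != len(reference):
        return False
    return all(
        math.isclose(o, r, rel_tol=rel_tol, abs_tol=abs_tol)
        for o, r in zip(output, reference)
    )

# A change that preserves results to within round-off passes...
assert compare_to_reference([1.0, 2.5], [1.0, 2.5 + 1e-13])
# ...while a genuine regression in the output fails.
assert not compare_to_reference([1.0, 2.5], [1.0, 2.6])
```

A tolerance-based comparison (rather than exact equality) is the usual design choice for such tests, since legitimate changes such as compiler or library upgrades can perturb floating-point results at the round-off level.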

This presentation summarizes the recent efforts and challenges in developing an automated testing framework for JEDI. We adopted a collection of modern software development tools to increase the portability of the testing system and the efficiency of code development. Travis-CI is enabled on the JEDI GitHub repositories to track changes to the repositories and to trigger different suites of tests. Docker containers are built on the Travis-CI servers to provide the environment needed to run JEDI. Maintaining multiple Docker containers, each built with different software versions and packages, lets us run the tests in multiple environments and ensure the compatibility of the JEDI code across platforms. CodeCov generates a test-coverage report that highlights the sections of the code that are not fully tested, so developers can focus on writing tests for those sections. AWS is used to run more expensive and thorough tests that assess the performance and scalability of the system.
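The Travis-CI and Docker setup described above might look roughly like the following configuration fragment. This is an illustrative sketch, not the actual JEDI configuration; the image names and build commands are assumptions.

```yaml
# Hypothetical .travis.yml sketch: each matrix entry pulls a different
# Docker image so the same test suite runs against several toolchains.
language: minimal
services:
  - docker
env:
  - JEDI_IMAGE=jedi-gnu-openmpi     # illustrative image names
  - JEDI_IMAGE=jedi-clang-mpich
script:
  - docker pull "${JEDI_IMAGE}"
  - docker run --rm -v "${TRAVIS_BUILD_DIR}:/jedi" "${JEDI_IMAGE}" /bin/bash -c "cd /jedi && mkdir -p build && cd build && ecbuild .. && make -j4 && ctest"
after_success:
  - bash <(curl -s https://codecov.io/bash)   # upload coverage to CodeCov
```

Because each matrix entry runs the identical `script` inside a different container, a pull request passes only if it builds and tests cleanly in every supported environment.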
