Wednesday, 15 January 2020
Hall B (Boston Convention and Exhibition Center)
Data sonification is the process of converting numerical data into sound. Data scientists and musicians alike are using it to interpret a vast array of phenomena, from climate change and the stock market (Chabot and Braasch, 2017) to water-systems monitoring (Lenzi et al., 2019) and salmon movements (Hegg et al., 2018). But what are the benefits of this unusual mode of data interpretation? What place does data sonification hold within the sciences, and specifically within meteorology? Work has already been done to use sonification to aid visually impaired scientists (Diaz Merced, 2013), as well as to communicate the dangers of climate change more effectively to the public (Twedt, 2018). However, research on data sonification and its scientific applications is still limited, and most studies have been conducted only within the past few years.
Data sonification should be embraced as a means to connect more of the world with the intrigue of atmospheric science and its mysteries. It provides a perspective on the data that cannot be achieved through graphs and figures, and it can leave a more lasting impression than visual representations alone. For instance, a teacher introducing new topics in a meteorology class (such as turbulent versus laminar flow; how wind, pressure, and temperature change together throughout the life cycle of a tropical cyclone; or the rise in global temperature) can present basic concepts through sonification to engage the class and stir curiosity, setting the stage for more open involvement as the course continues. This engagement is also useful in connecting with the general public, whose often limited familiarity with scientific concepts can make it harder to trust or take interest in the science being presented. Interpreting data through sound also opens the door for visually impaired students of atmospheric science to continue working in the field without barriers.
This study utilizes Python-based code and the music typesetting program LilyPond to transform meteorological datasets into music. The code goes beyond simply taking data point-for-point and assigning each value to an arbitrary note on a musical scale. Rather, it provides options for smoothing signals, excluding outliers, and amplifying important trends, with choices guided by scientific understanding rather than chance. My research focuses on sharing these “musical datasets” with the public on a regular basis, with the goals of encouraging interest and involvement in science and of raising awareness of the need for continued improvements in resources available to visually impaired scientists, welcoming as many people as possible to the study of the atmosphere.
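The pipeline described above can be illustrated with a minimal Python sketch. Everything here is an assumption for illustration, not the project's actual code: the function names, the pentatonic scale, the moving-average smoother, and the z-score outlier filter are all hypothetical choices standing in for the "smoothing signals, excluding outliers" options mentioned in the abstract. The output uses LilyPond's absolute pitch syntax (e.g. `c'`, `a''`).

```python
def moving_average(values, window=3):
    """Smooth a data series with a trailing moving average (illustrative)."""
    smoothed = []
    for i in range(len(values)):
        lo = max(0, i - window + 1)           # trailing window, shorter at the start
        smoothed.append(sum(values[lo:i + 1]) / (i + 1 - lo))
    return smoothed


def exclude_outliers(values, z=3.0):
    """Drop points more than z standard deviations from the mean (illustrative)."""
    mean = sum(values) / len(values)
    std = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    return [v for v in values if std == 0 or abs(v - mean) <= z * std]


def to_lilypond_notes(values, scale=("c", "d", "e", "g", "a"), octaves=2):
    """Map each value linearly onto a pentatonic scale spanning `octaves`,
    returning LilyPond pitch names such as c' or a''."""
    vmin, vmax = min(values), max(values)
    span = (vmax - vmin) or 1.0               # avoid division by zero for flat data
    steps = len(scale) * octaves
    notes = []
    for v in values:
        idx = int((v - vmin) / span * (steps - 1))
        octave, degree = divmod(idx, len(scale))
        notes.append(scale[degree] + "'" * (octave + 1))
    return notes


# Example: a short temperature series mapped to pitches.
temps = exclude_outliers([10.0, 12.0, 15.0, 20.0, 999.0], z=1.0)
print(to_lilypond_notes(moving_average(temps)))
```

The resulting pitch names can be pasted into a LilyPond source file (e.g. inside `\relative` or an absolute-pitch music expression) for typesetting and MIDI playback; in practice one would also assign durations and dynamics to carry the trends the abstract mentions amplifying.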