148 Surveillance Camera Based Extreme Rainfall Observation Using Deep Learning Model

Thursday, 9 May 2024
Regency Ballroom (Hyatt Regency Long Beach)
Xing Wang, Nanjing University, Nanjing, China; and J. Zhao

The increased frequency of extreme rainfall events (EREs) produced by typhoons has become an indisputable fact in recent years. Current rainfall observation systems, however, struggle to produce rainfall data at resolutions high enough for typhoon rainfall observation tasks. (i) Rain gauges are the fundamental means of measuring precipitation at ground level, but gauge networks are usually spatially sparse because a dense network is expensive; they therefore cannot adequately capture the spatial variability of precipitation, especially over complicated topography (e.g., mountainous and urban areas). (ii) Weather radar and satellites, based on remote sensing (RS) techniques, can obtain data with relatively high spatial and temporal resolution, and these data are widely used for rainfall estimation. Such measurements, however, suffer from an intrinsic weakness of the estimation principle: rainfall must be inferred indirectly from variables observable from space (e.g., cloud-top temperature and the presence of frozen particles aloft). RS-based rainfall estimates must therefore be calibrated and validated against ground-level precipitation measurements, and recent studies have shown that this method is difficult to apply to real-time urban flood forecasting. (iii) Coupling RS measurements (e.g., satellite and radar) with ground-level rain gauge observations partially bridges the gap between the discrete description of rainfall provided by the gauge network and the real spatial dynamics of precipitation fronts. Combining the various signals, however, requires working at the interface of multi-source heterogeneous data, which is complicated, particularly when short-duration rainfall is considered.

Developing a high-resolution, low-cost, ground-level rainfall monitoring network has therefore become an important research direction. The widespread deployment of surveillance cameras offers an emerging means of rainfall observation: with high spatial-temporal resolution, rainfall information derived from surveillance video and/or audio is well suited to meteorological research and has bright prospects. For surveillance video-based rain gauges, however, the visibility of individual rain particles is low during EREs because raindrops occlude one another, which leads to ERE underestimation. In contrast, rainfall audio, generated by the collision of rain particles with other objects as they fall, carries features (such as amplitude and frequency) that are important indicators of raindrop size and density, making it possible to derive rainfall intensity (RI) from audio data. Moreover, audio data are not affected by the visibility of raindrops, which compensates for the video-based gauge's insufficient performance during EREs. EREs usually acoustically mask or significantly degrade other sound events, and their distinctive acoustic features make rainfall events much easier to detect than light or moderate rainfall. Fine-grained ERE quantitation from surveillance audio data, however, has not been well addressed.
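The amplitude and frequency cues described above can be sketched as frame-level audio features. The sketch below (RMS loudness and spectral centroid, computed with numpy on synthetic noise stand-ins for light and heavy rain) is a hypothetical illustration of the general idea, not the authors' actual feature pipeline; all function names and parameters are assumptions.

```python
import numpy as np

def audio_features(signal, sr=16000, frame_len=1024, hop=512):
    """Frame-level RMS amplitude and spectral centroid of an audio clip.

    Both are plausible indicators of raindrop size/density: heavier rain
    tends to be louder (higher RMS) and to shift spectral energy.
    Hypothetical sketch, not the system described in the abstract.
    """
    frames = np.array([signal[s:s + frame_len]
                       for s in range(0, len(signal) - frame_len + 1, hop)])
    rms = np.sqrt(np.mean(frames ** 2, axis=1))            # loudness proxy
    spec = np.abs(np.fft.rfft(frames * np.hanning(frame_len), axis=1))
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / sr)
    centroid = (spec * freqs).sum(axis=1) / (spec.sum(axis=1) + 1e-12)
    return rms, centroid

# Synthetic stand-ins: "heavy rain" modeled as louder broadband noise.
rng = np.random.default_rng(0)
light = 0.05 * rng.standard_normal(16000)
heavy = 0.50 * rng.standard_normal(16000)

rms_l, _ = audio_features(light)
rms_h, _ = audio_features(heavy)
print(rms_l.mean() < rms_h.mean())  # heavier rain -> higher mean RMS
```

In a real system these per-frame features (or a full spectrogram) would be computed from surveillance microphone recordings and fed to a classifier.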

In this study, from the viewpoint of the surveillance sound space, a 3D printer was used to create a shelter for the surveillance camera that artificially defines the underlying surface on which raindrops fall. Combining knowledge of meteorology, rainfall microphysics, and acoustics, the shelter structure was designed to standardize the acoustic behavior while enhancing the consistency and specificity of raindrop sound, especially in complex scenarios such as those disturbed by different levels of wind. Convolutional neural network (CNN)-based deep learning algorithms were then used to classify ERE levels, and an audio-based ERE classification system was built. The experimental results show that the shelter facilitates audio-based rainfall representation; moreover, with the help of the shelter, the proposed system achieved about 93.4% accuracy in complex rainfall scenarios. This study supports high-resolution rainfall data production from existing surveillance resources, providing a novel and reliable alternative for perceiving EREs and for calibrating observations from current rainfall networks.
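The CNN classification stage can be sketched as a toy forward pass over a spectrogram-like input: convolution, ReLU, global average pooling, and a softmax over ERE levels. The layer sizes, the four assumed ERE classes, and the random weights below are illustrative assumptions only; the abstract does not specify the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

def conv2d(x, w, b):
    """Valid 2-D convolution: x (H, W), w (C, kh, kw), b (C,) -> (C, H', W')."""
    C, kh, kw = w.shape
    H, W = x.shape
    out = np.empty((C, H - kh + 1, W - kw + 1))
    for c in range(C):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[c, i, j] = np.sum(x[i:i + kh, j:j + kw] * w[c]) + b[c]
    return out

def classify(spectrogram, n_classes=4):
    """Toy CNN forward pass: conv -> ReLU -> global average pool -> softmax.

    Untrained random weights; shapes and class count are assumptions.
    """
    w = 0.1 * rng.standard_normal((8, 3, 3))   # 8 learned filters (assumed)
    b = np.zeros(8)
    h = np.maximum(conv2d(spectrogram, w, b), 0.0)  # ReLU
    pooled = h.mean(axis=(1, 2))                    # global average pool, (8,)
    w_fc = 0.1 * rng.standard_normal((n_classes, 8))
    logits = w_fc @ pooled
    p = np.exp(logits - logits.max())
    return p / p.sum()                              # class probabilities

# A random 64x32 "spectrogram" standing in for a rainfall audio clip.
probs = classify(rng.standard_normal((64, 32)))
print(probs)  # probabilities over the four assumed ERE levels
```

In practice the weights would be learned from labeled rainfall audio recorded under the shelter, and the argmax of the output probabilities would give the predicted ERE level.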

Keywords: Tropical rainfall; Rainfall estimation; Surveillance camera; Deep learning
