Monday, 7 January 2019: 2:15 PM
North 125AB (Phoenix Convention Center - West and North Buildings)
The temporal and spatial resolution of rainfall data is crucial for environmental modeling studies in which variability in space and time is a primary factor. Rainfall products from different remote sensors (e.g., radar or satellite) provide different space-time resolutions because of differences in their sensing principles. To complement relatively lower-resolution products (e.g., satellite), we develop a framework that augments rainfall data to finer spatial and temporal resolutions using deep neural networks. As graphics processing units (GPUs) have become more computationally capable, methods that exploit deep neural networks have gained momentum, providing models that can transform, create, and augment remote sensing data through supervised and unsupervised learning. This study employs three neural network architectures, namely Convolutional Neural Networks (CNNs), Generative Adversarial Networks (GANs), and Gated Recurrent Unit (GRU) networks, to improve radar- and satellite-based rainfall products. This paper presents novel approaches to extending and augmenting rainfall data by increasing both temporal and spatial resolution with CNNs and GANs, and it explores the capabilities of neural networks in quantitative rainfall forecasting using GRU networks.
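The abstract does not give implementation details, but a CNN-based spatial super-resolution of gridded rainfall could, in principle, follow an SRCNN-style design. The sketch below is a minimal illustration under that assumption; the class name, layer sizes, and grid dimensions are hypothetical and not taken from the paper.

```python
# Illustrative sketch only: a minimal SRCNN-style convolutional network for
# spatially super-resolving single-channel rainfall grids that have been
# interpolated to the target resolution beforehand. All sizes are hypothetical.
import torch
import torch.nn as nn

class RainfallSRCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4),   # feature extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=5, padding=2),  # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),   # reconstruction
        )

    def forward(self, x):
        # x: (batch, 1, H, W) coarse rainfall field on the target grid
        return self.net(x)

if __name__ == "__main__":
    model = RainfallSRCNN()
    coarse = torch.rand(8, 1, 128, 128)   # synthetic rain-rate grids (mm/h)
    fine = model(coarse)                  # same grid, refined detail
    print(fine.shape)                     # torch.Size([8, 1, 128, 128])
```

Similarly, the GRU-based forecasting component could be framed as a sequence model that maps past rainfall observations to the next time step. The following is only a sketch under that assumption; the feature and hidden dimensions are placeholders.

```python
# Illustrative sketch only: a minimal GRU sequence model for quantitative
# rainfall forecasting, assuming each time step is a vector of rainfall
# features (e.g., a flattened grid patch). Dimensions are hypothetical.
import torch
import torch.nn as nn

class RainfallGRUForecaster(nn.Module):
    def __init__(self, n_features=256, hidden=128):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_features)  # next-step rainfall estimate

    def forward(self, x):
        # x: (batch, time, n_features) past rainfall observations
        out, _ = self.gru(x)
        return self.head(out[:, -1])               # forecast for the next step

if __name__ == "__main__":
    model = RainfallGRUForecaster()
    past = torch.rand(4, 12, 256)   # 12 past time steps, 256 features each
    print(model(past).shape)        # torch.Size([4, 256])
```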