2B.5 Using Deep Learning for Advanced Remote Sensing of Clouds

Monday, 7 January 2019: 11:45 AM
North 125AB (Phoenix Convention Center - West and North Buildings)
Alexandria M. Russell, Northrop Grumman Corporation, McLean, VA; and M. Mason, B. D. Felton, and R. J. Alliss

Real-time identification of clouds from satellite and/or in-situ instruments is critical to many weather analysis and forecasting applications. In particular, free space optical communication systems rely heavily on the accurate identification and prediction of clouds in order to plan operations and avoid the loss of data. However, current methods for cloud detection are not only expensive to develop and maintain, but they also do not always perform well on new data, given their reliance on empirical thresholding techniques. Artificial neural networks, on the other hand, have been shown to produce accurate and robust results when applied to object detection and segmentation problems. In this study, we apply the Mask RCNN – a region-based convolutional neural network with instance segmentation capability – to the problem of identifying clouds from ground-based whole sky infrared imagery. The Mask RCNN model was chosen for its ability to distinguish multiple distinct objects from the background (object detection and localization) and to determine each object's spatial extent (masks). The Mask RCNN also has a smaller memory footprint than a fully convolutional semantic segmentation neural network, and it is more appropriate for problems with only a handful of object types (classes) and limited training data. For this work, the model was trained to identify clouds from background (only 2 classes) using several hundred Infrared Cloud Imagery (ICI) samples and corresponding truth masks. Sensitivity tests that vary the hyperparameters of the Mask RCNN were also performed. Results indicate that the model produces qualitatively realistic cloud masks, and at times even outperforms the truth dataset. The best network achieved a mean Average Precision (mAP) of 0.88 and a Mean Intersection over Union (MIoU) of 0.80 on independent validation data.
The neural network was trained on a compute cluster equipped with three 16GB Nvidia P100 GPUs. This presentation will describe the data preparation pipeline, training methodology, and resulting prediction accuracy of the model.
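The MIoU reported above averages, over the validation set, the overlap between each predicted cloud mask and its truth mask. As a minimal sketch of how that metric is computed for binary (cloud vs. background) masks – not the authors' evaluation code, just an illustration of the standard definition – consider:

```python
import numpy as np

def binary_iou(pred, truth):
    """Intersection over Union between two binary masks.

    pred, truth: 2-D arrays where nonzero pixels mark cloud.
    Returns a value in [0, 1]; defined as 1.0 when both masks are empty.
    """
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return float(intersection / union) if union else 1.0

def mean_iou(pred_masks, truth_masks):
    """Mean IoU over a set of (prediction, truth) mask pairs."""
    return float(np.mean([binary_iou(p, t)
                          for p, t in zip(pred_masks, truth_masks)]))
```

For example, a prediction that covers the true cloud pixels plus an equal number of false-positive pixels scores an IoU of 0.5, so an MIoU of 0.80 indicates substantially tighter agreement than that across the validation imagery.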