137 Cloudstream: Long-term Cloud Detection Image Prediction Model Using Multi-Channel Satellite Images

Monday, 29 January 2024
Hall E (The Baltimore Convention Center)
Eunbin Cho, SI Analytics (SIA), Daejeon, South Korea; and E. Kim and Y. Choi

Cloud cover is an important meteorological factor that affects various industries, such as solar power generation. Recent advances in deep learning have opened up the possibility of predicting weather phenomena from satellite imagery. However, previous studies have often been limited by computational resources because of the large size of satellite images, and many have therefore used low-quality satellite images that were cropped or downscaled to very small sizes.

In this work, we propose an autoencoder-based cloud cover prediction model that operates on high-quality images. We train and evaluate the model using cloud detection images and infrared channels from the Korean geostationary satellite GEO-KOMPSAT-2A (GK2A). The cloud detection images classify each pixel into three classes (cloud, probably cloud, clear sky) and are therefore closely related to cloud cover, while the infrared channels provide detailed information on clouds.
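As a sketch, a multi-channel input frame of the kind described above could be assembled by one-hot encoding the three-class cloud detection map and stacking it with the infrared channels. The image size, the number of IR channels, and the random data below are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

# Hypothetical per-frame input assembly (sizes are assumptions for illustration).
H, W = 128, 128

# Cloud detection map: 0 = clear sky, 1 = probably cloud, 2 = cloud.
cloud_mask = np.random.randint(0, 3, (H, W))
one_hot = np.eye(3)[cloud_mask]               # (H, W, 3) one-hot channels

# Stand-in for GK2A infrared channels (assumed 4 channels here).
ir = np.random.rand(H, W, 4)

# Multi-channel frame fed to the model: detection classes + IR.
frame = np.concatenate([one_hot, ir], axis=-1)
print(frame.shape)  # -> (128, 128, 7)
```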

The computational cost of training a model increases with the size and number of input images. We propose Cloudstream for effectively predicting cloud cover from multi-channel satellite images. Cloudstream consists of a CNN-based autoencoder and an RNN-based time series prediction module; we use PredRNN-V2 as the prediction module. PredRNN-V2 is a deep learning model that introduces an effective spatiotemporal representation memorization method using additional memory, and it uses patch images to reduce computational cost. In our previous study, however, we found that non-patch images are more useful for predicting satellite images with PredRNN-V2, so we experiment with non-patch images here.
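The motivation for the autoencoder, namely running the recurrent predictor in a smaller latent space, can be illustrated with a rough, back-of-the-envelope FLOPs estimate. All sizes, the 8x downsampling factor, and the ConvLSTM-style cost model are hypothetical and are not taken from the paper:

```python
# Rough cost model: a ConvLSTM-style cell computes 4 gates, each a convolution
# over the concatenated [input; hidden] tensor, so per-step FLOPs scale with
# spatial area x channels^2 x kernel area (x2 for multiply-add).
def convlstm_flops_per_step(h, w, channels, kernel=5):
    return 4 * h * w * (2 * channels) * channels * kernel * kernel * 2

# Recurrent prediction directly in pixel space at 512x512 (hypothetical widths):
pixel_space = convlstm_flops_per_step(512, 512, 64)

# Recurrent prediction in an 8x-downsampled latent space produced by an encoder:
latent_space = convlstm_flops_per_step(64, 64, 64)

print(pixel_space / latent_space)  # -> 64.0, since spatial area shrinks by 8^2
```

The point of the sketch is only that the recurrent module's cost drops quadratically with the downsampling factor; the autoencoder adds its own (non-recurrent) cost on top.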

We demonstrate the performance of Cloudstream through two experiments. In the first, we compare Cloudstream with PredRNN-V2 on 128x128 pixel input images; this experiment evaluates the impact of Cloudstream's autoencoder on prediction performance and computational complexity. In the second, we compare Cloudstream on 128x128 and 512x512 pixel input images; this experiment evaluates the impact of high-resolution satellite images on model performance.

In the first experiment, we find that Cloudstream achieves F1 scores similar to PredRNN-V2 (with non-patch input) on all classes of the cloud detection images while reducing floating-point operations (FLOPs) by more than a factor of three. In the second experiment, we find that high-resolution input images generally yield higher prediction performance than low-resolution input images.
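For reference, per-class F1 on a pixel-wise three-class map can be computed as follows. The data is a toy example; the paper's exact evaluation protocol is not specified here:

```python
# Per-class F1 for a 3-class pixel map (0 = clear sky, 1 = probably cloud,
# 2 = cloud), treating each class as the positive class in turn.
def f1_per_class(pred, true, num_classes=3):
    scores = []
    for c in range(num_classes):
        tp = sum(p == c and t == c for p, t in zip(pred, true))
        fp = sum(p == c and t != c for p, t in zip(pred, true))
        fn = sum(p != c and t == c for p, t in zip(pred, true))
        denom = 2 * tp + fp + fn
        scores.append(2 * tp / denom if denom else 0.0)
    return scores

# Toy flattened pixel labels (illustrative only).
pred = [0, 1, 2, 2, 0, 1]
true = [0, 1, 2, 0, 0, 2]
print(f1_per_class(pred, true))  # -> [0.8, 0.6666666666666666, 0.5]
```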

This work aims to contribute to the prediction of solar power generation by accurately forecasting future cloud cover.
