Linfred Kingston1, Fangfang Yu2, and Xiangqian Wu3
1: Department of Computer Science, University of Maryland, College Park, MD 20742
2: ESSIC/CISESS, University of Maryland, College Park, MD 20740
3: NOAA/NESDIS/STAR, College Park, MD, 20740
Abstract
The Moon, owing to the long-term stability of its surface, serves as a stable reference target for satellite instrument calibration and inter-calibration. Because the reflectance of the illuminated lunar surface is heterogeneous, accurate image registration is essential for extracting radiance from lunar surface areas under varying viewing geometries. In this study, we explore a Convolutional Neural Network (CNN) method and compare it with the Scale-Invariant Feature Transform (SIFT) algorithm, a widely used computer vision method for extracting distinctive, invariant features. The goal is to register lunar images collected by the NOAA Geostationary Operational Environmental Satellite (GOES)-R Advanced Baseline Imager (ABI) instruments. Two ABI lunar images captured at phase angles of about 25° and 60° are used to evaluate image registration against a lunar image taken at a phase angle near 5°. Our findings reveal that the Random Sample Consensus (RANSAC) method effectively removes most of the mis-matched key points used in image registration. Compared with SIFT, the CNN identifies tenfold more matched key points across the illuminated lunar surface for the two distinct phase-angle images. Our ongoing assessment aims to enhance image registration accuracy through the matched key points; the results will be presented at the coming meeting.

