186 Extraction of 3D Cloud Information Using Stereoscopic Photogrammetry

Thursday, 31 August 2017
Zurich DEFG (Swissotel Chicago)
Hong Jiang, Univ. of Oklahoma, Norman, OK; and B. L. Cheong and T. Y. Yu

Accurate extraction of cloud information (e.g., cloud height and range) from millimeter-wavelength radar alone is impossible in some scenarios because of the radar's dependence on reflectivity. For instance, radar has difficulty distinguishing small cloud droplets from raindrops; ice clouds return low reflectivity because of their small particles; and large particles near the cloud edge can obscure cloud-boundary detection. Complementary techniques are therefore needed in these situations. In this paper, 3D reconstruction from stereoscopic images is proposed as a synergistic technique that can estimate cloud height and range in the scenarios described above. Two major factors limit reconstruction performance: the accuracy of the projection matrix, and identifying the pixel locations of the same object in the two images (i.e., corresponding-point matching). Two novel approaches are discussed to address these issues. First, the parameters of the projection matrix, such as the camera's location and pointing angles, are commonly measured with an off-the-shelf digital protractor and GPS receiver, and that accuracy is not sufficient; a calibration procedure is required to achieve the desired accuracy. However, such calibration typically requires at least six reference points (i.e., objects or landmarks with known locations), which can be challenging to obtain in practice. A new calibration method based on a constraint of epipolar geometry is proposed that does not need these reference points. Simulated and real cloud images are investigated to quantify the effectiveness of the proposed calibration method. For corresponding-point matching, the problem is exacerbated in the cloud application by the low contrast and non-rigid character of clouds. In this work, a matching algorithm based on 12 textures derived from the color images has been developed.
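The abstract does not give the reference-free calibration algorithm itself, only that it rests on an epipolar-geometry constraint. As background, that constraint says corresponding homogeneous image points satisfy x2ᵀ F x1 = 0 for the fundamental matrix F, which can be estimated from point pairs alone, with no surveyed landmarks. The sketch below is our own minimal illustration of this idea (the standard normalized eight-point estimate plus the residual a calibration would drive toward zero); the function names and design are assumptions, not the paper's method.

```python
import numpy as np

def eight_point_fundamental(pts1, pts2):
    """Estimate the fundamental matrix F from >= 8 corresponding
    points using the normalized eight-point algorithm."""
    def normalize(p):
        # Translate centroid to origin, scale mean distance to sqrt(2)
        c = p.mean(axis=0)
        s = np.sqrt(2) / np.mean(np.linalg.norm(p - c, axis=1))
        T = np.array([[s, 0, -s * c[0]],
                      [0, s, -s * c[1]],
                      [0, 0, 1.0]])
        ph = np.column_stack([p, np.ones(len(p))])
        return (T @ ph.T).T, T

    p1, T1 = normalize(pts1)
    p2, T2 = normalize(pts2)
    # Each correspondence contributes one row of the homogeneous system A f = 0
    A = np.column_stack([p2[:, 0:1] * p1, p2[:, 1:2] * p1, p1])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # A valid fundamental matrix has rank 2: zero the smallest singular value
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    F = T2.T @ F @ T1            # undo the normalization
    return F / np.linalg.norm(F)

def epipolar_residual(F, pts1, pts2):
    """Mean |x2^T F x1|: the epipolar-constraint violation that a
    reference-free calibration would minimize over camera parameters."""
    h1 = np.column_stack([pts1, np.ones(len(pts1))])
    h2 = np.column_stack([pts2, np.ones(len(pts2))])
    return np.mean(np.abs(np.einsum('ij,jk,ik->i', h2, F, h1)))
```

On exact correspondences the residual is essentially zero, while mismatched point pairs violate the constraint, which is what makes it usable as a calibration objective without known landmarks.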
These 12 textures are easy to generate and capture the cloud boundary, which benefits corresponding-point matching along that boundary. The developed matching algorithm is compared with three popular matching algorithms (epipolar-geometry-based, SIFT-based, and non-parametric local-transform-based methods) on both simulated and real cloud images. It will be demonstrated that the newly proposed matching algorithm has the best overall performance of the four for cloud feature matching. Reconstructions from multiple sets of simulated and real cloud images will also be presented to demonstrate the feasibility of the proposed techniques.
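The abstract does not enumerate the 12 color-derived textures, so the following is only a hedged illustration of the general idea of texture-based matching: build a stack of per-pixel texture channels and match a pixel along its epipolar row by minimizing a windowed sum-of-squared-differences cost over the stack. The four channels below (intensity, gradients, local variance) are our placeholder choices, not the paper's.

```python
import numpy as np

def box3(a):
    """3x3 box mean with edge padding."""
    p = np.pad(a, 1, mode='edge')
    H, W = a.shape
    return sum(p[i:i + H, j:j + W] for i in range(3) for j in range(3)) / 9.0

def texture_stack(img):
    """Per-pixel texture channels: intensity, gradients, local variance.
    Generic stand-ins for the paper's 12 color-derived textures."""
    f = img.astype(float)
    gy, gx = np.gradient(f)
    var = box3(f * f) - box3(f) ** 2
    return np.stack([f, gx, gy, var], axis=-1)

def match_along_row(F1, F2, r, c, win=3, max_disp=20):
    """Find the column in image 2, row r, whose texture window best
    matches the window around (r, c) in image 1 (SSD cost), searching
    disparities 0..max_disp along the row."""
    h = win // 2
    ref = F1[r - h:r + h + 1, c - h:c + h + 1]
    best_col, best_cost = c, np.inf
    for d in range(max_disp + 1):
        cc = c - d
        if cc - h < 0:
            break
        cost = np.sum((ref - F2[r - h:r + h + 1, cc - h:cc + h + 1]) ** 2)
        if cost < best_cost:
            best_cost, best_col = cost, cc
    return best_col
```

Matching on a multi-channel texture stack rather than raw intensity is what helps in low-contrast regions: even where brightness is nearly uniform, gradient and variance channels can still disambiguate candidates near the cloud boundary.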