Thursday, 1 February 2024: 5:00 PM
338 (The Baltimore Convention Center)
Atmospheric visibility is an important and complex meteorological variable that directly affects safe and reliable transportation. In particular, declining visibility poses an increased risk to automotive, aviation, and maritime traffic and operations. Traditional visibility sensors, e.g., those of the Automated Surface Observing Systems (ASOS) network, are costly and designed for air traffic use; as a result, these sensor networks have limited statewide coverage. In contrast, camera footage is widely available, accessible, and fairly inexpensive. One invaluable source of quality images is the set of cameras mounted on New York State Mesonet (NYSM) towers. While it is possible to construct a model that estimates visibility for a single camera or location, such a model does not generalize to new locations with different physical features or fields of view. We propose a comparative visibility model that generalizes to new locations. We train a convolutional neural network (CNN) to compare the relative visibility of a query image to that of a reference image, both derived from the same camera. The resulting model can then compare images from cameras that were not included in the training set, including cameras with different maximum visibility distances, fields of view, and physical characteristics. Furthermore, by comparing a query image against a set of reference images with known visibility distances from the same camera, we can estimate the query image's underlying visibility distance. Results from a large, combined NYSM/ASOS data set show that models learned with the proposed method generalize to new locations: the approach succeeds in both the comparative task and the numerical visibility prediction task for images from previously unseen image sources.
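
The abstract does not include implementation details, but the two-stage idea (pairwise comparison, then distance estimation against a labeled reference set) can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not the authors' implementation: the PyTorch framework, the VisibilityComparator architecture, and the reference-scanning heuristic in estimate_visibility are all hypothetical.

    import torch
    import torch.nn as nn

    class VisibilityComparator(nn.Module):
        """Hypothetical siamese CNN: outputs a logit for the event that the
        query image has lower visibility than a reference image taken by
        the same camera."""
        def __init__(self):
            super().__init__()
            # Shared convolutional encoder applied to both images.
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            # Comparison head over the concatenated embeddings.
            self.head = nn.Linear(64, 1)

        def forward(self, query, reference):
            zq = self.encoder(query)
            zr = self.encoder(reference)
            # Logit of P(visibility(query) < visibility(reference)).
            return self.head(torch.cat([zq, zr], dim=1))

    def estimate_visibility(model, query, references, distances):
        """Estimate the query's visibility distance by comparing it against
        same-camera reference images with known distances. Simple heuristic:
        scan references from haziest to clearest and return the distance of
        the last reference the query is judged at least as clear as."""
        order = sorted(range(len(distances)), key=lambda i: distances[i])
        estimate = distances[order[0]]
        with torch.no_grad():
            for i in order:
                hazier = torch.sigmoid(model(query, references[i])) > 0.5
                if hazier.item():
                    break                   # query is hazier than this reference
                estimate = distances[i]     # query at least as clear as this one
        return estimate

Because each training pair comes from a single camera, a comparator of this kind is pushed to learn relative clarity cues rather than scene-specific features, which is plausibly what allows the abstract's reported transfer to cameras unseen during training.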

