The visibility algorithm examines the natural edges within the image (the horizon, tree lines, roadways, permanent buildings, etc.) and compares each image with a historical composite image. This comparison enables the system to determine the visibility in the direction of the sensor by detecting which edges are visible and which are not. The prototype system is tuned by comparing its output with National Weather Service ASOS visibility measurements. A primary goal of the automated camera imagery feature extraction system is to ingest digital imagery with only limited site-specific information such as location, height, angle, and visual extent, making the system easier for users to implement.

A two-level (roof- and road-level) test camera suite was installed at MIT/LL to refine the existing algorithm under controlled conditions and to evaluate the algorithm's dependence on camera height. Once a tuned algorithm was generated, a set of state DOT traffic cameras from Utah and Alaska was first tuned to match the MIT/LL cameras. Each DOT camera was then generically tuned using the original MIT/LL camera tuning and a limited set of camera image characteristics (min/max range, zoom, etc.).

There are, of course, many challenges in providing a reliable automated estimate of visibility under all conditions (camera blockage or movement, dirt or raindrops on the lens, etc.), and the system attempts to compensate for these situations. While ASOS-measured visibility was used as truth in tuning the system, this approach posed some significant problems. The primary issues were the tuning of high visibilities (ASOS reports all visibilities of 10 miles or greater as 10 miles, while the camera may detect much more distant targets) and the spatial and temporal variability of the ASOS report relative to the camera's view angle.
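The edge-comparison step described above can be sketched as follows. This is an illustrative reconstruction, not the deployed system's code: the function names, the target-region bookkeeping, and the visibility threshold are all assumptions, and a simple gradient magnitude stands in for whatever edge detector the real system uses.

```python
import numpy as np

def edge_strength(img):
    """Stand-in for a real edge detector: per-pixel gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def estimate_visibility(current, composite, targets, threshold=0.5):
    """Estimate visibility as the farthest known target whose edges survive.

    targets: list of (range_miles, region) pairs, where region is a
    (row_slice, col_slice) tuple bounding a persistent edge feature
    (horizon, tree line, building outline, ...) in the composite image.
    """
    cur_e = edge_strength(current)
    ref_e = edge_strength(composite)
    visible = []
    for rng, region in targets:
        ref = ref_e[region].mean()
        if ref == 0:
            continue  # no reference edge in this region
        # An edge counts as visible if it retains enough of the
        # strength it shows in the historical composite.
        if cur_e[region].mean() / ref >= threshold:
            visible.append(rng)
    return max(visible, default=0.0)
```

With per-target ranges known, the estimate falls out directly: the farthest target whose edges still register determines the visibility in the camera's view direction.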
The MIT/LL fixed test camera installed at roof level yielded excellent results in estimating all ranges of visibility. However, the road-level camera performed significantly worse on the same data set: it was incapable of distinguishing between edges at different ranges, as the shallow attack angle of the camera tended to blur all the edges together. The extension of the algorithm methodology to additional existing DOT cameras in Utah and Alaska was successful, but some problems were discovered. Camera movement and changes in zoom factor adversely affect algorithm performance, in particular the development of an accurate historical composite. Generically applying the original test-suite algorithm by adding camera characteristic variables was marginally successful; data from more cameras need to be studied to optimize this transformation. Overall, however, the ability to tune the algorithm on multiple cameras using a small subset of verification data indicates the general utility of the visibility algorithm.
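One way to guard the historical composite against the camera-movement and zoom problems noted above is to reject frames whose edge maps no longer register with the composite before blending them in. The sketch below is a hypothetical approach, not the deployed system's method: `update_composite`, the blending weight `alpha`, and the correlation gate `shift_tol` are all assumptions.

```python
import numpy as np

def update_composite(composite, edge_map, alpha=0.05, shift_tol=0.8):
    """Fold a new edge map into the historical composite, skipping frames
    that appear shifted (camera movement or a zoom change).

    A frame is accepted only if its edge map correlates strongly with the
    composite; otherwise the composite is returned unchanged.
    """
    c = composite.ravel() - composite.mean()
    e = edge_map.ravel() - edge_map.mean()
    denom = np.linalg.norm(c) * np.linalg.norm(e)
    corr = float(c @ e / denom) if denom else 0.0
    if corr < shift_tol:
        return composite  # likely moved or zoomed: don't corrupt the composite
    # Exponential moving average keeps the composite slowly adaptive.
    return (1 - alpha) * composite + alpha * edge_map
```

A small blending weight keeps any single frame, clear or obscured, from dominating the composite, while the correlation gate drops frames whose edges have shifted wholesale.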