4A.7
An Automated Visibility Detection Algorithm Utilizing Camera Imagery
Robert G. Hallowell, MIT Lincoln Lab., Lexington, MA; and M. Matthews and P. A. Pisano
Clarus (Latin for "clear") is an initiative to develop and demonstrate an integrated surface transportation weather observing, forecasting, and data management system (www.clarusinitiative.org). As part of this effort, the Federal Highway Administration (FHWA) is also promoting research into methods for applying new sensor or probe data, such as vehicle infrastructure integration (VII), and innovative ways of using existing sensors, such as camera imagery. MIT Lincoln Laboratory (MIT/LL) was tasked with evaluating the usefulness of camera imagery for sensing ambient and road weather conditions. Cameras have been used for decades to remotely monitor traffic and protect life and property. Their deployment and utilization have expanded dramatically in the last decade with support from the Department of Transportation for traffic management and 511 services as part of the Intelligent Transportation System (ITS), and from the Department of Homeland Security for threat surveillance and emergency management operations. Camera sensors are important for surface transportation applications because they directly sense the road/rail environment. However, manual observers are generally focused on their primary tasks of traffic management or security, and ancillary information, such as weather or road conditions, is not routinely reported or archived. Because it is impractical to add dedicated observers to report these road weather variables, automated road weather detection algorithms are needed. Previous research has shown that statistical edge analysis of camera imagery can be used to estimate the meteorological visibility of a region. As part of the Clarus research initiative, MIT/LL has further refined the experimental visibility algorithm and examined ways to extend it generically to the multitude of state DOT-owned traffic cameras now in operation. This paper discusses the methods used to develop the algorithm and includes test results from a tiered test site and several Utah and Alaska DOT traffic cameras.
The visibility algorithm examines the natural edges within the image (the horizon, tree lines, roadways, permanent buildings, etc.) and compares each image with a historical composite image. This comparison enables the system to determine the visibility in the direction of the sensor by detecting which edges are visible and which are not. The prototype system is tuned against National Weather Service Automated Surface Observing System (ASOS) visibility measurements. A primary goal of the automated camera imagery feature extraction system is to ingest digital imagery with only limited site-specific information, such as location, height, angle, and visual extent, thereby making the system easier for users to implement. A two-level (roof- and road-level) test camera suite was installed at MIT/LL to refine the existing algorithm under controlled conditions and to evaluate the algorithm's dependence on camera height. Once a tuned algorithm was generated, a set of Utah and Alaska state DOT traffic cameras was first tuned to match the MIT/LL cameras. Finally, each DOT camera was tuned generically using the original MIT/LL camera tuning and a limited set of camera image characteristics (min/max range, zoom, etc.). There are, of course, many challenges in providing a reliable automated estimate of visibility under all conditions (camera blockage or movement, dirt or raindrops on the lens, etc.), and the system attempts to compensate for these situations. While ASOS-measured visibility was used as truth in tuning the system, this approach had some significant problems. The primary issues were the tuning of high visibilities (ASOS caps reported visibility at 10 miles, whereas the camera may detect much more distant targets) and the spatial and temporal variability of the ASOS report relative to the camera's view angle.
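The abstract does not give implementation details, but the core edge-comparison idea can be sketched as follows. This is a minimal illustration in Python/NumPy, not the authors' code: the gradient edge operator, the running-average composite, the thresholds, and the linear score-to-miles mapping are all assumptions made for the example; in the actual system the mapping to visibility would be tuned against ASOS reports.

    import numpy as np

    def edge_map(gray):
        # Gradient-magnitude edge map (a simple stand-in for whatever
        # statistical edge operator the real system uses).
        gy, gx = np.gradient(gray.astype(float))
        return np.hypot(gx, gy)

    def update_composite(composite, edges, alpha=0.05):
        # Exponential running average of edge strength over many frames;
        # persistent scene edges (horizon, tree lines, buildings) dominate,
        # while transient edges (vehicles, shadows) wash out.
        return (1.0 - alpha) * composite + alpha * edges

    def visibility_score(edges, composite, edge_floor=10.0):
        # Fraction of historically strong edge pixels still detectable in
        # the current frame. edge_floor is an assumed threshold selecting
        # reliable edge sources from the composite.
        mask = composite > edge_floor
        if not mask.any():
            return 0.0
        visible = edges[mask] > 0.5 * composite[mask]
        return float(visible.mean())

    def visibility_miles(score, scale=12.0):
        # Placeholder mapping from edge score to miles; a fielded system
        # would fit this mapping against ASOS visibility reports.
        return scale * score

The key idea the sketch captures is that only edges persisting in the historical composite are trusted as range targets; the fraction of those edges still detectable in the current frame serves as the raw visibility signal.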
The MIT/LL fixed test camera installed at roof level yielded excellent results in estimating visibility across its full range. However, the road-level camera performed significantly worse on the same data set: its shallow attack angle tended to blur edges at different ranges together, so the camera could not distinguish among them. The extension of the algorithm methodology to additional existing DOT cameras in Utah and Alaska was successful, but some problems were discovered. Camera movement and changes in zoom factor adversely impact algorithm performance, in particular the development of an accurate historical composite. Generically applying the original test suite algorithm by adding camera characteristic variables was marginally successful; data from more cameras need to be studied to optimize this transformation. Overall, however, the ability to tune the algorithm on multiple cameras using a small subset of verification data indicates the general utility of the visibility algorithm.
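The abstract does not say how camera movement is detected, but one plausible safeguard for the historical composite, sketched below under the assumption of purely translational motion, is to estimate the frame-to-frame shift by phase correlation and skip composite updates when the shift is large. This is an illustrative technique, not the method reported in the paper.

    import numpy as np

    def camera_shift(reference, frame):
        # Estimate the (dy, dx) translation between two grayscale frames
        # via phase correlation; a large shift suggests the camera moved
        # or was re-aimed, so the frame should not update the composite.
        f1 = np.fft.fft2(reference.astype(float))
        f2 = np.fft.fft2(frame.astype(float))
        cross = f1 * np.conj(f2)
        cross /= np.abs(cross) + 1e-9      # keep phase information only
        corr = np.abs(np.fft.ifft2(cross))
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        # Wrap indices in the upper half of each axis to negative shifts.
        if dy > reference.shape[0] // 2:
            dy -= reference.shape[0]
        if dx > reference.shape[1] // 2:
            dx -= reference.shape[1]
        return int(dy), int(dx)

A zoom change would defeat this translational check; handling it would require a scale-aware method (e.g., log-polar correlation), which is beyond the scope of this sketch.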
Session 4A, Advances and Applications in Transportation Weather
Tuesday, 16 January 2007, 1:30 PM-5:30 PM, 216AB