In this talk, a model for reducing errors in lidar turbulence estimates is presented. Techniques for reducing errors from instrument noise, volume averaging, and variance contamination are combined in the model to produce a corrected value of the turbulence intensity (TI), the ratio of the wind speed standard deviation to the mean wind speed and a commonly used parameter in wind energy. In the final step of the model, machine learning techniques are used to further decrease the error in lidar TI estimates.
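As a minimal sketch of the TI definition above (not the model presented in the talk), the following assumes NumPy and a hypothetical, externally estimated instrument-noise variance `noise_var` that is subtracted from the measured variance, one common way to reduce noise-induced TI inflation:

```python
import numpy as np

def turbulence_intensity(u, noise_var=0.0):
    """TI = sigma_u / mean(u) for a wind-speed record u (e.g. a
    10-minute window). `noise_var` is an assumed, externally
    estimated instrument-noise variance removed from the measured
    variance before taking the square root."""
    u = np.asarray(u, dtype=float)
    var = np.var(u, ddof=1) - noise_var  # noise-corrected variance
    var = max(var, 0.0)                  # guard against over-correction
    return np.sqrt(var) / np.mean(u)

# Illustrative sample: 10 m/s mean wind with ~1 m/s fluctuations
rng = np.random.default_rng(0)
u = 10.0 + rng.normal(0.0, 1.0, 600)
ti = turbulence_intensity(u)             # roughly 0.1 for this sample
```

Removing the noise variance before forming TI always lowers the estimate, which is the intended direction of the correction for instrument noise.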
The model was tested on pulsed Doppler lidar measurements from sites in Oklahoma and Colorado, and the model's output was compared to TI estimates from instruments on collocated met towers at the sites. The model performed well for the majority of atmospheric conditions, with the exception of strongly unstable periods, when variance contamination created large errors in the initial lidar TI estimates. Consequently, several techniques for reducing variance contamination in lidar data were explored to determine the best way to address it in the model. These techniques include spectral fitting and the use of Taylor's frozen turbulence hypothesis to estimate the change in vertical velocity across the lidar scanning circle.
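The Taylor's-hypothesis idea above can be sketched as follows. Under the frozen turbulence assumption, eddies are advected unchanged at the mean wind speed, so vertical velocity at the downwind edge of the scanning circle lags that at the upwind edge by the circle diameter divided by the mean wind speed. The function names, the 100 m diameter, and the circular-shift approximation below are illustrative assumptions, not the talk's implementation:

```python
import numpy as np

def frozen_turbulence_lag(diameter_m, mean_wind_ms):
    """Advection time for an eddy to cross the scan circle:
    lag = diameter / mean wind speed (Taylor's hypothesis)."""
    return diameter_m / mean_wind_ms

def advect_series(w_upwind, tau_s, dt_s):
    """Predict downwind vertical velocity by shifting the upwind
    series w_upwind (sampled every dt_s seconds) by the lag tau_s.
    A circular shift is used here purely as a sketch."""
    n = int(round(tau_s / dt_s))
    return np.roll(w_upwind, n)

# Hypothetical example: 100 m scan diameter, 10 m/s mean wind
tau = frozen_turbulence_lag(100.0, 10.0)   # 10 s advection lag
```

Comparing the shifted upwind series against the measured downwind series gives one way to estimate how much vertical-velocity variation across the circle contaminates the retrieved horizontal-wind variance.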