12A.8 Use of User-Defined Metrics for Evaluating Solar Forecasts

Thursday, 2 July 2015: 9:45 AM
Salon A-2 (Hilton Chicago)
Tara L. Jensen, NCAR/Research Applications Laboratory, Boulder, CO; and S. E. Haupt, B. G. Brown, T. L. Fowler, J. H. Gotway, J. Prestopnik, J. K. Lazo, and S. D. Drobot

The National Center for Atmospheric Research (NCAR), together with partners from other national laboratories, universities, and industry, is taking part in the US Department of Energy (DOE) SunShot program by building, deploying, and assessing the SunCast solar power forecasting system. Through this project, a full suite of metrics was identified in a collaborative effort between the NCAR verification team, led by the NCAR/Research Applications Laboratory, and the IBM verification team, led by the National Renewable Energy Laboratory. The statistical suite includes measures of accuracy, variability, ramp events, uncertainty, and probability. Additionally, synthesis tools were adopted to allow the end users to better assess forecast skill. These metrics are being applied systematically to forecasts developed by the two teams. The metrics project began by taking into account the needs of the industry stakeholders and will culminate with assessing the forecast skill through traditional metrics used to assess weather forecasts (e.g., mean absolute error, root mean square error, false alarm ratio, probability of detection, etc.) as well as the economic impact of improved solar forecasting. This presentation will focus on the traditional metrics, with examples of what the metrics address and how they were used to evaluate and diagnose forecast strengths and weaknesses during the development of the NCAR SunCast solar power forecasting system.
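As an illustration of the traditional measures named above, the following is a minimal sketch of how they might be computed for a solar power forecast. This is not the SunCast implementation; the `ramp_threshold` parameter and the step-change definition of a ramp event are assumptions introduced here for the example.

```python
import numpy as np

def traditional_metrics(forecast, observed, ramp_threshold=0.2):
    """Illustrative verification metrics for a solar power forecast.

    forecast, observed: same-length arrays of power (e.g., normalized
    to plant capacity).
    ramp_threshold: hypothetical cutoff defining a 'ramp event' as a
    step-to-step change exceeding this fraction of capacity.
    """
    forecast = np.asarray(forecast, dtype=float)
    observed = np.asarray(observed, dtype=float)
    err = forecast - observed

    mae = np.mean(np.abs(err))            # mean absolute error
    rmse = np.sqrt(np.mean(err ** 2))     # root mean square error

    # Categorical ramp-event verification: flag consecutive-step
    # changes larger than ramp_threshold in each series.
    fcst_ramp = np.abs(np.diff(forecast)) > ramp_threshold
    obs_ramp = np.abs(np.diff(observed)) > ramp_threshold

    hits = np.sum(fcst_ramp & obs_ramp)
    misses = np.sum(~fcst_ramp & obs_ramp)
    false_alarms = np.sum(fcst_ramp & ~obs_ramp)

    # probability of detection: fraction of observed ramps forecast
    pod = hits / (hits + misses) if (hits + misses) else np.nan
    # false alarm ratio: fraction of forecast ramps that did not occur
    far = false_alarms / (hits + false_alarms) if (hits + false_alarms) else np.nan

    return {"MAE": mae, "RMSE": rmse, "POD": pod, "FAR": far}
```

Continuous measures such as MAE and RMSE describe overall error magnitude, while categorical scores such as POD and FAR diagnose how well a specific event of interest (here, a power ramp) is captured, which is why verification suites typically report both kinds.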