641 User-Centric Metrics for Evaluating Solar Forecasts

Wednesday, 13 January 2016
Tara L. Jensen, NCAR/RAL, Boulder, CO; and T. Fowler, B. G. Brown, S. E. Haupt, B. Kosovic, and J. K. Lazo

The National Center for Atmospheric Research (NCAR), together with partners from other national laboratories, universities, and industry, is taking part in the US Department of Energy (DOE) SunShot program by building, deploying, and assessing the SunCast solar power forecasting system. Through this project, a full suite of metrics was identified in a collaborative effort between the NCAR team's verification effort, led by the NCAR Research Applications Laboratory, and the IBM team's verification effort, led by the National Renewable Energy Laboratory. The statistical suite includes measures of accuracy, variability, ramp events, uncertainty, and probability. Additionally, synthesis tools were adopted to allow end users to better assess forecast skill. These metrics are being applied systematically to forecasts developed by the two teams. The metrics project began by taking into account the needs of industry stakeholders and will culminate in an assessment of forecast skill using traditional weather-forecast verification metrics (e.g., mean absolute error, root mean square error, false alarm ratio, probability of detection) as well as the economic impact of improved solar forecasting. This presentation will focus on using and interpreting the user-centric metrics of this project.
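To make the traditional metrics named above concrete, the sketch below shows one way they might be computed for a paired series of forecast and observed solar power; it is purely illustrative and not part of the SunCast system, and the ramp threshold and variable names are assumptions.

```python
import numpy as np

def forecast_metrics(forecast, observed, ramp_threshold):
    """Illustrative accuracy and categorical metrics for a solar power forecast.

    forecast, observed : 1-D arrays of power (e.g., MW) at matching times.
    ramp_threshold     : change between consecutive times that counts as a
                         ramp event (an assumed, user-chosen value).
    """
    forecast = np.asarray(forecast, dtype=float)
    observed = np.asarray(observed, dtype=float)

    # Continuous accuracy measures
    mae = np.mean(np.abs(forecast - observed))
    rmse = np.sqrt(np.mean((forecast - observed) ** 2))

    # Categorical (contingency-table) measures for ramp events
    fcst_ramp = np.abs(np.diff(forecast)) >= ramp_threshold
    obs_ramp = np.abs(np.diff(observed)) >= ramp_threshold
    hits = np.sum(fcst_ramp & obs_ramp)
    misses = np.sum(~fcst_ramp & obs_ramp)
    false_alarms = np.sum(fcst_ramp & ~obs_ramp)

    # Probability of detection and false alarm ratio
    pod = hits / (hits + misses) if (hits + misses) else np.nan
    far = false_alarms / (hits + false_alarms) if (hits + false_alarms) else np.nan

    return {"MAE": mae, "RMSE": rmse, "POD": pod, "FAR": far}
```

In this sketch the continuous measures (MAE, RMSE) score the power forecast directly, while the categorical measures (POD, FAR) score a derived yes/no event, here a ramp defined by an assumed threshold on the change between consecutive times.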