P1.5
Simple Statistics For Science Fair Weather Projects
William P. Roeder, 45th Weather Squadron, Patrick AFB, FL; and D. E. Harms
Science fair projects can be improved through better application of statistics. After more than 20 years of judging science fairs, the authors continue to observe the same statistical shortfalls. The most frequent shortfall is the near-total absence of any statistics, which is surprising since competitors and their teachers know that statistics are part of the judging criteria; the likely cause is the lack of statistical instruction in American secondary education. The second major shortfall is weak application of statistics when they are attempted.
A simple statistical process can be applied to virtually all science fair projects. The student should first be taught the basics of statistics; otherwise the process is just a rote checklist and the student learns nothing. In particular, the student must understand the natural variation of measurements and the idea of a statistically insignificant difference.

The first step in the general process is to ensure an adequate sample size. A sample of at least 25-30 independent events is normally recommended for representative statistics, but in science fairs, samples of five or fewer are typical. The second step is to graph the raw data for visual inspection. While graphing is done in many science fair projects, it is far from universal, and when it is done, the best type of graph is often not used. Visual inspection of the graph can identify outliers that should be eliminated from subsequent analysis and reveal patterns in the data that guide the selection of later statistical tests. Plotting the standard deviation of the data can be very helpful, but this is virtually never done. The third step is to calculate and graph the averages and the standard deviation of the averages (a sketch of steps two and three appears below). The fourth step is final statistical testing, such as hypothesis testing, regression analysis, or performance evaluation, depending on the goals of the experiment.

Weather science fair projects often evaluate the performance of weather forecasts. A 2 x 2 contingency table is used to evaluate binary yes/no forecasts. Even this simplest of all possible forecasts requires three independent metrics, typically Probability Of Detection (POD), False Alarm Rate (FAR), and Critical Success Index (CSI). POD and FAR are good metrics, but the Heidke Skill Score or Kuiper Skill Score is superior to CSI. In science fair evaluations of weather forecasts, even these most basic verifications usually are not done (a sketch of the contingency-table metrics also appears below). None of these procedures is so complicated that a computer with statistical software is mandatory.
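To make steps two and three concrete, the following is a minimal sketch in Python. The measurement values, group names, and file names are all invented for illustration; any similar dataset would do. It graphs the raw data for visual inspection, then plots the group averages with error bars given by the standard deviation of the averages (the standard error of the mean).

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical raw measurements from two experimental groups
# (values are invented for illustration only).
shaded = np.array([2.1, 2.4, 1.9, 2.6, 2.2, 2.3, 2.0])
unshaded = np.array([2.9, 3.1, 2.7, 3.4, 3.0, 2.8, 3.2])

for name, data in [("shaded", shaded), ("unshaded", unshaded)]:
    mean = data.mean()
    std = data.std(ddof=1)            # sample standard deviation
    sem = std / np.sqrt(len(data))    # standard deviation of the average
    print(f"{name}: n={len(data)}, mean={mean:.2f}, std={std:.2f}, sem={sem:.2f}")

# Step 2: graph the raw data for visual inspection (outliers, patterns).
plt.plot(shaded, "o", label="shaded")
plt.plot(unshaded, "s", label="unshaded")
plt.xlabel("trial number")
plt.ylabel("measurement")
plt.legend()
plt.savefig("raw_data.png")
plt.close()

# Step 3: graph the averages with the standard deviation of the averages.
means = [shaded.mean(), unshaded.mean()]
sems = [shaded.std(ddof=1) / np.sqrt(len(shaded)),
        unshaded.std(ddof=1) / np.sqrt(len(unshaded))]
plt.errorbar(["shaded", "unshaded"], means, yerr=sems, fmt="o", capsize=5)
plt.ylabel("mean measurement")
plt.savefig("averages.png")
plt.close()
```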
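For step four, a two-sample t-test is one simple hypothesis test a student could apply; the abstract names hypothesis testing only generically, so the specific test and the data below are illustrative assumptions. The sketch assumes SciPy is available.

```python
from scipy import stats

# Hypothetical measurements from two groups (invented for illustration).
group_a = [2.1, 2.4, 1.9, 2.6, 2.2, 2.3, 2.0]
group_b = [2.9, 3.1, 2.7, 3.4, 3.0, 2.8, 3.2]

# Two-sample t-test: is the difference between the group means
# larger than natural variation alone would explain?
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("The difference is statistically significant at the 5% level.")
else:
    print("The difference may be explained by natural variation alone.")
```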
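The forecast-verification metrics follow directly from the four cell counts of the 2 x 2 contingency table. This sketch uses the standard textbook formulas, with POD = hits / (hits + misses) and FAR = false alarms / (hits + false alarms); the example counts are invented.

```python
# 2 x 2 contingency table for a binary yes/no forecast:
#   hits:          forecast yes, observed yes
#   false_alarms:  forecast yes, observed no
#   misses:        forecast no,  observed yes
#   correct_nulls: forecast no,  observed no
def verify(hits, false_alarms, misses, correct_nulls):
    a, b, c, d = hits, false_alarms, misses, correct_nulls
    pod = a / (a + c)        # Probability Of Detection
    far = b / (a + b)        # False Alarm Rate: yes forecasts that failed to verify
    csi = a / (a + b + c)    # Critical Success Index
    # Heidke Skill Score: accuracy relative to random chance.
    hss = 2 * (a * d - b * c) / ((a + c) * (c + d) + (a + b) * (b + d))
    # Kuiper Skill Score: POD minus the probability of false detection.
    kss = a / (a + c) - b / (b + d)
    return {"POD": pod, "FAR": far, "CSI": csi, "HSS": hss, "KSS": kss}

# Example with invented counts: 30 hits, 10 false alarms, 5 misses, 55 correct nulls.
for name, score in verify(30, 10, 5, 55).items():
    print(f"{name}: {score:.3f}")
```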
A wealth of more advanced statistical tests is of course available, but applying even this basic four-step procedure would greatly improve the quality of science fair projects. Indeed, it would likely turn middle-placed projects into winners, and winners into winners at higher-level competitions. Integrating statistics into secondary school science courses is the better long-term fix.
Poster Session 1, University Outreach Activities and K-12 Educational Initiatives
Sunday, 13 January 2002, 4:00 PM-4:00 PM