The problem lies in the use of statistical hypothesis testing as a “rubber stamp of approval”. Weather modification experiments are very costly and can collect only a relatively small amount of information. Additionally, the physical effects of weather modification are not always well understood. Thus, to squeeze every last bit of information out of this very expensive data, several types of analyses must be completed. However, if each of these analyses is considered a “test” of the efficacy of cloud seeding, then multiplicity issues dominate the results. Data from cloud seeding experiments are highly variable, and this reduces the power of even a single test to detect differences. Dividing the allowable error among multiple tests makes detecting differences practically impossible. Conversely, collecting huge quantities of data and spending small fortunes to arrive at a single test statistic is both foolish and wasteful. The recommended solution is to use statistics as a tool for discovery, a mathematical magnifying glass, not as a rubber stamp. Specific strategies for balancing hypothesis testing, multiplicity issues, and exploratory analyses will be discussed in the preprint and presented at the conference.
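A minimal simulation sketch, not part of the abstract, may help illustrate the multiplicity point: if a fixed family-wise error rate is split across several analyses (here via a simple Bonferroni-style correction), the power of each analysis to detect a modest seeding effect drops sharply. All quantities below (sample size, effect size, number of tests) are hypothetical placeholders.

```python
# Illustration (hypothetical numbers): splitting the allowable Type I error
# across many tests greatly reduces per-test power in a noisy setting.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n = 30            # seeded / unseeded observations per analysis (hypothetical)
effect = 0.4      # assumed seeding effect, in standard-deviation units
alpha = 0.05      # total allowable Type I error
n_tests = 10      # number of separate analyses treated as "tests"
n_sim = 5000      # Monte Carlo replications

def power(alpha_per_test: float) -> float:
    """Estimate power of a two-sample t-test at the given per-test alpha."""
    rejections = 0
    for _ in range(n_sim):
        seeded = rng.normal(effect, 1.0, n)
        unseeded = rng.normal(0.0, 1.0, n)
        _, p = stats.ttest_ind(seeded, unseeded)
        rejections += p < alpha_per_test
    return rejections / n_sim

print(f"Power with a single test at alpha = {alpha}: {power(alpha):.2f}")
print(f"Power per test after Bonferroni (alpha / {n_tests}): {power(alpha / n_tests):.2f}")
```

Under these assumed numbers, the per-test power falls well below the single-test power, which is the practical sense in which dividing the allowable error makes detection nearly impossible.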