Controlling the False Discovery Rate (FDR) has many favorable attributes, including only modest sensitivity to spatial autocorrelation in the underlying data. Perhaps the greatest advantage of the FDR approach is that, by design, a control limit is placed on the fraction of significant gridpoint test results that are spurious, which greatly enhances the interpretability of the spatial patterns of significant results. Because the FDR approach is not only effective but also simple and computationally fast, it should be adopted whenever the results of simultaneous multiple hypothesis tests are reported or interpreted in the literature. Its main computational demand is only that the individual gridpoint p-values be sorted and examined. The strong spatial correlation usually encountered in gridded atmospheric data can be accommodated. The consequence of employing this statistically principled procedure — in stark contrast to the all-too-common naive approach — is that there is much less scope for overstatement and over-interpretation of the results. In particular, the analyst is not tempted to construct possibly fanciful rationalizations for the many spurious local test rejections that competing methods produce, which may appear to be physically coherent structures because of the strong spatial autocorrelation.
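The modest computational demand described above — sorting the gridpoint p-values and comparing them against a rising threshold — can be illustrated with the standard Benjamini-Hochberg procedure, the usual way FDR control is implemented. The sketch below is an illustration under that assumption, not a reproduction of any particular study's code; the function name `fdr_threshold` and the synthetic p-values are hypothetical.

```python
import numpy as np

def fdr_threshold(pvals, alpha_fdr=0.10):
    """Benjamini-Hochberg p-value threshold for FDR control.

    Sorts the local (gridpoint) p-values p_(1) <= ... <= p_(N) and
    returns the largest p_(i) satisfying p_(i) <= (i / N) * alpha_fdr.
    Gridpoints whose p-values fall at or below this threshold are
    declared significant; if no p-value qualifies, 0.0 is returned
    and no local test is rejected.
    """
    p = np.sort(np.ravel(pvals))          # flatten the grid, sort ascending
    n = p.size
    # Compare each sorted p-value to the rising BH line (i / n) * alpha_fdr
    below = p <= alpha_fdr * np.arange(1, n + 1) / n
    if not below.any():
        return 0.0
    return p[below.nonzero()[0][-1]]      # largest qualifying p-value

# Hypothetical example: six local p-values, controlling FDR at 10%
pvals = np.array([0.001, 0.01, 0.02, 0.3, 0.5, 0.9])
thr = fdr_threshold(pvals, alpha_fdr=0.10)   # -> 0.02
significant = pvals <= thr                   # first three tests rejected
```

Because only a sort and an elementwise comparison are required, the cost is negligible even for large grids, consistent with the claim that the procedure is computationally fast.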