TABLE 2 The false discovery rate when *P* < 0.05

This table tabulates the theoretical results of 1000 experiments in which the prior probability that the null hypothesis is false is 10%, the sample size is large enough that the power is 80%, and the significance level is the traditional 5%. In 100 of the experiments (10%), there really is an effect (the null hypothesis is false), and you will obtain a “statistically significant” result (*P* < 0.05) in 80 of these (because the power is 80%). In the other 900 experiments, the null hypothesis is true, but you will obtain a statistically significant result in 45 of them (because the significance threshold is 5%, and 5% of 900 is 45). In total, you will obtain 80 + 45 = 125 statistically significant results, but 45/125 = 36% of these will be false positives. The proportion of “statistically significant” conclusions that are false discoveries, or false positives, depends on the context of the experiment, as expressed by the prior probability (here, 10%).

If you do obtain a small *P* value and reject the null hypothesis, you will conclude that the values in the two groups were sampled from different distributions. As noted earlier, there may be a high chance that this is a false-positive conclusion due to random sampling. But even if the conclusion is “true” from a statistical point of view and not a false positive due to random sampling, the effect may have occurred for a reason different from the one you hypothesized. When thinking about *why* an effect occurred, ignore the statistical calculations and instead think about blinding, randomization, positive controls, negative controls, calibration, biases, and other aspects of experimental design.
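The arithmetic above can be sketched as a short calculation. This is a minimal illustration, not part of the original table; the function name and parameters are chosen here for clarity:

```python
# Sketch of the Table 2 arithmetic: the false discovery rate (FDR) among
# "statistically significant" results, given a prior probability that the
# null hypothesis is false, the statistical power, and the alpha level.
def false_discovery_rate(prior, power, alpha, n_experiments=1000):
    real = n_experiments * prior       # experiments with a true effect (100)
    null = n_experiments - real        # experiments where the null is true (900)
    true_pos = real * power            # significant AND real: 100 * 0.80 = 80
    false_pos = null * alpha           # significant but null:  900 * 0.05 = 45
    return false_pos / (true_pos + false_pos)

# Values from the table: 45 / (80 + 45) = 45/125 = 36%
print(round(false_discovery_rate(prior=0.10, power=0.80, alpha=0.05), 2))  # 0.36
```

Changing the prior shows how context drives the result: with a prior of only 1%, the same power and threshold give an FDR of roughly 86%, since false positives from the many true-null experiments swamp the few real effects.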