The "error" referred to here is how often you wrongly reject a true null hypothesis (a Type I error) or fail to reject a false one (a Type II error).

Traditionally the focus has been on Type I errors (α), i.e. rejecting the "null" hypothesis when it is true. In this situation the results could have occurred due to chance alone but you interpreted them as due to systematic effects (and get egg on your face). In the history of science it has been thought better to be cautious, because ongoing investigations will reveal such errors. More recently, ecologists and managers have given greater weight to Type II errors, i.e. where there is a real environmental impact that you miss by ascribing the effect to chance (the "null" is false but you do not reject it). In this situation, if the result is accepted, there are often no further studies, and the impact continues until it becomes obvious to an observer, by which time much greater damage has been done.

The complement of the Type II error rate (β) is "power" (1 − β), your confidence that such an error can be avoided. The power of your test generally depends on four things:

sample size;

the effect size you want to be able to detect;

the Type I error rate (α) you specify; and

the variability of the sample.

Based on these parameters, you can calculate the power of your experiment. Or, as is most commonly done, you can specify the power you desire (e.g. 0.80), the α level, and the minimum effect size you would consider "interesting", and use the power equation to determine the proper sample size for your experiment.
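Both calculations can be sketched with the common normal-approximation formulae for a two-sided, two-sample comparison, where the effect size is expressed as a standardized difference d (Cohen's d). The function names here are illustrative, not from any particular package:

```python
from math import ceil, sqrt
from statistics import NormalDist

Z = NormalDist()  # standard normal, used for quantiles and tail areas

def power(n, d, alpha=0.05):
    """Approximate power of a two-sided, two-sample test with n
    observations per group and standardized effect size d."""
    z_alpha = Z.inv_cdf(1 - alpha / 2)
    return Z.cdf(d * sqrt(n / 2) - z_alpha)

def sample_size(d, alpha=0.05, target_power=0.80):
    """Smallest n per group whose approximate power reaches the target."""
    z_alpha = Z.inv_cdf(1 - alpha / 2)
    z_beta = Z.inv_cdf(target_power)
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# A medium effect (d = 0.5) at alpha = 0.05 and desired power 0.80
# needs about 63 observations per group under this approximation.
n = sample_size(d=0.5, alpha=0.05, target_power=0.80)
print(n, round(power(n, 0.5), 2))
```

The normal approximation slightly understates the sample size a t-test would require; dedicated software applies a small correction, but the logic is the same.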

It is interesting that the simplest way to improve power is to increase α, but in practice this seems rarely to be done. Logically, if the focus is on the precautionary principle, trading off the risk of a Type I error against that of a Type II error is quite rational, particularly as a Type II error would lead to more work by the proponents to gain approval, whereas a Type I error would usually be the end of it, as public authorities rarely have the resources to pursue such matters further.
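The trade-off is easy to see under the same normal approximation: holding sample size and effect size fixed, relaxing α directly raises power (i.e. lowers β). A small sketch, with an illustrative `power` helper:

```python
from math import sqrt
from statistics import NormalDist

Z = NormalDist()  # standard normal

def power(n, d, alpha):
    """Approximate power of a two-sided, two-sample test (n per group,
    standardized effect size d, Type I error rate alpha)."""
    return Z.cdf(d * sqrt(n / 2) - Z.inv_cdf(1 - alpha / 2))

# Same study throughout (n = 63 per group, d = 0.5); only alpha changes.
for alpha in (0.01, 0.05, 0.10):
    print(f"alpha = {alpha:.2f}  ->  power ~ {power(63, 0.5, alpha):.2f}")
# power rises from roughly 0.59 to 0.80 to 0.88 as alpha is relaxed
```

In other words, accepting a greater risk of crying wolf buys a smaller risk of missing a real impact, with no extra sampling effort.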