Environmental Monitoring

Green (1979, p25-64) outlined "Ten Principles" which remain the essence of good design. These principles have been reformulated and extended in Sit and Taylor (eds), Statistics for Adaptive Management (Chapter 3), and are reproduced below.

A good design has a number of elements:

  1. Formulate a clear, concise hypothesis. This sounds very "scientific", but if you cannot explain clearly what you are sampling for, you are unlikely to achieve anything. Formulating the hypothesis (the question you want to answer) is an exercise in the discipline of clear thinking. How you frame the "problem" will limit the options you develop. Modelling the problem is a useful tool for understanding the system(s) of interest and how your question translates into consequences. (See Thinking about the problem.)
  2. Ensure that controls will be present. In general, without controls, no empirical data are available to refute the argument that observed changes might have occurred by chance alone, regardless of impact or your interventions.
  3. Stratify in time and space to reduce heterogeneity. If the area to be sampled has a large-scale environmental pattern, break the area up into relatively homogeneous sub-areas and allocate samples to each in proportion to the size of the sub-area. If it is an estimate of total abundance over the entire area that is desired, make the allocation proportional to the number of organisms in the sub-area (Green, 1979, p35ff). Stratification into sub-areas reduces the "within" variability that a purely random survey, lumping all types of sub-area into one block, would carry, and so increases the likelihood of detecting "between" variation (control vs. impacted sites). All sub-areas must be adequately represented by both control and impacted sites. Stratification can be generalised to include time as well as space: systems can behave differently at different times of day, in different seasons, and during events. Relying on randomisation alone to give you the representative samples you need is inefficient. (A sketch of proportional allocation follows this list.)
  4. Take replicate samples within each combination of time, location, and any other controlled variable. Replicates are independent samples that give you a good idea of the amount of "noise" in your data; without that estimate you cannot say much about the differences between sites and/or treatments. (A sketch of estimating noise from replicates follows this list.)
  5. Determine the size of a biologically meaningful, substantive difference that is of interest. You need to decide in advance what kind or size of result will be important to you. This decision shapes the design of your experiments, not just how you interpret them. Your study should be ruled by the biology of the system, not the statistical analysis. Statistics are a very useful tool but cannot replace good thinking, good design and a real question needing a real answer.
  6. Design the experiment to achieve acceptable type I and type II error rates for detecting substantive differences, or to ensure sufficient precision of the estimates. "Power" is the probability of detecting an effect when one really exists; a low-power study will often report "no effect" when in fact there is one (a false negative). Ensuring adequate power is the "precautionary approach": it stops you from letting an environmentally damaging impact keep occurring undetected. (A simple power and sample-size sketch follows this list.)
  7. Allocate replicate samples using a probabilistic method in time and space. Your replicates should be independent of each other, and the best way to prevent bias is to use a randomised process. The "...independence of errors is the only assumption whose violation it is impossible to cure after the data have been collected, and truly random sampling will prevent that violation ... correlated errors can have more serious consequences on the validity of tests of significance than all other violations." (Green, 1979, p28). (A sketch of random plot selection within strata follows this list.)
  8. Pretest the sampling design and sampling methods. The importance of pretesting is probably the most underemphasised principle related to field studies. The argument for preliminary sampling is simply that there is no other way to check for things that can become serious problems in an environmental study. To be confident of your program outcomes you need to be confident about the design and methods, and if you don't have previous experience with these you should always do a preliminary study. It is often the best way to save time and money.
  9. Maintain quality assurance throughout the survey. There is no point doing the perfect design if you don't know how well it is implemented and how well your methods reflect the broader environmental characteristics. Hurlbert (1984) states that "... it is clear that experimental design and experimental execution bear equal responsibility for the validity and sensitivity of an experiment. Yet in a practical sense, execution is a more critical aspect of experimentation than is design. Errors in experimental execution can and usually do intrude at more points in an experiment, come in a greater number of forms, and are often subtler than design errors. Consequently, execution errors generally are more difficult to detect than design errors both for the experimenter himself and for readers of his reports. It is the insidious effects of such undetected or undetectable errors that make experimental execution so critical." Quality assurance is the only way to guard against these errors and a quality assurance plan should be considered, if only to think through the sources and kinds of errors and the consequences for interpretation.
  10. Check the assumptions of any statistical analysis. Some people think this is unimportant because many statistical tests are robust to failures of their assumptions, but it is still good discipline to think about your experiment, which statistical methods are valid for that design, and how the expected outcome links to the test results; all of this can help you optimise the design of the program and improve confidence in the outcomes. It is also important not to divorce the analysis from the problem. As Stewart-Oaten et al. (1986) note, "...all statistical procedures require assumptions, and these assumptions must be justified by reference both to the data (by plots and formal tests) and to a prior knowledge of the physical and biological system generating the observations." (A sketch of basic assumption checks follows this list.)
  11. Use the "Inter-Ocular Trauma test". Present the data so that "the results will hit you between the eyes"! If your results don't have impact, they are unlikely to be taken seriously by most (non-technical) readers. This is the "punch line" of your story. Don't cloud it with unnecessary subplots, and make sure you get a "conviction".
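
The sketches below illustrate some of the more computational principles. For principle 3, this minimal Python sketch allocates a fixed sampling budget across strata in proportion to stratum area; the stratum names, areas and budget are hypothetical examples, not recommendations.

    # Allocate a fixed sampling budget across strata in proportion to stratum area.
    # Stratum names, areas and the total budget are hypothetical examples.
    strata_area_ha = {"mudflat": 120.0, "seagrass": 45.0, "rocky_reef": 35.0}
    total_samples = 40

    total_area = sum(strata_area_ha.values())
    allocation = {name: round(total_samples * area / total_area)
                  for name, area in strata_area_ha.items()}

    print(allocation)  # {'mudflat': 24, 'seagrass': 9, 'rocky_reef': 7}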
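
For principle 4, this sketch estimates the "noise" (standard error of the mean) from a handful of replicate counts at a single site; the counts are invented for illustration.

    # Estimate the mean and its standard error from independent replicate counts.
    # The replicate counts are invented for illustration.
    from statistics import mean, stdev
    from math import sqrt

    replicates = [12, 9, 15, 11, 10]                 # counts from 5 independent plots
    m = mean(replicates)
    se = stdev(replicates) / sqrt(len(replicates))   # standard error of the mean

    print(f"mean = {m:.1f}, standard error = {se:.2f}")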
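
For principle 6, this sketch uses the standard normal-approximation formula for the number of replicates per group needed to detect a given difference between a control and an impacted site; the difference, standard deviation and error rates passed in are assumptions chosen only for illustration.

    # Replicates per group to detect a difference `delta` between two means
    # (normal approximation), given sigma, alpha (type I) and power (1 - type II).
    # The numbers passed in at the bottom are illustrative assumptions.
    from statistics import NormalDist
    from math import ceil

    def n_per_group(delta, sigma, alpha=0.05, power=0.80):
        z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided test
        z_beta = NormalDist().inv_cdf(power)
        return ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

    print(n_per_group(delta=5.0, sigma=8.0))  # about 41 replicates in each group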
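
For principle 7, this sketch draws a simple random sample of plots within each stratum from a numbered grid of candidate plots; the grid sizes and per-stratum sample sizes are hypothetical, and the random seed is recorded so the selection can be reproduced.

    # Randomly select plots within each stratum from numbered candidate plots.
    # Grid sizes and per-stratum sample sizes are hypothetical.
    import random

    random.seed(42)  # record the seed so the selection can be audited and reproduced

    candidate_plots = {"mudflat": range(1, 201), "seagrass": range(1, 76)}
    samples_needed = {"mudflat": 24, "seagrass": 9}

    selected = {stratum: sorted(random.sample(list(plots), samples_needed[stratum]))
                for stratum, plots in candidate_plots.items()}

    print(selected)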
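
For principle 10, this sketch runs two common assumption checks (normality and equality of variances) before a control-versus-impact comparison; the data are invented, and the particular tests used (Shapiro-Wilk and Levene, via scipy) are only one reasonable choice. As the Stewart-Oaten et al. quote notes, formal tests should be backed by plots of the data.

    # Basic assumption checks before a control-versus-impact comparison:
    # normality (Shapiro-Wilk) and equal variances (Levene). Data are invented.
    from scipy import stats

    control = [12.1, 9.8, 11.4, 10.9, 12.6, 11.0, 10.2, 11.8]
    impact = [8.4, 7.9, 9.1, 8.8, 7.5, 9.4, 8.0, 8.6]

    for name, data in (("control", control), ("impact", impact)):
        w, p = stats.shapiro(data)
        print(f"{name}: Shapiro-Wilk p = {p:.3f}")    # small p suggests non-normality

    stat, p = stats.levene(control, impact)
    print(f"Levene equal-variance p = {p:.3f}")       # small p suggests unequal variances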
