Environmental Monitoring

"Managing stormwater is a complex business, often requiring outfalls to allow stormwater - with its associated contaminants, rubbish, oil from roads, etc. - to be released into streams and estuaries. So, there is a great concern about possible environmental problems. Such concerns are supposedly dealt with by regulations, controls by State and Federal agencies and, sadly, requirements for 'monitoring'.

"The conservatism of regulators, and the industry in general, has resulted in most resources being committed to Best Management Practices (BMPs) without clear evidence of the benefits with little commitment to understanding other strategies or the underlying ecological processes.

"Most monitoring programs are very poorly thought-out and their technical components are inadequate. As a result, the information gathered is not of good quality and not relevant to most informed decision-making." (Underwood, 1999)

Stormwater management has many components; a "cook book" approach will be as wasteful as the current "monitoring". This project has adopted the strategy of providing essential principles which are relevant to all programs, regardless of parameters or who is undertaking the program, backed up by illustrative case studies to demonstrate how principles can be applied in particular situations.

Monitoring for management
Management is integral to good monitoring: you need clear purposes and intentions for monitoring to be effective. Without a clear understanding of specific needs and of what is important, you are rudderless. Linking monitoring explicitly to management is standard for business decision-making, and managing the environment (to the extent that is possible) should be just as explicit. Standard tools such as decision trees can be useful to clarify your thinking about why you are doing one thing and not another. Businesses might look at alternatives in terms of financial outcomes; environmental managers can consider alternatives in terms of environmental outcomes (and the risks involved).
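
The decision-tree idea can be sketched numerically as an expected-outcome comparison between management alternatives. The alternatives, probabilities, and condition scores below are entirely illustrative assumptions, not recommendations:

```python
# Minimal sketch of a decision-tree comparison of management
# alternatives. All probabilities and scores are invented for
# illustration; a real analysis would derive them from evidence.

def expected_value(branches):
    """Expected outcome of an alternative, given (probability, outcome) branches."""
    return sum(p * outcome for p, outcome in branches)

# Outcomes scored on an arbitrary 0-100 environmental-condition scale.
alternatives = {
    "install treatment wetland": [(0.7, 80), (0.3, 40)],   # works / partly fails
    "do nothing":                [(0.9, 30), (0.1, 60)],   # degrades / self-recovers
}

for name, branches in alternatives.items():
    print(f"{name}: expected condition score {expected_value(branches):.0f}")
```

The same structure lets risk be made explicit: a wide spread between an alternative's branches signals higher risk even when expected scores are similar.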

Precautionary Principle
The precautionary principle requires that environmental managers should be careful about making decisions where there is ignorance about the underlying issues. If a mistake is going to be made, it should be "in favour" of long-term environmental welfare and clarifying alternatives and expected outcomes ensures you think about these issues up front.

Statistical analyses of variable ecological data require attention to two types of mistake. The first (called Type I) is concluding there is an impact when, in fact, there isn't one. The second (or Type II) is concluding that there is no impact when there is one. Either can happen because samples in the affected habitat are not perfect measures of what is really happening. So, the precautionary principle requires that Type II errors be guarded against, because it is not cautious to miss real impacts.
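
Both error types can be made concrete with a small Monte Carlo sketch. The decision rule, effect size, noise level, and threshold below are all illustrative assumptions; the point is only that small samples of variable data make Type II errors common:

```python
# Monte Carlo sketch of Type I and Type II error rates for a simple
# decision rule: "declare an impact if the control and impact sample
# means differ by more than a threshold". All numbers are illustrative.
import random
import statistics

random.seed(1)

def mean_diff(effect, n, sd=1.0):
    """Absolute difference in sample means between control and impact sites."""
    control = [random.gauss(0.0, sd) for _ in range(n)]
    impact = [random.gauss(effect, sd) for _ in range(n)]
    return abs(statistics.mean(impact) - statistics.mean(control))

def error_rates(n, threshold=0.8, effect=1.0, trials=2000):
    # Type I: no real effect, but the rule declares an impact.
    type_i = sum(mean_diff(0.0, n) > threshold for _ in range(trials)) / trials
    # Type II: a real effect, but the rule misses it.
    type_ii = sum(mean_diff(effect, n) <= threshold for _ in range(trials)) / trials
    return type_i, type_ii

for n in (5, 30):
    t1, t2 = error_rates(n)
    print(f"n={n:2d}  Type I ~ {t1:.2f}  Type II ~ {t2:.2f}")
```

Increasing replication shrinks both error rates, which is why precautionary designs must budget enough samples to keep the Type II rate low, not just control Type I.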

Association between two variables is not evidence for causality

Develop conceptual models to assist in understanding processes

Involve stakeholders, especially the community

Spend as much time as is necessary to define the problem as a testable hypothesis

Propositions/hypotheses can only be disproved
You must never begin an experiment if you do not know, in advance, how you will analyse the data.

Pollution occurs when there is loss of values (ecological, social, economic, etc.); i.e., not all contaminants are important.

Impacts "press" or "pulse"? You need to understand the nature of the impact on the system and how the system responds.



Goal-Oriented Monitoring and Indicators

Purpose

  1. Purposes and expectations--Identify general purposes and expectations for the monitoring program.
  2. Specific program purposes--To the degree possible, identify the specific purposes of the monitoring program.
  3. Share purposes--Determine if other data collectors and users have similar purposes that may influence other monitoring programs.
  4. Stakeholders--Who needs the data or information and for what reason? Determine if other agencies share the same purposes and if they can effectively combine resources.
  5. Boundaries and timeframes--Identify general geographic boundaries and timeframes for the program.
  6. Environmental indicators--Choose environmental indicators to measure the achievement of identified program purposes.


Design programs to achieve particular purposes:
There are different kinds of designs and you need to take into account the limitations of the different possibilities.

  1. Existing environmental setting--Identify and describe the existing environmental setting, including its hydrology (surface and ground waters), biota, and resource use.
  2. Existing water-quality problems--Evaluate existing information to depict the known or suspected surface- and ground-water-quality conditions, problems, or information gaps; provide a current conceptual understanding; and identify management concerns and alternatives.
  3. Environmental indicators and data parameters--Determine the environmental indicators and habitat and related chemical, physical, biological, and ancillary data parameters to be monitored.
  4. Reference conditions--Establish reference conditions for environmental indicators that can be monitored to provide a baseline water-quality assessment.
  5. Data-quality objectives--Define the level of confidence needed, based on the data collected, to support testing management alternatives.
  6. Data-set characteristics--Determine the basis for a monitoring design that will allow successful interpretation of the data at a resolution that meets project purposes. The basis for monitoring should include statistical reliability and geographic, geohydrologic, geochemical, biological, land use/land cover, and temporal variability.
  7. Monitoring design--Develop a sampling design that could include fixed-station, synoptic, event-based, and intensive surveys; the location of sites, such as a stratified random design; and physical, chemical, biological, and ancillary indicators.
  8. Data-collection methods--Develop sampling plans, identify standardized protocols and methods (performance based if possible), and document data to enable comparison with other monitoring programs. Identify personnel and equipment needed.
  9. Timing--Describe the duration of the sampling program and the frequency and seasonality of sampling.
  10. Field and laboratory analytical support--Identify field and laboratory protocols or performance-based methods, which include detection level, accuracy, precision, turnaround time, and sample preservation.
  11. Data management--Describe the data-management protocol, including data archiving, data sharing, and data security. Ensure that all data include metadata, such as location (latitude and longitude), date, time, a description of collection and analytical methods, and QA data.
  12. Interpretation--Identify interpretive methods that are compatible with the data being collected and the program purposes.
  13. Communications--Determine how data and interpretive information can be communicated; for example, press releases, public meetings, agency meetings, conferences, popular publications, agency reports, journal articles, and so forth.
  14. Iterative--Develop feedback mechanisms to fine-tune design.
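
The metadata that the data-management step asks to accompany every stored result can be sketched as a simple record. The field names and values below are illustrative, not a standard schema:

```python
# Sketch of a per-result record carrying the metadata the
# data-management step calls for: location, date/time, methods, and a
# QA flag. Field names and example values are illustrative only.
from dataclasses import dataclass, asdict

@dataclass
class SampleRecord:
    site_id: str
    latitude: float
    longitude: float
    timestamp: str          # ISO 8601 date and time
    parameter: str
    value: float
    units: str
    collection_method: str
    analytical_method: str
    qa_flag: str            # e.g. "accepted", "estimated", "rejected"

rec = SampleRecord("OUTFALL-03", -33.865, 151.209, "2024-02-10T09:30:00",
                   "total suspended solids", 42.0, "mg/L",
                   "grab sample", "gravimetric (illustrative method)", "accepted")
print(asdict(rec))
```

Storing results only alongside metadata like this is what makes later comparison with other programs, and re-analysis years later, possible at all.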



Ecological systems are very variable and differences are not always important

Spatial replication of sampling sites

Temporal replication of sampling sites

Flexible and Comprehensive Monitoring

Types of Design
There are a large number of classes of experimental design. They differ with respect to the relationship between the experimental treatment(s) and the measured response. The following summary table is based on Table 7.1 in Brown and Rothery (1993).

Population characteristics                         | Experimental aims                                          | Sampling design            | Experimental design
---------------------------------------------------|------------------------------------------------------------|----------------------------|-------------------------------------
Homogeneous random variation                       | Estimating and comparing population parameters             | Simple random sampling     | Completely randomised design
Heterogeneous with systematic and random variation | Estimating and comparing means                             | Stratified random sampling | Randomised block design
Trends                                             | Analysis of pattern and process                            | Systematic sampling        | Response surface; repeated measures
Factorial structure                                | Estimating and comparing means for combinations of factors | Factorial designs          | Factorial designs
Dependence relationship                            | Prediction of a value from a single predictor              | Simple random sampling     | Regression analysis
                                                   | Prediction of a value from more than one predictor         |                            | Multiple regression
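
The contrast between simple random sampling and stratified random sampling can be demonstrated with a small simulation. The two-habitat site, the means, and the sample sizes below are illustrative assumptions:

```python
# Sketch contrasting simple random sampling with stratified random
# sampling of a heterogeneous site: two habitat "strata" with different
# means. All numbers are invented for illustration.
import random
import statistics

random.seed(7)

# A site that is half muddy (high contaminant values) and half sandy (low).
mud = [random.gauss(40, 5) for _ in range(500)]
sand = [random.gauss(10, 5) for _ in range(500)]
population = mud + sand
true_mean = statistics.mean(population)

def simple_estimate(n=20):
    """Ignore the strata: sample the whole site at random."""
    return statistics.mean(random.sample(population, n))

def stratified_estimate(n=20):
    """Sample each stratum in proportion to its size, then combine."""
    half = n // 2
    return (0.5 * statistics.mean(random.sample(mud, half))
            + 0.5 * statistics.mean(random.sample(sand, half)))

simple_errs = [abs(simple_estimate() - true_mean) for _ in range(500)]
strat_errs = [abs(stratified_estimate() - true_mean) for _ in range(500)]
print(f"mean error, simple random: {statistics.mean(simple_errs):.2f}")
print(f"mean error, stratified:    {statistics.mean(strat_errs):.2f}")
```

When the systematic (between-stratum) variation is large relative to the random (within-stratum) variation, stratification gives a markedly more precise estimate for the same sampling effort, which is exactly the situation the second row of the table describes.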


Understand the difference between precision and accuracy
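
The distinction can be shown numerically with two hypothetical instruments measuring a known standard: one precise but biased (poor accuracy), one unbiased but noisy (poor precision). The bias and spread figures are illustrative assumptions:

```python
# Sketch of precision vs accuracy: two hypothetical instruments
# measuring a known 50 mg/L standard. Accuracy is closeness to the
# true value (low bias); precision is repeatability (low spread).
import random
import statistics

random.seed(3)
TRUE_VALUE = 50.0

biased_precise = [random.gauss(55.0, 0.5) for _ in range(100)]   # systematic +5 bias
unbiased_noisy = [random.gauss(50.0, 5.0) for _ in range(100)]

for name, readings in [("biased but precise", biased_precise),
                       ("unbiased but noisy", unbiased_noisy)]:
    bias = statistics.mean(readings) - TRUE_VALUE     # accuracy component
    spread = statistics.stdev(readings)               # precision component
    print(f"{name}: bias {bias:+.2f}, spread {spread:.2f}")
```

A monitoring program needs both: a precise but biased method will consistently report the wrong concentration, and no amount of replication will fix it.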
Statistical "power" and avoiding environmental harm

Make sure the program is understood.

  1. Establish working relations--Establish working relations with Federal, State, Tribal, local, academic, and private agencies that collect and use water-quality information. If the agency has many programs, then integrate the individual monitoring programs into overall program goals.
  2. Incorporate needs of others--If possible, incorporate the needs of other agencies into the purposes of the program. Ensure the inclusion of data qualifiers with stored data so others know the accuracy and precision of the environmental data that were collected and analyzed.



Methods Comparability
Implementation.

  1. Establish and document sites--Construct wells, shelters, gauge houses, staff gauges, and other structures as needed in preparation for data collection; document ancillary data for sites.
  2. Collect data--Collect data according to monitoring design and protocols; coordinate with other agencies where appropriate.
  3. Review results--Review data-collection activities to ensure that protocols and QA plan are being followed and that data is complete and meets stated purposes.
  4. Store and manage data--Archive data in such a manner that the accuracy and precision are maintained.
  5. Share data--Provide data for other agencies upon request.
  6. Summarize data--Provide data-summary information to managers when applicable.



Information Automation, Accessibility, and Utility

Quality Assurance/Quality Control
Quality-assurance plan--Develop a quality-assurance (QA) plan that documents data accuracy and precision, representativeness of the data, completeness of the data set, and comparability of data relative to data collected by others.

Interpretation.

  1. Data reliability--Define the accuracy and precision of environmental data by using quality-control data.
  2. Interpret data to meet stated purposes--Interpret the data, including a description of the water-resources system, by using existing environmental and ancillary data to provide information useful for making water-quality-management decisions.
  3. Statistical methods and model documentation--Use statistical packages and deterministic models that are well documented.
  4. Management alternatives--Test management alternatives when they are known.
  5. Coordinate interpretations--Consider management alternatives when interpreting data to meet the needs of collaborators and customers.
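
The first step, defining accuracy and precision from quality-control data, commonly uses replicate samples and spiked samples. The QC values below are illustrative:

```python
# Sketch of defining data reliability from QC data: precision as the
# relative standard deviation (RSD) of field replicates, and accuracy
# as percent recovery of a known spike. The values are illustrative.
import statistics

replicates = [12.1, 11.8, 12.4, 12.0]              # mg/L, field replicates
rsd = 100 * statistics.stdev(replicates) / statistics.mean(replicates)

unspiked, spiked, spike_added = 12.0, 21.4, 10.0   # mg/L
recovery = 100 * (spiked - unspiked) / spike_added

print(f"precision (RSD): {rsd:.1f}%")
print(f"accuracy (spike recovery): {recovery:.0f}%")
```

Reporting these alongside the environmental data is what lets other users judge whether the data are fit for their purposes.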



Assessment and Reporting
Evaluate monitoring program.

  1. Meet goals and objectives--Determine if monitoring program goals and objectives were met.
  2. Identify problems--Identify any monitoring problems associated with collecting and analyzing samples; storing, disseminating, and interpreting data; and reporting the information to managers and the public.
  3. Evaluate costs--Evaluate the costs of the monitoring program relative to other costs, such as clean-up, lost environmental value, and the product produced.
  4. Feedback--Use the results of the monitoring-program evaluation to identify current and future needs and activities of agencies and data users.

Report results.
  1. Coordinate--Participate in the distribution of information to and with other agencies.
  2. Write and distribute technical reports--Describe current water-quality conditions, spatial distribution, temporal variability, and the source, cause, transport, fate, and effects of contaminants on humans, aquifers, and ecosystems, as appropriate.
  3. Communicate with multiple audiences--Write lay reports or executive summaries for nontechnical audiences and peer review reports for technical audiences.
  4. Make presentations--Make presentations to assist management and the public in understanding the significance of results.
  5. Make data available--Provide basic data for other data users as requested.


