Risk is the chance of an adverse event with specific consequences occurring within a certain timeframe. Risk assessment is a tool to facilitate informed decision-making. The process of risk assessment is based on quantifying the probability of an uncertain, undesirable event occurring, either now or in the future.
Perception of risk is very much an individual matter, and is affected by a variety of factors. Our personal experience and beliefs have a strong influence on our perception of risk. Cultural differences also contribute substantially to perceptions and acceptance of risk, so that different social groups react differently when confronted by the same hazards. Furthermore, individuals do not necessarily feel the same way about any given risk from day to day. Other factors affecting our perception of risk include how much control we have over the hazard in question, how equitably the risk is distributed, and how well we understand the technical details. Peter Sandman has described the equation for risk management as:
Risk = Hazard + Outrage,
where hazard is the technical assessment of risk. The implication is that outrage management is a key component of risk communication so that the perception(s) of the community and stakeholders align(s) with the actual hazard.
In general, the less control and understanding we have, the greater we perceive the risk to be (motor vehicles vs nuclear power). Additionally, people are generally poor judges of probabilistic events, tending to substantially underestimate the risk of events that have a high probability of occurrence, and to overestimate the risk of low probability events (motor vehicle crashes vs nuclear power plant explosions). This framework emphasizes the benefits of quantitative risk assessment over qualitative risk assessment, because the latter is subjective and is influenced by the (mostly unacknowledged) biases of the risk assessor.
Psychological biases that are often overlooked in subjective risk assessment include:
the way an issue is presented may influence our behaviour even when the underlying outcome is unchanged;
people are generally overconfident in assessing the quality and reliability of their own judgements (experts vs community);
people have a tendency to be influenced by initial estimates, either their own or those of others;
people have a tendency to be influenced by recent and memorable events, and
most people, including experienced scientists, draw inferences from data that could only be justified with much larger samples, and often overestimate the risk of multi-factored events (e.g. assigning the probability of the most obvious single event to the whole sequence, which must always be less likely).
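The last bias above is the conjunction effect, and it follows directly from how probabilities combine. A minimal sketch (the failure chain and its step probabilities are hypothetical, chosen only for illustration):

```python
import math

# The probability of a multi-step sequence of independent events is the
# product of the step probabilities, so it can never exceed the probability
# of any single step -- assigning the most obvious step's probability to the
# whole sequence therefore always overestimates it.
steps = [0.10, 0.50, 0.30]  # hypothetical: spill, barrier failure, missed detection
joint = math.prod(steps)
print(joint)  # ~0.015, below even the least likely single step (0.1)
```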
The risk assessment and management framework involves a number of key steps (Figure 1), including:
Defining the problem - this involves careful scoping of the problem, agreement on how it is to be assessed, and how the acceptability of actions will be judged.
Deciding on the important values, and identifying hazards to these values - hazards are prioritised by evaluating their effects on valued elements of ecosystems and ecosystem services.
Analysing the risks to the identified values - the analysis process used needs to be appropriate for the situation in order to provide adequate information for decision making. Guidance is provided on both qualitative and quantitative methods.
Characterising the risks - the technical details of risk analyses need to be made accessible to decision-makers and broader stakeholders. In particular, the uncertainties and assumptions associated with analyses require careful and transparent documentation.
Making decisions - the best management option or strategy is the one that effectively minimizes the ecological risks while also being cost-effective and acceptable to the stakeholders.
Managing the risks - a risk management plan provides recommendations on managing or mitigating all high or unacceptable risks. The risk management plan should include a robust program to monitor progress to ensure the strategies are working, and a review and feedback process for making changes if needed.
A major difficulty faced by many (most) managers of aquatic and terrestrial resources is the need to make decisions for situations where there is considerable uncertainty in understanding how the system works and how particular management actions will influence the system. It is rare to have well understood cause-effect relationships between the threats and the ecosystem.
Public health risks
The following is a checklist of recommendations for water managers for assessing microbiological water quality (Roser and Ashbolt, 2004):
1) Sample numbers, variance and benchmarks
To account for inherent data variability and uncertainty, there is little value in taking only one or two measurements of pathogens or indicators for the overall assessment of water quality. Spot measurements, however, may be useful in certain situations (e.g. detection of event-based impacts) once the range in expected water quality has been identified. A guide to the range of key analytes by catchment type is provided in Chapter 10 of the Technical Report, and lognormal distributions are generally appropriate to describe their variation (i.e. described by the mean and standard deviation of log10-transformed data); and
The development of 'baseline' water quality status summaries is strongly recommended to provide a reference point for assessing the impact of events on a water body. The fully-protected catchments described in this report provide the ultimate baseline expected for Australian catchments.
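The lognormal summary recommended above amounts to reporting the mean and standard deviation of the log10-transformed counts. A short sketch with hypothetical indicator data (not from the report):

```python
import math
import statistics

# Hypothetical E. coli counts (MPN/100 mL) from repeat sampling of one site.
ecoli_mpn_per_100ml = [120, 45, 800, 230, 60, 1500, 90, 310]

# Summarise variation on the log10 scale, as recommended for lognormally
# distributed water-quality analytes.
logs = [math.log10(x) for x in ecoli_mpn_per_100ml]
log_mean = statistics.mean(logs)
log_sd = statistics.stdev(logs)

# Back-transforming the log10 mean gives the geometric mean in original units.
geo_mean = 10 ** log_mean
print(f"log10 mean = {log_mean:.2f}, log10 SD = {log_sd:.2f}, "
      f"geometric mean = {geo_mean:.0f} MPN/100 mL")
```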
2) Hydrologic information is an essential complement to all quality data.
a) In surface water catchments, measurement of flow/run-off is preferable to rainfall. The issue may be addressed in the following ways:
collect existing documentation on your system;
identify rainfall and hydrographic stations generating data for, or near, the catchment and acquire their records;
install an automatic flow monitoring device where river height and flow can be estimated from changes in water pressure/depth as needed (such in-stream monitors are relatively inexpensive and widely available);
groundwater hydrological data should always be complemented by an assessment of the likely direction of flow and locations of contaminant sources; and
develop contingency plans for collecting extreme event data.
(Note: no catchment studied showed any trend of increasing dilution with increasing event run-off volume. It must therefore be assumed, for the time being, that flux is likely to increase with flow and that concentration may also, so large, very large and extreme run-off events require special response plans.)
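The note above implies that, absent dilution, pathogen flux scales at least in proportion to flow. A minimal sketch with hypothetical concentrations and flows:

```python
# Instantaneous load (organisms/s) = concentration (org/L) x flow (L/s).
# If concentration does not decrease with run-off volume (as noted above),
# flux during an event grows at least in proportion to the flow increase.

def flux_per_second(concentration_per_l, flow_l_per_s):
    """Instantaneous pathogen flux through a monitoring station."""
    return concentration_per_l * flow_l_per_s

dry = flux_per_second(0.1, 200)       # 0.1 oocysts/L at 200 L/s baseflow
event = flux_per_second(5.0, 20_000)  # 5 oocysts/L at 20,000 L/s event flow
print(event / dry)  # 5000.0 -- flux is three to four orders of magnitude higher
```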
3) Indicators to use
a) E. coli and enterococci concentrations behaved similarly in surface waters and the two were generally present concurrently, so either can probably be used for detection of general faecal inputs. As enterococci are reputedly more robust and less likely to bloom, they may be preferable where there is no historical coliform data set for recreational water quality risk assessment. Most thermotolerant coliforms were confirmed as E. coli, indicating that such historic coliform data can probably be used for comparisons with current E. coli levels;
b) In contaminated surficial groundwater, enterococci appear to be more common/persistent than E. coli and are recommended as the preferred indicator; assaying larger volumes (e.g. at least 1 litre) is also recommended to improve sensitivity when assessing potable use;
c) C. perfringens appeared to be abundant where human sewage impacts were likely (> 100 cfu/100 mL) and has potential as an indicator of human impacts. As a consequence it is suggested as a follow-up test where faecal contamination is suspected. It is also suggested as an alternative to enterococci/E. coli counts for remote sites where conventional testing is impractical;
d) For identifying unequivocally whether a faecal source has a significant human component, faecal sterol analyses complemented by C. perfringens, sanitary surveys and flow load estimation are suggested;
e) Somatic coliphages appeared to be completely absent from protected waters and very abundant in cattle-impacted catchments, indicating they may be useful in detecting agricultural impacts on otherwise protected catchments. An additional use might be as surrogates of virus impacts (e.g. on water treatment effectiveness after heavy rainfall);
f) F-RNA coliphages were reported to be rare in surface waters. Their numbers in septic tank supernatant proved to be too low to measure transport beyond the unsaturated zone. They are not recommended for general use septic seepage studies, but are generally abundant in sewage and hence may be suitable for municipal sewage impact studies. Conversely, their low background numbers in aquifers indicate they may be useful as a virus tracking marker in deliberately spiked experiments; and
g) Despite its well-documented limitations, the H2S test appeared to detect groundwater contaminant gradients as effectively as conventional bacterial indicators. Thus it is recommended that, where access to conventional testing is limited, this assay may be used for risk assessment purposes, provided study designs take account of the source water environment more generally and the limitations of the assay.
4) Pathogens to sample
a) Cryptosporidium oocysts were frequently detected in the four unprotected catchments. Consideration should be given to measuring source water loads when assessing the adequacy of treatment barriers;
b) Giardia cysts seemed to be only a secondary concern in source waters and were typically present in much lower concentrations than the hardier oocysts of Cryptosporidium. Hence, their management should generally be seen as a secondary concern to that of Cryptosporidium;
c) Campylobacter species appeared to be universally present and easily detected, confirming the need for precautionary routine disinfection of all surface waters prior to potable use;
d) Human enteric viruses were not directly assayed (due to limited funds), however, the persistence of various types of bacteriophages (e.g. coliphages) may provide a useful model of their transport and removal from impacted waters; and
e) Faecal sterols provided load/concentration estimates of human sewage and herbivore faecal input so, given the ranges of pathogens in each source type, they may be used as an index for key pathogens.
5) Quality control
a) Any source water quality study for risk assessment or other purposes must address the issue of data quality in respect to water quality analytes and hydrology. Measurement of replicates, blanks and spikes (recovery) are recommended for all analytes. In the case of hydrologic data, flow measurements should be treated with caution regarding measurement accuracy and the precision of rating curves checked by on-site measurements at the start and end of the study. At the very least, stage heights should be validated and channel geometry assessed to evaluate the range of flows over which a rating curve is valid;
b) Recovery of Cryptosporidium and Giardia from environmental samples is highly variable. Accordingly, the routine use of recovery controls (e.g. BioBallsTM, ColorSeedTM) is strongly recommended for use with every sample to allow transparent adjustment of sample densities to "true concentrations";
c) At the very least, recovery from an environmental sample type should be checked using spiking prior to any assays being conducted;
d) Standardised and documented microbial data interpretation, oriented toward risk assessment support, is urgently needed. Such interpretation should be developed in a way that promotes innovation. To this end, it is proposed that:
Concise measurement interpretations be developed covering not only water quality analysis but also considering hydrology and environmental conditions (e.g. station location v. contamination sources);
Interpretation be framed as questions or hypothesis tests allowing a clear answer to be produced;
Interpretation be facilitated by a Decision Support System (DSS) which includes supporting graphics, references to primary studies, caveats etc. (as commenced in FaecalPrint);
Decision support systems be designed to be open to efficient addition by the expert and user community (e.g. on-line) and peer review (e.g. periodic expert, user feedback); and
Interpretation conclusions be based on 'weight of evidence' rather than single measurements.
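The recovery adjustment recommended in b) and c) above can be sketched as dividing the observed count by the recovery fraction measured with a spiked control. The counts below are hypothetical:

```python
# Adjust an observed (oo)cyst count for method recovery, estimated from a
# spiked control (e.g. ColorSeed) processed with the same sample.

def recovery_corrected(observed_count, spiked, recovered):
    """Estimate the 'true' count from an observed count and spike recovery."""
    recovery = recovered / spiked
    if recovery <= 0:
        raise ValueError("no recovery measured; result cannot be corrected")
    return observed_count / recovery

# 3 oocysts counted; 40 of 100 spiked control organisms were recovered.
print(recovery_corrected(3, spiked=100, recovered=40))  # 7.5 oocysts
```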
6) Interpretation of Source Water Concentration Data
a) Pathogen data
Fully protected forested catchments should not have > 1 Cryptosporidium oocyst per 10 L on average, even during small to moderate event run-off;
Highly impacted streams should have readily measurable protozoan pathogens even during dry weather;
C. perfringens densities greater than 100 cfu/100 mL probably indicate human inputs, especially during dry weather flow conditions, and densities greater than 1000 cfu/100 mL almost certainly do;
E. coli and enterococci should be present in similar concentrations; ratios greater than 100 or less than 0.01 probably indicate the occurrence of a coliform bloom;
Fully protected forested streams typically have median E. coli and enterococci densities less than 100 MPN/100 mL in dry weather and less than 1000 MPN/100 mL in "small" to "moderate" size run-off events;
Where the combined stanol concentration is greater than 50 ng/L and stanol ratios are high (> 0.5), human/herbivore contamination is almost certainly present. Where the coprostanol:24-ethylcoprostanol ratio is greater than 0.5, human inputs are likely;
Somatic coliphages are not found in fully protected forests but appear to be abundant (> 1000 pfu/100 mL) in agricultural catchment run-off;
Load calculations can be used to estimate the relative inputs in dry and wet weather periods;
Partially developed catchments seem characterised by striking decreases in water quality from dry to wet conditions; and
Faecal sterol loads can be used to estimate total faecal mass mobilised during an event and, where the source is clearly identifiable, allows non point source emission rates to be quantified in auditable form.
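Several of the interpretation rules above can be framed as the explicit "questions or hypothesis tests" recommended in 5) d). A sketch with illustrative function names (the thresholds come from the text, the code structure is an assumption):

```python
# Two of the interpretation rules above expressed as yes/no checks of the
# kind a DSS such as FaecalPrint might apply to a measurement.

def coliform_bloom_suspected(e_coli, enterococci):
    """Ratios > 100 or < 0.01 probably indicate a coliform bloom."""
    ratio = e_coli / enterococci
    return ratio > 100 or ratio < 0.01

def human_input_likely(c_perfringens_cfu_100ml, dry_weather=True):
    """> 100 cfu/100 mL probably, > 1000 almost certainly, human inputs."""
    if c_perfringens_cfu_100ml > 1000:
        return "almost certain"
    if c_perfringens_cfu_100ml > 100 and dry_weather:
        return "probable"
    return "not indicated"

print(coliform_bloom_suspected(e_coli=50_000, enterococci=200))  # True
print(human_input_likely(350))  # probable
```

Framing each rule as a function with a clear answer also supports the 'weight of evidence' approach: several such checks can be evaluated together rather than relying on a single measurement.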
7) Risk Assessment
a) Hazard Analysis and Critical Control Point (HACCP):
b) Quantitative Microbial Risk Assessment (QMRA):
The data collected in this project should be made widely available so as to promote the development of quantitative risk assessment for source waters;
Intermediate quantitative approaches to the use of risk assessment need to be developed while full QMRA is under development;
c) Data-poor source water supplies:
Data in this report provides water quality ranges which can be used in the short term to provide targets for assessment of current quality of a range of source waters;
The land-use × source water class × flow condition risk matrix is proposed as an interim risk evaluation tool. It can be further refined once standard quantitative risk assessment procedures have been developed;
Event monitoring technology is in need of further development with economics and practicality in mind. Some enhancements worth investigating include the use of stable recalcitrant indicators (e.g. bacteriophages; Clostridial spores), and rising (and falling) hydrograph samplers;
Event monitoring requires the development of locally adapted sampling protocols;
d) Source Identification, Tracking & Tracing and Environmental Forensics
Faecal sterols were the most promising technology of those considered. Data interpretation would be enhanced by concurrent indicator sampling and analysis of flow rates and paths; and
No tool/approach is conclusive in isolation, and studies should include a range of analysis technologies and sanitary assessments in a complementary fashion, as advocated in the FaecalPrint DSS.
8) Risk Management
a) Catchments
Urban/intensive agricultural development appeared to degrade water quality by factors of up to 1000. Put another way, development of 1% of a drinking water catchment can easily double the pathogen load downstream. Hence, total exclusion of development in currently fully protected catchments should be maintained;
b) Streams and Rivers
Streams and rivers, especially downstream of developed areas during events, are the surface sources which pose the highest risk to consumers. For all systems, wet weather extraction policies should be developed as a matter of course. Storage of raw water for use during high run-off periods should be considered;
The input pathogen loading rates seen during events can be > 1,000,000 times higher than during dry weather. Wet weather water extraction policies should be developed to minimise associated risks, noting the likelihood of reservoir short-circuiting;
Vulnerable aquifers should have their microbial status evaluated particularly with regard to the potential for virus inputs;
A national policy on the inoculation of tracer microorganisms is desirable to provide guidance for experimenters aiming to estimate risk in a sustainable and safe manner;
The sterol concentrations in poorly maintained (thick surface crust) systems were 100 to 1000 times higher than in the best systems, indicating that such systems pose a risk in the event of overflows that is disproportionate to their frequency of occurrence;
The South Australian STEDS system is an important resource for quantifying the levels of contaminants, including pathogens, likely to be emitted by septic tanks, and should be studied further;
Event response plans are needed if event phenomena (especially rare events) are to be better understood and proactively managed. By definition, rare events will be infrequent and will probably require the development of a national contingency response/study plan; and
Until such data are collected, it seems prudent to assume that contaminant concentrations in large events are comparable to or higher than those in small events, and to assess potential impacts accordingly.
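The development/load claim at the start of 8) can be checked with simple arithmetic. The 100x export factor below is illustrative (within the "up to 1000" range stated), and the function name is an assumption:

```python
# If developed land exports pathogens at ~100x the rate of protected land,
# converting just 1% of a catchment roughly doubles the total load, because
# the small developed fraction contributes as much as all the protected land.

def relative_load(developed_fraction, degradation_factor):
    """Catchment pathogen load relative to a fully protected baseline of 1.0."""
    protected = 1.0 - developed_fraction
    return protected * 1.0 + developed_fraction * degradation_factor

print(f"{relative_load(0.01, 100):.2f}")   # 1.99: 1% developed at 100x export
print(f"{relative_load(0.01, 1000):.2f}")  # 10.99: an order of magnitude at 1000x
```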