This depends on the data. There are at least two aspects to consider:
How reliable are the data (the QA/QC of the programs)?
What is the design (implicit or explicit) that will determine the appropriate analysis?
If you have little or no knowledge of the reliability of the data, you will have to be conservative in your interpretation.
There may not have been an explicit design for the program, but you need to determine what sort of design can be extracted from it. Look at:
- the relationship between sites (randomization, replication, before/after comparisons, controls or reference sites);
- the timing and frequency of the tests;
- the ability to stratify the data (e.g. by rainfall in the preceding 24 hours).

Do this before looking at the data; don't play with the data to see whether you get something interesting out of them and then look for a post-hoc justification! You might get interesting numbers that are completely invalid because they are not justified by the design of the program. Again, the best you might get is a hypothesis that you can then test in a properly designed study.
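As a minimal sketch of the stratification idea, the example below splits hypothetical monitoring samples into "wet" and "dry" strata by rainfall in the preceding 24 hours and summarizes each stratum separately. The column names, the 10 mm threshold, and the data are all assumptions for illustration, not part of any real program.

```python
# Hypothetical sketch: stratify samples by antecedent 24-hour rainfall,
# then summarize a response variable within each stratum.
import pandas as pd

# Made-up monitoring data; column names and values are assumptions.
samples = pd.DataFrame({
    "site": ["A", "A", "B", "B", "A", "B"],
    "rain_24h_mm": [0.0, 12.5, 3.1, 25.0, 0.5, 0.0],
    "turbidity_ntu": [4.2, 18.7, 6.0, 30.1, 3.9, 5.5],
})

# Define the strata from the design variable (rainfall) before
# looking at the outcome variable -- not the other way around.
samples["stratum"] = samples["rain_24h_mm"].apply(
    lambda mm: "wet" if mm >= 10.0 else "dry"
)

# Mean turbidity within each stratum.
summary = samples.groupby("stratum")["turbidity_ntu"].mean()
print(summary)
```

The key point is that the stratifying rule (here, the 10 mm cut-off) is chosen from the design and the subject matter in advance, not tuned after inspecting the outcomes.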