The data you use to quantify parameters in your model may come from a variety of sources: scientific experiments; surveys; computer databases; literature searches; even computer simulations. Before committing to using a data set, you should convince yourself of the quality of the data. We offer the checklist below to help you:
Is the past relevant?
If the data you are using come from historic experience, you implicitly assume that the world in which the data were observed is the same as the world now. Are there any tests you can perform to check whether that assumption holds? If you know that things have changed, perhaps you can estimate a correction factor. For example, suppose you are running a railway network and are looking at historic delay events (when trains are held up). If there will be a strong increase in the number of services you offer next year, or if you plan to work on a main line for a significant period, historic rates would need to be modified. Sometimes there is a proportional relationship you can use, but often the relationship is non-linear, in which case historic data may help you estimate it.
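The kind of correction described above can be sketched numerically. All the figures and the exponent below are invented for illustration; in practice the exponent would be fitted from several years of historic (services, delays) pairs.

```python
# Hypothetical sketch of correcting a historic delay rate for a planned
# increase in services. All numbers here are invented for illustration.
historic_delays = 480        # delay events observed last year
historic_services = 200      # trains per day when the data were collected
planned_services = 260       # trains per day planned for next year

# Simple proportional correction: delays scale linearly with service volume.
proportional = historic_delays * planned_services / historic_services

# Non-linear alternative: congestion often makes delays grow faster than
# linearly; an exponent > 1 would be estimated from historic data pairs.
exponent = 1.4               # assumed, not derived from real data
nonlinear = historic_delays * (planned_services / historic_services) ** exponent

print(round(proportional), round(nonlinear))
```

The gap between the two projections shows why it matters whether the relationship really is proportional before reusing historic rates.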
Are the data a representative and random sample?
"Representative" and "random" both need to be thought about. Samples can be representative, but also deliberately non-random. For example, if your country has 10 departments, you might deliberately survey 100 people in each department, irrespective of its size. Samples can also be random but not representative: for example, a random survey within one department may well not be representative of the whole population.
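The contrast between the two sampling designs can be made concrete. The department sizes below are made up; the sketch compares a fixed quota of 100 per department (the deliberately non-random design above) with a proportional allocation that reflects each department's share of the population.

```python
# Hypothetical populations of 10 departments (sizes are invented).
dept_sizes = [900, 50, 300, 120, 700, 60, 400, 220, 150, 100]
total = sum(dept_sizes)

# Fixed quota: 100 interviews per department regardless of size.
# Representative of each department, but small departments are
# heavily over-weighted relative to the whole population.
fixed_quota = [100] * len(dept_sizes)

# Proportional allocation of the same 1000 interviews: sample sizes
# now reflect each department's share of the total population.
proportional = [round(1000 * s / total) for s in dept_sizes]

print(fixed_quota)
print(proportional)
```

Under the fixed quota, a department of 50 people gets the same weight as one of 900, so any population-level estimate would need to be reweighted afterwards.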
Are the data relevant to the current problem?
A problem we often come across is that people have worked so hard to collect a set of data that they are quite determined to use it. You need to be objective about what model can be constructed to inform decision-makers, and that may not use the hard-won data. In some situations, there can be a lot of pressure to use certain types of data to validate, for example, the existence of a research program.
Is the parameter independent of others in the model?
If you are using data to estimate a model parameter, you will need to ask whether that parameter is independent of the others in your model, and whether you could test that independence by analyzing the data you have, or by collecting more.
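One simple first check, when two parameters were estimated from paired observations, is their correlation. The paired values below are invented; a strong correlation does not prove dependence, but it is a clear warning that treating the parameters as independent may be wrong.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Made-up paired observations of two model parameters
# (for example, demand volume and achievable unit price).
demand = [10, 12, 15, 18, 22, 25, 30]
price = [9.5, 9.1, 8.8, 8.2, 7.9, 7.1, 6.5]

r = pearson(demand, price)
print(round(r, 2))  # strongly negative: independence assumption is suspect
```

A model that sampled these two parameters independently would generate many demand/price combinations the data say are implausible.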
What quality checks can you do?
This is a very broad question, but the answers are often fairly apparent once you pose it. Can you think of ways in which the data could be inaccurate? Rounding errors, and biases towards round numbers like 10, 20 or 30 rather than, say, 17, can be important.
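Round-number bias of this kind ("heaping") is easy to screen for: if values were reported honestly, only about 20% should end in 0 or 5 by chance. The reported values below are invented to show the pattern.

```python
from collections import Counter

# Hypothetical reported quantities; people often "heap" on round numbers.
reported = [20, 20, 25, 30, 30, 30, 17, 40, 40, 50, 50, 50, 50, 23, 60, 30, 20]

last_digits = Counter(x % 10 for x in reported)
share_round = sum(c for d, c in last_digits.items() if d in (0, 5)) / len(reported)

# About 20% would end in 0 or 5 by chance; a much larger share
# suggests the values were estimated rather than measured.
print(f"{share_round:.0%} of values end in 0 or 5")
```

A share far above 20% does not tell you the true values, but it tells you the data are rougher than they look.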
Are there incentives for the correct or incorrect reporting of data?
In our experience, this is the most important question because it is quite pervasive and the most difficult to recognize. We offer some examples from our experience; they might help you develop the required cynicism:
A microbiological lab that kept reporting its findings even though they knew their method was flawed;
National statistics of oil imports that showed a 24,000-ton import in one cargo of an oil used in electronics, because the ship's captain had some 120 different oil categories to choose from when making his customs declaration;
A survey of doctors who were asked how often they requested a stool sample from patients exhibiting diarrhea. Their reported rate was about three times higher than that given by randomly surveyed people who had suffered diarrhea and gone to the doctor. Either the doctors inflated the figure because they knew they should have asked, or the patients lowered it because they were embarrassed not to have complied;
Companies (and researchers) that submitted to a regulator (and for publication) only data from experiments that supported their claim, and not those that did not;
A peer-reviewed, published research paper that discussed fish that were tested (in their hundreds) but, strangely, quoted only a fraction (not the numerator and denominator) of those that responded to the treatment. It turned out that all but three had died because the fish farm used in the study went into receivership, and the fish were not fed for weeks. The paper was published to complete a PhD thesis (so time pressure was involved) and featured a world-renowned fish pathologist (the supervisor) as an author;
A company that claimed difficulty in finding/collating last year's (really bad) sales data for its operations until the purchaser was too committed to back out;
A fellow risk analyst who quoted parts of papers (even parts of sentences) that supported his client's position, and produced bogus but highly complicated models, then presented the results of those models as if they were fact rather than the result of (biased) conjecture;
Not our own experience: employees in a nuclear fuels reprocessing company copy/pasted columns of measurements rather than go through the tedious practice of measuring each reprocessed batch;
A diamond mining company that got low-paid workers to go deep into the mine shaft and bring back rock samples, to help better correlate the geology with the models. The workers broke off rock from near the entrance rather than make the long trip;
Students doing surveys who were paid for each completed form. They gathered friends, had some beers, and filled out a few dozen each: much better than standing in the rain, and more money.
Systematic and non-systematic errors
The collected data will at times have measurement errors that add another level of uncertainty. In most scientific data collection, the random error is well understood and can be quantified, usually by simply repeating the same measurement and reviewing the distribution of results. Such random errors are described as non-systematic. Systematic errors, on the other hand, mean that the values of a measurement deviate from the true value in a systematic fashion, consistently either over- or under-estimating the true value. This type of error is often very difficult to identify and quantify. You may be able to estimate the degree of suspected systematic measurement error by comparing with measurements using another technique that is known (or believed) to have little or no systematic error.
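The distinction can be illustrated with a small simulation. The instrument, its noise level and its bias below are all invented: repeated measurements reveal the random (non-systematic) error through their spread, but a systematic offset survives any number of repeats and only shows up against a second, trusted method.

```python
import random
import statistics

random.seed(1)
TRUE_VALUE = 100.0

def measure(bias=0.0, noise_sd=2.0):
    """Hypothetical instrument: true value plus offset plus random noise."""
    return TRUE_VALUE + bias + random.gauss(0, noise_sd)

# Non-systematic (random) error: repeating the measurement and looking
# at the spread of results recovers the size of the noise.
repeats = [measure() for _ in range(1000)]
print(round(statistics.stdev(repeats), 1))

# Systematic error: these repeats cluster just as tightly, but around
# the WRONG value, so repetition alone cannot reveal the bias. Comparing
# against an unbiased method exposes it.
biased = [measure(bias=5.0) for _ in range(1000)]
print(round(statistics.mean(biased) - statistics.mean(repeats), 1))
```

The first number estimates the random error; the second recovers the systematic offset, but only because an (assumed) unbiased measurement series was available for comparison.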