Before looking at the techniques for eliciting distributions from an expert, it is very useful to have an understanding of the biases that commonly occur in subjective estimation. The analyst should bear in mind the following heuristics that the expert may employ when attempting to provide subjective estimates, as they can result in systematic bias and errors. These biases are explained in considerably more detail in Hertz & Thomas (1983) and in Morgan & Henrion (1990): the latter includes a very comprehensive list of references. Dror (2020) also provides additional perspectives and relevant references.
Availability
This is where the expert uses his or her recollection of past occurrences of an event to provide an estimate. The accuracy of the estimate is dictated by the expert's ability to remember past occurrences of the event, or by how easily s/he can imagine the event occurring. This may work very well if the event is a regular part of the expert's life, e.g. how much s/he spends on petrol. It also works well if the event is something that sticks in the expert's mind, e.g. the probability of having a flat tire. On the other hand, it can produce poor estimates if it is difficult for the expert to remember past occurrences of the event: for example, the expert may not be able to confidently estimate the number of people s/he passed in the street that day, since s/he would have had no reason to note each passerby. Availability can also produce overestimates of frequency if the expert remembers past occurrences very clearly because of the impact they had. For example, if a computer manager was asked how often the mainframe had crashed in the last two years, s/he might well overestimate the frequency: the very clarity of the recollection ("it seems like only yesterday") can lead him/her to include crashes that happened well over two years ago.
The availability heuristic is also affected by the degree to which we are exposed to information. For example, one might consider the chance of dying in a driving accident to be much higher than that of dying from stomach cancer, because car crashes are always being reported in the media and stomach cancer fatalities are not. On the other hand, an older person may have had several acquaintances die from stomach cancer and would therefore offer the reverse opinion.
Representativeness
One type of representativeness bias is the erroneous belief that the large-scale nature of uncertainty is reflected in small-scale sampling. For example, in the National Lottery, many would say that one had no chance of winning by selecting the consecutive numbers 16, 17, 18, 19, 20 and 21. The winning numbers are randomly picked each week, so it is believed that they should also exhibit a random-looking pattern, e.g. 3, 11, 15, 21, 29 and 41. Of course, both sets of numbers are actually equally likely.
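The equivalence of the two ticket choices is easy to verify: any specific selection of six numbers has probability 1/C(49, 6), whatever its pattern. A minimal sketch, assuming a 6-from-49 draw (the function name is illustrative):

```python
from math import comb

def combo_probability(numbers, pool=49, picks=6):
    """Probability that a specific selection of `picks` distinct numbers
    is drawn from `pool` numbers: every selection is equally likely."""
    assert len(set(numbers)) == picks
    return 1 / comb(pool, picks)

consecutive = combo_probability({16, 17, 18, 19, 20, 21})
scattered = combo_probability({3, 11, 15, 21, 29, 41})
# Both probabilities are identical: 1 in 13,983,816
```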
A second type of representativeness bias is where people concentrate on an enticing detail of the problem and forget the overall picture. In a frequently cited paper by Kahneman and Tversky, described in Morgan and Henrion (1990), subjects in an experiment were asked to determine the probability of a person being an engineer based on a written description of that person. If they were given a bland description that gave no clue to the person's profession, the answer given was usually 50:50, despite being told beforehand that, of the 100 described people, 70 were lawyers and 30 engineers. However, when the subjects were asked what probability they would give if they had no description of the person, they said 30%, illustrating that they understood how to use the information but had just ignored it.
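The base-rate logic the subjects ignored can be written down directly with Bayes' rule in odds form: an uninformative description has a likelihood ratio of 1, so the posterior should equal the 30% prior. A small sketch (the function name is my own):

```python
def posterior_engineer(prior, likelihood_ratio=1.0):
    """Posterior P(engineer) via Bayes' rule in odds form:
    posterior odds = prior odds * likelihood ratio.
    A bland description carries no evidence, so its likelihood ratio is 1."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# With 30 engineers among the 100 described people and no informative
# description, the correct answer stays at the 30% base rate, not 50%.
p = posterior_engineer(0.30)
```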
Adjustment and Anchoring
This is probably the most important heuristic of the three. An individual will usually begin the estimate of the distribution of uncertainty of a model parameter with a single value (usually the most likely value) and then make adjustments for its minimum and maximum from that first value. The problem is that these adjustments are rarely sufficient to encompass the range of values that could actually occur: the estimator appears to be "anchored" to the first estimated value. This is certainly one source of over-confidence and can have a dramatic impact on the validity of a risk analysis model.
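The impact of anchoring on a three-point estimate can be quantified: a triangular distribution's spread depends strongly on the elicited minimum and maximum, so an anchored, too-narrow range understates the uncertainty. A sketch using the closed-form triangular variance (the example figures are invented for illustration):

```python
def tri_std(a, m, b):
    """Standard deviation of a Triangular(min=a, mode=m, max=b) distribution,
    from the closed-form variance (a^2 + m^2 + b^2 - a*m - a*b - m*b) / 18."""
    var = (a * a + m * m + b * b - a * m - a * b - m * b) / 18
    return var ** 0.5

# An expert anchored on a most likely value of 100 may offer a narrow range...
anchored = tri_std(90, 100, 115)
# ...when the values that could actually occur span much wider:
realistic = tri_std(60, 100, 180)
# The anchored estimate understates the spread considerably.
```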
Other sources of estimating inaccuracy
There are other elements that may affect the correct assessment of uncertainty and the analyst should be aware of them in order to avoid unnecessary errors.
Not an expert
Sometimes a person can be wrongly nominated as an expert for organizational or political reasons rather than actual technical expertise.
Culture of the organization
The working environment can affect the accuracy of people's estimates. For example, sales people will often provide unduly optimistic estimates of future sales because of the optimistic culture they work in. Bench scientists tend to focus on much finer detail, whereas scientists working with populations tend to think in more probabilistic terms.
Vested interest
Sometimes the expert will have a vested interest in the values that are submitted to a model, potentially biasing his/her estimates. Years ago, we performed an expert elicitation to rank several hazards using conjoint analysis, a method that presents full scenarios rather than individual attributes. The idea is that, by presenting full scenarios, one obtains a more balanced ranking of the attributes (in this case, hazards). The problem was that the experts selected were all executives who already knew that they wanted to focus primarily on a single hazard. The result was that they gave the maximum score (10) to any scenario including the hazard they cared about, and a score of 0 to any scenario excluding it. Evidently, that information was ultimately not used for the risk assessment and was a waste of resources, but it provided a great example for this section!
Unwillingness to consider extremes
The expert will frequently find it difficult or be unwilling to envisage circumstances that would cause a variable to be extremely low or high. The analyst will often have to encourage the development of such extreme scenarios in order to elicit an opinion that realistically covers the entire possible range. This can be done by the analyst dreaming up some examples of extreme circumstances and discussing them with the expert.
Eagerness to say the right thing
Occasionally, the interviewee will be trying to provide the answer s/he thinks the analyst wants to hear. For this reason, it is important to avoid questions that are leading and never to offer a value for the expert to comment on.
Units used in the estimation
People frequently confuse the magnitudes of different units of measurement. For example, some experts may be used to thinking of distances in miles and liquid volumes in (UK) gallons and pints. If the model uses SI units, the analyst should let the expert estimate in the units with which s/he is comfortable and convert the figures afterwards.
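This conversion step is trivial to automate, which removes one source of arithmetic slips. A minimal sketch (the conversion factors are the exact standard definitions; the function and unit names are my own):

```python
# Exact conversion factors into SI-friendly units
TO_SI = {
    "mile": 1.609344,       # kilometres per mile (exact)
    "uk_gallon": 4.54609,   # litres per UK gallon (exact)
    "uk_pint": 0.56826125,  # litres per UK pint (exact: one eighth of a gallon)
}

def to_si(value, unit):
    """Convert an expert's estimate from a familiar unit into SI-friendly units."""
    return value * TO_SI[unit]

distance_km = to_si(250, "mile")        # expert estimated 250 miles
volume_litres = to_si(40, "uk_gallon")  # expert estimated 40 gallons
```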
Expert too busy
People always seem to be busy and under pressure, and a risk analyst coming to ask a lot of difficult questions may not be very welcome. The expert may act brusquely or pay the whole process lip-service. Obvious symptoms are when the expert offers over-simplistic estimates like X +/- Y%, or minimum, most likely and maximum values that are equally spaced, for multiple or all estimated variables.
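Such symptoms can even be screened for mechanically: perfectly symmetric (minimum, most likely, maximum) triples across many variables are a warning sign worth a follow-up conversation. A hypothetical helper sketching this check:

```python
def flag_symmetric_estimates(estimates, rel_tol=1e-6):
    """Flag variables whose (min, most likely, max) triple is exactly
    symmetric -- a common signature of unconsidered X +/- Y% answers."""
    flagged = []
    for name, (low, mode, high) in estimates.items():
        # Symmetric means the distance mode-low equals high-mode
        if abs((mode - low) - (high - mode)) <= rel_tol * max(1.0, high - low):
            flagged.append(name)
    return flagged

estimates = {
    "sales_growth": (8.0, 10.0, 12.0),      # suspiciously symmetric
    "repair_cost": (500.0, 800.0, 2000.0),  # plausibly skewed
}
# flag_symmetric_estimates(estimates) -> ["sales_growth"]
```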
Belief that the expert should be quite certain
It may be perceived by the expert that assigning a large uncertainty to a parameter would indicate a lack of knowledge and thereby undermine his/her reputation. The expert may need to be reassured that this is not the case: an expert should have a more precise understanding of a parameter's true uncertainty and may, in fact, appreciate that the uncertainty is greater than a lay person would have expected.