Experts will sometimes produce profoundly different probability distribution estimates of a parameter. This is usually because the experts have estimated different things, made differing assumptions, or have different sets of information on which to base their opinions. However, occasionally two or more experts simply genuinely disagree. How should the analyst approach the problem?

The difference in opinion is another source of uncertainty, so it should not be discarded by, for example, taking the average of the opinions, or the largest or smallest. Instead, one needs to create a composite distribution that reflects the range and emphasis of each opinion and our confidence in each estimator.

The technique we employ is to use a Discrete distribution, where the {*xi*} are the expert opinions and the {*pi*} are the weights given to each opinion according to the emphasis one wishes to place on it. In this case, however, the inputs to the Discrete distribution change every iteration (each *x* is itself a sample, for example from a PERT distribution), so the Discrete distribution has to update its inputs every iteration. This is also called dynamic referencing. Support for this feature differs across simulation software packages. In Crystal Ball, for example, the CB.Functions have the disadvantage that they are not well described in the software's manual, their output is not collected, they cannot be correlated, and they are not included in the software's output reports and sensitivity analysis.
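The dynamic-referencing idea can be sketched in a few lines of Python. This is a minimal illustration, not any package's implementation: the PERT parameters below are invented for the example, and the Discrete step is simply a weighted random pick among values that are re-sampled on every iteration.

```python
import numpy as np

rng = np.random.default_rng(1)

def pert(rng, lo, mode, hi):
    """Sample a PERT(lo, mode, hi) value via the scaled Beta distribution."""
    a = 1 + 4 * (mode - lo) / (hi - lo)
    b = 1 + 4 * (hi - mode) / (hi - lo)
    return lo + (hi - lo) * rng.beta(a, b)

def combined_opinion(rng, n_iter, weights):
    """Discrete({x_i}, {p_i}) where each x_i is re-sampled every
    iteration (dynamic referencing)."""
    p = np.asarray(weights, dtype=float)
    p /= p.sum()                        # normalize the weights
    out = np.empty(n_iter)
    for i in range(n_iter):
        # re-sample every expert's distribution this iteration
        x = [pert(rng, 3, 5, 9),        # expert A (illustrative parameters)
             pert(rng, 4, 6, 10),       # expert B
             pert(rng, 3, 4, 8)]        # expert C
        out[i] = rng.choice(x, p=p)     # keep exactly one opinion
    return out

# Expert A given twice the emphasis of the others
samples = combined_opinion(rng, 10_000, weights=[2, 1, 1])
```

Because only one expert's value survives each iteration, the combined output covers the full range spanned by all three opinions rather than shrinking toward their common center.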

The figure below illustrates example results from combining three differing opinions but where expert A (brown line) is given twice the emphasis of the others due to that person's greater experience.

Let's consider another example. Imagine that we have an important uncertain value in our model and we ask three experts to estimate it. All three have the same information, which has been widely disseminated. They discuss the available information together, but then each estimates the parameter separately rather than sitting down as a group and deciding what value it should take. So we have three different estimates: two are PERT distributions and one is a general distribution with a customized shape. These three distributions are plotted together below:

In the Combining Opinions model, you can see the difference; we have decided to weight expert B twice as heavily as experts A and C. We could have given them equal weightings, but this example shows what to do when we have more faith in B, perhaps because s/he is closer to the project or simply more experienced. In the model we want to look at the combined estimate, which uses a Discrete distribution with the three opinions and three weights.

An often-used but incorrect way of combining expert opinions is to multiply each opinion by its weight, sum the results, and divide by the sum of the weights to normalize.

The figure below shows the results of the two ways of modeling the combined expert opinion (correct and incorrect):

The incorrect way is wrong because the formula calculates a weighted average of the three opinions: it will always pick a value near the center and will not give the degree of spread we want to recognize by combining the three opinions with the Discrete distribution. We want that spread because at least one expert (in this example, all of them) believes the true value could be as low as 3, and at least one believes the maximum possible value for the parameter is 10. As the graph above shows, the incorrect way of modeling can never produce these values.
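The narrowing of spread can be checked numerically. This sketch uses invented PERT parameters as stand-ins for the three opinions (the example in the text includes a custom-shaped distribution, which is omitted here for brevity) and compares the Discrete pick against the weighted average:

```python
import numpy as np

rng = np.random.default_rng(7)
N = 100_000

def pert(rng, lo, mode, hi, size=None):
    """Sample PERT(lo, mode, hi) via the scaled Beta distribution."""
    a = 1 + 4 * (mode - lo) / (hi - lo)
    b = 1 + 4 * (hi - mode) / (hi - lo)
    return lo + (hi - lo) * rng.beta(a, b, size=size)

# Illustrative stand-ins for the three expert opinions
xa = pert(rng, 3, 5, 9, N)              # expert A
xb = pert(rng, 4, 6, 10, N)             # expert B
xc = pert(rng, 3, 4, 8, N)              # expert C
w = np.array([1.0, 2.0, 1.0])           # expert B weighted double

# Correct: each iteration keeps exactly one expert's value
pick = rng.choice(3, size=N, p=w / w.sum())
correct = np.choose(pick, [xa, xb, xc])

# Incorrect: weighted average of the three values in every iteration
incorrect = (w[0] * xa + w[1] * xb + w[2] * xc) / w.sum()

print(correct.std(), incorrect.std())   # the average has much less spread
print(correct.min(), correct.max())     # the Discrete pick spans ~3 to ~10
print(incorrect.min(), incorrect.max()) # the average never reaches the extremes
```

The weighted average can never fall below the weighted mean of the three minima (3.5 with these weights and parameters), whereas the Discrete approach reaches down to 3 and up toward 10, as the text argues it should.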

The links to the Combining Opinions software specific models are provided here: