Common error 1:
When we first start thinking about risk, it is natural to want to convert the impact of a risk into a single number. For example, we might consider that there is a 20% chance of losing a contract, which would result in a loss of income of $100,000. Putting the two together, a person might reason that the risk amounts to some $20,000 (i.e. 20% * $100,000). This $20,000 figure is known as the "expected value" of the variable: the probability-weighted average of all possible outcomes, and one of several measures of the location of a distribution. (For a dataset, the mean is the arithmetic average of all values; for a probability distribution, it is the sum of all possible values weighted by their probabilities.) Here the two outcomes are $100,000 with 20% probability and $0 with 80% probability:
Mean risk (expected value) = 0.2*$100,000 + 0.8*$0 = $20,000
Figure 1 shows the probability distribution for this risk and the position of the expected value.
Figure 1: Distribution for risk with 20% probability and $100,000 impact.
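The arithmetic can be checked with a quick simulation. The sketch below uses plain NumPy (it is not one of the models referenced in this article): it computes the expected value directly and then confirms it as the long-run average of simulated outcomes.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Risk: 20% chance of a $100,000 loss, otherwise no loss.
p_occur = 0.20
impact = 100_000

# Exact expected value: probability-weighted average of the two outcomes.
expected = p_occur * impact + (1 - p_occur) * 0
print(expected)  # 20000.0

# The same figure emerges as the long-run average of simulated outcomes.
losses = np.where(rng.random(1_000_000) < p_occur, impact, 0)
print(losses.mean())  # close to 20,000
```

The simulated mean converges on the expected value, but note that no single simulated outcome is ever $20,000: each iteration produces either $0 or $100,000, which is exactly the information the expected value throws away.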
Calculating the expected values of risks might also seem a reasonable and simple method to compare risks. For example, in the following table, risks A to J are ranked in descending order of expected cost:
Table: Risks A to J, showing for each risk its impact if it occurs ($000) and its expected impact ($000), together with the total expected impact.
If a loss of $500,000 or more would ruin your company, you may well rank the risks differently: risks C, D, I and, to a lesser extent, J pose a survival threat to your company. Note also that you might value risk C as no more severe than risk D, because if either of them occurs your company has gone bust.
On the other hand, if risk A occurs, giving you a loss of $400k, you are precariously close to ruin: it would then take just one more of the risks to occur, any except F or H alone (although F and H occurring together would also be enough), and you've gone bust.
Looking at the sum of the expected values gives you no appreciation of how close you are to ruin. How would you calculate the probability of ruin? The model solution can be found at Probability of Ruin. Figure 2 plots the distribution of possible outcomes for this set of risks.
Figure 2: Probability distribution of total impact from risks A to J
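A Monte Carlo model of the probability of ruin can be sketched as below. The probabilities and impacts are hypothetical stand-ins, since the values for risks A to J are not reproduced here; the approach, not the figures, is the point.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical probabilities and impacts ($000) for risks A to J;
# the figures in the original table will differ.
probs   = np.array([0.50, 0.40, 0.10, 0.10, 0.20, 0.60, 0.05, 0.70, 0.08, 0.15])
impacts = np.array([400,  150,  600,  600,  200,   50,  250,   60,  700,  450])

ruin_threshold = 500  # a total loss of $500k or more ruins the company

n_sims = 100_000
# Each iteration, every risk independently either occurs (contributing
# its full impact) or does not occur (contributing nothing).
occurs = rng.random((n_sims, len(probs))) < probs
total_loss = (occurs * impacts).sum(axis=1)

# Probability of ruin = fraction of iterations where losses reach the threshold.
p_ruin = (total_loss >= ruin_threshold).mean()
print(p_ruin)
```

The histogram of `total_loss` reproduces the kind of distribution shown in Figure 2, and the probability of ruin is simply the tail mass at or beyond the threshold, something no sum of expected values can reveal.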
The links to the software-specific Probability of Ruin models are provided here:
Why calculating the expected value is wrong
From a risk analysis point of view, representing the impact of a risk by its expected value removes the uncertainty (we can no longer see the breadth of possible outcomes), which is a fundamental reason for doing risk analysis in the first place. You might think that people running Monte Carlo simulations would be more attuned to describing risks with distributions rather than single values, yet this remains one of the most common errors.
Another, slightly more disguised example of the same error occurs when the impact itself is uncertain. For example, imagine that there will be an election this year and that two parties are running: the Socialist Democrats Party and the Democratic Socialists Party. The SDP are currently in power and have vowed to keep the corporate tax rate at 17% if they win the election; political analysts reckon they have about a 65% chance of staying in power. The DSP promise to lower the corporate tax rate by one to four percentage points, most probably by three. We might choose to express next year's corporate tax rate as:
Rate = 0.35*PERT(13%,14%,16%) + 0.65*17%
Simulating this formula produces a probability distribution that might give us some comfort that we have assigned uncertainty properly to this parameter. However, a correct model would draw a value of 17% with probability 0.65 and a random value from the PERT distribution with probability 0.35. How could you construct that model? The model Simulating a Variable Risk is an example. The correct model has considerably greater spread: the two results are compared in Figure 3.
Figure 3: Comparison of correct and incorrect modeling
of corporate tax rate for the SDP/DSP example.
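The two modeling choices can be compared with a short simulation. This is a sketch, not the Simulating a Variable Risk model itself: the `pert` helper is the standard Beta-based construction of the PERT distribution (shape parameter 4), and it assumes the incorrect model blends the two outcomes on every iteration while the correct model picks exactly one outcome per iteration.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n = 100_000

def pert(rng, mn, mode, mx, size):
    """Sample a (lambda=4) PERT distribution via a scaled Beta."""
    a = 1 + 4 * (mode - mn) / (mx - mn)
    b = 1 + 4 * (mx - mode) / (mx - mn)
    return mn + (mx - mn) * rng.beta(a, b, size)

dsp_rate = pert(rng, 0.13, 0.14, 0.16, n)  # rate if the DSP win

# Incorrect: a fixed blend of the two outcomes on every iteration.
wrong = 0.35 * dsp_rate + 0.65 * 0.17

# Correct: each iteration picks ONE outcome -- 17% with probability 0.65
# (SDP stay in power), otherwise a draw from the PERT distribution.
sdp_wins = rng.random(n) < 0.65
right = np.where(sdp_wins, 0.17, dsp_rate)

# The means agree, but the correct model has far greater spread.
print(wrong.mean(), right.mean())
print(wrong.std(),  right.std())
```

Both models have the same expected value, which is precisely why the error is so easy to miss; only the spread, the quantity risk analysis exists to capture, differs between them.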
The links to the software-specific Simulating a Variable Risk models are provided here: