The simulation software estimates the true mean μ of the output distribution by summing all of the generated values x_i and dividing by the number of iterations n:
\hat{\mu} = \frac{1}{n}\displaystyle\sum_{i=1}^{n}x_i
If Monte Carlo simulation is used, each x_i is an independent, identically distributed (iid) random variable. The Central Limit Theorem then says that the distribution of the estimate of the true mean is given by:
\hat{\mu} = \text{Normal}\bigg(\mu,\frac{\sigma}{\sqrt n}\bigg)
where σ is the true standard deviation of the model's output.
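As a quick empirical check of this result, here is a minimal Python sketch. It assumes an illustrative Exponential output distribution whose true σ is known (all parameter values are chosen purely for illustration), re-estimates the mean from n iterations many times over, and compares the spread of those estimates with σ/√n:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative output model: Exponential with mean 10, so true mu = 10 and true sigma = 10.
true_sigma = 10.0
n = 1_000        # iterations per simulation run
runs = 2_000     # number of independent simulation runs

# mu_hat for each run: (1/n) * sum of the generated x_i
mu_hats = np.array([rng.exponential(scale=10.0, size=n).mean() for _ in range(runs)])

print("Observed sd of mu_hat:       ", mu_hats.std(ddof=1))
print("CLT prediction sigma/sqrt(n):", true_sigma / np.sqrt(n))
```

The two printed values should agree closely, which is exactly what the Central Limit Theorem predicts.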
Using a statistical principle called the pivotal method, we can rearrange the equation above to make it an equation for μ:
\hat{\mu} = \text{Normal}\bigg(\mu,\frac{\sigma}{\sqrt n}\bigg)
i.e. \hat{\mu} = \mu+\text{Normal}\bigg(0,\frac{\sigma}{\sqrt n}\bigg)
So, \mu = \hat{\mu}+\text{Normal}\bigg(0,\frac{\sigma}{\sqrt n}\bigg) = \text{Normal}\bigg(\hat{\mu},\frac{\sigma}{\sqrt n}\bigg) \qquad (1)
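In practice, Equation (1) is what lets us put a confidence interval around the simulation's estimate of the mean. The following minimal Python sketch, assuming an illustrative model and treating σ as known purely for illustration (estimating σ is discussed further down), shows the calculation:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

# One simulation run of n iterations from an illustrative model (true mean 10, true sd 10).
n = 5_000
x = rng.exponential(scale=10.0, size=n)

mu_hat = x.mean()
sigma = 10.0                   # treated as known here purely for illustration
alpha = 0.95                   # required level of confidence
z = norm.ppf((1 + alpha) / 2)  # two-sided standard Normal quantile

half_width = z * sigma / np.sqrt(n)
print(f"mu = {mu_hat:.3f} +/- {half_width:.3f} with {alpha:.0%} confidence")
```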
Figure 1 shows the cumulative form of the Normal distribution of Equation (1). Specifying the level of confidence α we require for our mean estimate translates into a relationship between the precision δ (the maximum acceptable difference between the estimated and true mean), σ, and n, as you can see from Figure 1:
Figure 1
More formally, this relationship is:
\delta = \frac{\sigma}{\sqrt n}\,\Phi^{-1}\bigg(\frac {1+\alpha}{2}\bigg) \qquad (2)
where Φ⁻¹(•) is the inverse of the cumulative distribution function for the standard Normal distribution. Rearranging (2), and recognizing that we want at least this accuracy, gives a minimum value for n:
n>{\Bigg(\frac{\sigma\,\Phi^{-1}\big(\frac{1+\alpha}{2}\big)}{\delta}\Bigg)}^2
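For example, suppose (purely for illustration) we require the estimate of the mean to be within δ = 0.5 of the true value with α = 95% confidence, and the output standard deviation is roughly σ ≈ 12. Then Φ⁻¹(0.975) ≈ 1.96 and:

n > \Bigg(\frac{12 \times 1.96}{0.5}\Bigg)^2 \approx 2213

so at least 2,213 iterations would be needed.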
We have one problem left: we don't know the true standard deviation σ of the output. It turns out that we can estimate it perfectly well for our purposes by taking the sample standard deviation of the first few (say 50) iterations.
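A minimal Python sketch of that approach, assuming a hypothetical model() function that returns a single iteration's output and illustrative values of δ and α: run a short pilot of 50 iterations, use its sample standard deviation as the estimate of σ, and apply the formula above.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

def model():
    # Hypothetical stand-in for a single iteration of the real simulation model.
    return rng.lognormal(mean=2.0, sigma=0.5)

delta = 0.05   # required precision (half-width of the confidence interval for the mean)
alpha = 0.95   # required level of confidence

# Pilot run: estimate sigma from the first 50 iterations.
pilot = np.array([model() for _ in range(50)])
sigma_hat = pilot.std(ddof=1)

# Minimum number of iterations from the formula above.
z = norm.ppf((1 + alpha) / 2)
n_required = int(np.ceil((sigma_hat * z / delta) ** 2))
print(f"Estimated sigma = {sigma_hat:.3f} -> at least {n_required} iterations")
```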
There are two methods you can use to determine how many iterations are needed to achieve sufficient accuracy in the estimate of the mean:
Method 1 - Calculation
The model Mean Accuracy shows how you can perform this calculation continuously as the simulation runs.
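A sketch of what such a continuous check might look like, again assuming a hypothetical model() function and illustrative values of δ and α: after each batch of iterations, recompute the confidence-interval half-width from Equation (2) using the running sample standard deviation, and stop once it falls below δ.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)

def model():
    # Hypothetical stand-in for a single iteration of the real simulation model.
    return rng.lognormal(mean=2.0, sigma=0.5)

delta = 0.05         # required precision of the mean estimate
alpha = 0.95         # required level of confidence
check_every = 100    # re-check the precision after each batch of iterations
z = norm.ppf((1 + alpha) / 2)

outputs = []
while True:
    outputs.extend(model() for _ in range(check_every))
    half_width = z * np.std(outputs, ddof=1) / np.sqrt(len(outputs))
    if half_width < delta:
        break

print(f"Stopped after {len(outputs)} iterations: "
      f"mean = {np.mean(outputs):.3f} +/- {half_width:.3f}")
```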
Method 2 - Control feature
As illustrated in the model Mean Accuracy, we can also use the control feature. This feature works in the same way as described above: it periodically checks whether the confidence interval is narrower than the specified precision and, when that precision is reached, automatically stops the simulation. Most simulation software packages have this feature; for example, Crystal Ball calls it the Precision Control feature, and @Risk offers it under the name Convergence.
Links to the software-specific Mean Accuracy models are provided here: