Distributions can be used to model randomness (R), inter-individual variability (V), and parameter uncertainty (U). Often models combine these uses. For example, when predicting the number of successes *s* from *n* Binomial trials, one could model the uncertainty in the probability of success *p* based on observational data. The quantity *s* would be modeled as randomness using the Binomial distribution, and *p* using a distribution of parameter uncertainty such as the Beta distribution.
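A minimal sketch of this Beta-Binomial setup in Python (the observed counts `obs_s` and `obs_n` below are illustrative assumptions, not data from the text): each iteration first draws an uncertain *p* from a Beta distribution, then draws a random *s* from the Binomial.

```python
import random

random.seed(42)

n = 20                 # number of Binomial trials in the model
obs_s, obs_n = 7, 25   # hypothetical observed successes/trials used to estimate p

def draw_p():
    # Uncertainty: Beta(obs_s + 1, obs_n - obs_s + 1) for p,
    # i.e. the posterior under a uniform prior
    return random.betavariate(obs_s + 1, obs_n - obs_s + 1)

def draw_s(p):
    # Randomness: Binomial(n, p) simulated as n Bernoulli trials
    return sum(random.random() < p for _ in range(n))

# Each iteration pairs one uncertainty draw with one randomness draw
sims = [draw_s(draw_p()) for _ in range(10_000)]
print(sum(sims) / len(sims))  # roughly n * (obs_s + 1) / (obs_n + 2), i.e. about 5.9
```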

Sometimes all R, V, and U can be integrated into a single result, but at other times one needs to separate the components to see how each affects the model outcome. Separating the components is called **two-dimensional (2-D) analysis**. This method is particularly important for understanding the impact of uncertainty, because uncertainty, unlike randomness and variability, can be reduced if further information is collected.

The following section discusses ways to perform a 2-D analysis, primarily in Excel. Given Excel's intrinsic table structure, 2-D modeling can be laborious, so readers accustomed to scripting tools will quickly realize that these methods are much more amenable to implementation with loops in a scripting language such as R or Python, or a general programming language such as VBA.

### Start with a V/R model

Randomness (R) and inter-individual variability (V) are properties of the real world, and as such form the base of our risk analysis model. Uncertainty (U), i.e. the degree of knowledge we have about the parameters describing R and V, does not affect how the real world operates, and so U is overlaid onto a V/R model. This section discusses how to put together a V/R model; the following section considers how to overlay uncertainty onto the model's parameters.

A risk analysis model that separates uncertainty from randomness and variability (or uncertainty and randomness from variability) is described as a *second-order* or *two-dimensional (2-D) model*. A V/R model comes in two forms: calculated and simulated.

### Overlay uncertainty onto the base model

The base model built of elements of randomness (R) and variability (V) will include parameters (like a binomial probability, a population mean, a Poisson intensity) for which we need values. Almost always we will not know these parameter values precisely and rely on statistics and/or expert judgment to provide estimates and the uncertainty around those estimates. Uncertainty, then, is simply overlaid onto the V/R model.

There are a number of different approaches in which we can overlay uncertainty onto a V/R model:

##### 1. Variability and Randomness are calculated, and uncertainty is simulated (V_{C}/R_{C}/U_{S} model)

This option preserves the separation of subjective uncertainty from the physical model. It is the easiest to understand, but the base V/R model is the most difficult to construct, so it is generally only useful for simple models.
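One way to sketch a V_{C}/R_{C}/U_{S} model, continuing the Beta-Binomial illustration (the event "at least *k* successes" and the counts below are illustrative assumptions): the Binomial probability is calculated exactly in closed form, and only the uncertainty about *p* is simulated, so the output is a pure uncertainty distribution over an exactly calculated quantity.

```python
import math
import random

random.seed(1)

n, k = 20, 10          # illustrative: P(at least k successes in n trials)
obs_s, obs_n = 7, 25   # hypothetical data behind the Beta uncertainty for p

def binom_tail(n, k, p):
    # Randomness handled by exact calculation: P(S >= k) for S ~ Binomial(n, p)
    return sum(math.comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

# Uncertainty handled by simulation: each Beta draw of p yields one
# exactly calculated tail probability
tails = [binom_tail(n, k, random.betavariate(obs_s + 1, obs_n - obs_s + 1))
         for _ in range(5_000)]

tails.sort()
print(tails[len(tails) // 2])   # median of the uncertainty distribution
```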

##### 2. Variability is calculated, Randomness and Uncertainty are simulated (V_{C}/R_{S}/U_{S} model)

This option blends uncertainty and randomness together because they are simulated in the same loop. The model is quick to construct and quick to simulate, but one loses the ability to analyze the randomness and uncertainty components separately in second-order plots. This is a small loss for most applications, and a model that simulates uncertainty and randomness together can easily be extended to make them separate. Therefore, in general we recommend this approach most. It has the added advantage that one can see the interaction between probability and uncertainty distributions, which is not possible with a V/R calculation model.
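The blending can be seen numerically in the Beta-Binomial illustration (all counts below are illustrative assumptions): drawing *p* and *s* in the same loop produces an output whose spread combines both components, so its variance exceeds the pure-Binomial variance evaluated at the mean *p*.

```python
import random
import statistics

random.seed(7)

n = 20
obs_s, obs_n = 7, 25   # hypothetical data behind the Beta uncertainty for p

pairs = []
for _ in range(20_000):
    p = random.betavariate(obs_s + 1, obs_n - obs_s + 1)  # uncertainty draw
    s = sum(random.random() < p for _ in range(n))        # randomness draw
    pairs.append((p, s))

blended = [s for _, s in pairs]
p_bar = statistics.fmean(p for p, _ in pairs)

# Because p and s are drawn together, the output mixes U and R:
# its variance is larger than the Binomial variance n*p*(1-p) at the mean p
print(statistics.pvariance(blended), n * p_bar * (1 - p_bar))
```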

##### 3. Variability, Randomness and Uncertainty are simulated together (V_{S}/R_{S}/U_{S} model)

This option may be necessary when there is a great deal of variability that you need to include in your model, but it is full of potential traps.

##### 4. Variability is calculated, Randomness is simulated and Uncertainty is simulated in a second loop (V_{C}/R_{S}/U_{L} model)

Although somewhat unwieldy, this model structure lets you gain the greatest benefit from simulation, avoiding complicated probability mathematics while keeping you away from the difficult area of simulating variability. The Two-dimensional Simulation Tool in Crystal Ball allows you to automate the task, running an outer loop to simulate the uncertainty, and then freezing the uncertainty values while it runs an inner loop to simulate randomness (R) and variability (V).
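In a scripting language the same outer/inner loop structure can be written by hand. A sketch, again using the illustrative Beta-Binomial setup (the counts and loop sizes are assumptions, and this is a plain-Python analogue of what the Crystal Ball tool automates, not its actual mechanism):

```python
import random

random.seed(3)

n = 20
obs_s, obs_n = 7, 25            # hypothetical data behind the uncertain p
N_OUTER, N_INNER = 100, 500     # uncertainty loop x randomness loop

second_order = []   # one inner distribution per frozen uncertainty draw
for _ in range(N_OUTER):
    p = random.betavariate(obs_s + 1, obs_n - obs_s + 1)  # freeze uncertainty
    inner = [sum(random.random() < p for _ in range(n))   # simulate randomness
             for _ in range(N_INNER)]
    second_order.append(inner)

# Each row of second_order is one plausible "real world": spread across rows
# reflects uncertainty, spread within a row reflects randomness
row_means = [sum(row) / len(row) for row in second_order]
print(min(row_means), max(row_means))
```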

##### 5. Variability is simulated, Randomness is simulated in a 1st loop and Uncertainty is simulated in a 2nd loop (V_{S}/R_{L1}/U_{L2} model)

In principle this type of model makes logical sense:

1. Select an individual with specific characteristics from the variability distributions and place the values for its variability parameters into the model (start loop 1);
2. Draw a value from each uncertainty distribution for all uncertain parameters and place these in the risk model (start loop 2);
3. Simulate the model and save the results;
4. Repeat steps 2 and 3 until you have a second-order distribution for the selected individual (end loop 2);
5. Select another individual from the variability distributions, and repeat steps 2 to 4 until you have a sufficient representation of the variability of the population (end loop 1).
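The steps above can be sketched in a scripting language such as Python. Everything concrete here is an illustrative assumption: a lognormal distribution stands in for variability in an individual trait, the Beta distribution for parameter uncertainty, and the dose-response formula for the risk model itself.

```python
import random

random.seed(11)

N_INDIVIDUALS, N_UNCERTAINTY = 50, 200
obs_s, obs_n = 7, 25   # hypothetical data behind the uncertain parameter

results = {}   # individual index -> 2nd-order distribution for that individual
for i in range(N_INDIVIDUALS):                       # start loop 1: variability
    exposure = random.lognormvariate(0.0, 0.5)       # this individual's trait
    outcomes = []
    for _ in range(N_UNCERTAINTY):                   # start loop 2: uncertainty
        p = random.betavariate(obs_s + 1, obs_n - obs_s + 1)
        risk = 1 - (1 - p) ** exposure               # illustrative dose-response
        outcomes.append(risk)                        # simulate model, save result
    results[i] = outcomes                            # end loop 2
# end loop 1: results now spans the variability across individuals,
# with a full uncertainty distribution retained for each one

print(len(results), len(results[0]))
```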

In practice, such a model would probably be better undertaken using a modeling platform other than Excel.