Monte Carlo sampling is the least sophisticated of the sampling methods discussed here, but it is also the oldest and best known. The name was coined as a code word for the work that von Neumann and Ulam were doing at Los Alamos on the Manhattan Project during World War II, where the method was used to integrate otherwise intractable mathematical functions (Rubinstein, 1981). However, one of the earliest examples of the Monte Carlo method is the famous Buffon's needle problem, in which needles were physically dropped at random onto a lined surface to estimate the value of π. At the beginning of the 20th century the Monte Carlo method was also used to examine the Boltzmann equation, and in 1908 the famous statistician Student (W. S. Gosset) used the Monte Carlo method to estimate the correlation coefficient in his t-distribution.
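The Buffon's needle experiment can be simulated rather than performed physically. The sketch below (an illustration, not taken from the text) drops needles of length *l* = 1 onto lines spaced *d* = 1 apart; a needle crosses a line with probability 2*l*/(π*d*), so π can be estimated from the observed crossing rate:

```python
import math
import random

def buffon_pi(n_needles, seed=0):
    """Estimate pi by simulating Buffon's needle drops (l = d = 1)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_needles):
        # Distance from the needle's centre to the nearest line, in [0, d/2]
        y = rng.uniform(0.0, 0.5)
        # Acute angle between the needle and the lines
        theta = rng.uniform(0.0, math.pi / 2)
        # The needle crosses a line when its half-length projection reaches it
        if 0.5 * math.sin(theta) >= y:
            hits += 1
    # P(cross) = 2 / pi, so pi is approximately 2 * n / hits
    return 2 * n_needles / hits

print(buffon_pi(100_000))  # close to 3.14159, but never exact
```

Note the characteristic Monte Carlo behaviour: the estimate converges slowly, with an error that shrinks only in proportion to one over the square root of the number of drops.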

Monte Carlo sampling satisfies the purist's desire for an unadulterated random sampling method. It is useful if one is trying to get a model to imitate random sampling from a population, or for performing statistical experiments. However, the randomness of its sampling means that it will over- and under-sample different parts of the distribution, and it cannot be relied upon to replicate the input distribution's shape unless a very large number of iterations are performed.
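This over- and under-sampling is easy to demonstrate. The sketch below (an assumed illustration, not from the text) draws Monte Carlo samples from a Uniform(0, 1) distribution and counts how many fall in each decile. Every decile "should" receive a tenth of the samples, but with a small number of iterations the counts are visibly uneven:

```python
import random

def decile_counts(n, seed=1):
    """Count how many of n Uniform(0,1) Monte Carlo samples land in each decile."""
    rng = random.Random(seed)
    counts = [0] * 10
    for _ in range(n):
        u = rng.random()
        counts[min(int(u * 10), 9)] += 1  # clamp guards the u == 1.0 edge case
    return counts

print(decile_counts(100))      # uneven: some deciles over-sampled, others under-sampled
print(decile_counts(100_000))  # much closer to the ideal 10,000 per decile
```

Only as the iteration count grows do the decile counts settle towards their expected shares, which is exactly the slow convergence the paragraph above describes.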

For nearly all risk analysis modeling, the pure randomness of Monte Carlo sampling is not really relevant. We are almost always far more concerned that the model reproduces the distributions that we have determined for its inputs: otherwise, what would be the point of expending so much effort on getting those distributions right? Latin Hypercube sampling addresses this issue by providing a sampling method that *appears* random but is guaranteed to reproduce the input distribution with much greater efficiency than Monte Carlo sampling.
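A minimal one-dimensional sketch of Latin Hypercube sampling (an assumed implementation, not the text's own) makes the contrast concrete: the [0, 1] probability range is split into *n* equal strata, one uniform draw is taken inside each stratum, and the draws are then shuffled so successive samples do not arrive in ascending order. Mapping the results through an inverse CDF would yield samples from any target distribution; here they are kept as probabilities:

```python
import random

def latin_hypercube(n, seed=2):
    """One-dimensional Latin Hypercube sample of n points on [0, 1)."""
    rng = random.Random(seed)
    # Exactly one draw per stratum [i/n, (i+1)/n)
    samples = [(i + rng.random()) / n for i in range(n)]
    rng.shuffle(samples)  # randomise the order of use
    return samples

lhs = latin_hypercube(100)
counts = [0] * 10
for u in lhs:
    counts[min(int(u * 10), 9)] += 1
print(counts)  # [10, 10, 10, 10, 10, 10, 10, 10, 10, 10]
```

Because each stratum contributes exactly one sample, every decile holds exactly ten of the hundred draws by construction; the input distribution's shape is reproduced even at small iteration counts, which pure Monte Carlo sampling cannot guarantee.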