
We assume that each measurement point is a Bernoulli random variable that has a probability *p* of having the characteristic of interest. If all measurements are independent, and we assign a value of 1 to a measurement when it has the characteristic of interest and 0 when it does not, the measurements can be thought of as a set of Bernoulli trials. Letting *P* be the random variable for the proportion of the *n* trials {*X _{i}*} in this set that have the characteristic of interest, *P* takes the distribution given by:

P=\frac{\text{Binomial}(n,p)}{n}

(1)
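As a quick sanity check on Equation 1, the following sketch (with assumed illustrative values *p* = 0.3 and *n* = 20, chosen only for demonstration) compares the proportion computed from *n* individual Bernoulli trials against direct draws of Binomial(*n*, *p*)/*n*:

```python
import numpy as np

# Assumed illustrative values, not from the text: p = 0.3, n = 20.
rng = np.random.default_rng(seed=7)
p_true, n, reps = 0.3, 20, 200_000

# Proportion of n individual Bernoulli(p) trials, repeated many times
props_bernoulli = rng.binomial(1, p_true, size=(reps, n)).mean(axis=1)

# Proportion drawn directly as Binomial(n, p) / n
props_binomial = rng.binomial(n, p_true, size=reps) / n

# Both should have mean p and variance p(1 - p)/n = 0.0105
print(props_bernoulli.mean(), props_binomial.mean())
print(props_bernoulli.var(), props_binomial.var())
```

The agreement in mean and variance reflects that a sum of *n* independent Bernoulli(*p*) trials is, by definition, a Binomial(*n*, *p*) variable.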

We observe *s* of the *n* trials with the characteristic of interest, so *s*/*n* is our single observation from the random variable *P*, and it is also the maximum likelihood (and unbiased) estimate of *p*. Rearranging Equation 1, we can obtain an uncertainty distribution for the true value of *p*:

p=\frac{\text{Binomial}(n,\frac{s}{n})}{n}

(2)

This is exactly the non-parametric and parametric Bootstrap estimate of a Binomial probability. Equation 2 is awkward, however, since it allows only (*n*+1) discrete values for *p*, i.e. {0, 1/*n*, 2/*n*, …, (*n*-1)/*n*, 1}, whereas our uncertainty about *p* should really take into account all values between zero and 1:

**Figure 1**: Example of the Equation 2 estimate of *p*, where *s* = 5, *n* = 10
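A distribution like Figure 1 can be reproduced with a short parametric-bootstrap sketch, using the same values *s* = 5 and *n* = 10: resample the success count from Binomial(*n*, *s*/*n*) and divide by *n*.

```python
import numpy as np

# Values from Figure 1: s = 5 successes out of n = 10 trials.
rng = np.random.default_rng(seed=1)
n, s = 10, 5
p_hat = s / n                      # maximum likelihood estimate of p

# Equation 2: simulated uncertainty distribution for p
samples = rng.binomial(n, p_hat, size=100_000) / n

# The simulated p values can only land on the (n + 1) grid points
# {0, 1/n, 2/n, ..., (n-1)/n, 1} discussed in the text.
values, counts = np.unique(samples, return_counts=True)
for v, c in zip(values, counts):
    print(f"p = {v:.1f}: {c / samples.size:.3f}")
```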

It also makes no sense that *p* could be exactly zero or one: having observed both successes and failures, we know that neither extreme is possible.
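This objection can be checked directly: Equation 2 assigns a small but non-zero probability to the impossible values *p* = 0 and *p* = 1 (again assuming *s* = 5, *n* = 10, and using SciPy's binomial pmf for the calculation).

```python
from scipy.stats import binom

# Values from Figure 1: s = 5 successes out of n = 10 trials.
n, s = 10, 5

# Probability that Equation 2 assigns to p = 0 and p = 1,
# even though both extremes are ruled out by the data.
print(binom.pmf(0, n, s / n))   # 0.5**10, about 0.001
print(binom.pmf(n, n, s / n))   # same by symmetry
```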