NegBinomial(p,s)


The Negative Binomial distribution estimates the *total number of trials* there will be before *s* successes are achieved, where there is a probability *p* of success with each trial. The total number of trials is equal to the number of failures *plus* the *s* successes.

#### Uses

##### Binomial examples

The NegBinomial distribution has two applications for a binomial process:

The total number of trials *n* needed to achieve *s* successes = NegBinomial(p,s)

The total number of trials there might have been when we have observed *s* successes = NegBinomial(p,s+1) - 1

The first use is when we know that we will stop at the *s*th success. The second is when we only know that there have been a certain number of successes.
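The two uses above can be sketched in Python (a minimal illustration using NumPy, which the original does not mention; NumPy's `negative_binomial` returns the number of *failures* before the required successes, so *s* is added back to recover the total number of trials; parameter values are assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(seed=1)
p, s = 0.3, 5          # illustrative values, not from the text

# Use 1: NegBinomial(p, s) -- total trials needed to reach the s-th success.
# numpy counts failures before the s-th success, so add s.
use1 = rng.negative_binomial(s, p, size=100_000) + s

# Use 2: NegBinomial(p, s+1) - 1 -- total trials there might have been
# when s successes have merely been observed.
use2 = rng.negative_binomial(s + 1, p, size=100_000) + (s + 1) - 1

print(use1.mean())  # theoretical mean: s/p ~= 16.67
print(use2.mean())  # theoretical mean: (s+1)/p - 1 = 19
```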

For example, a hospital has received a total of 17 people with a rare disease in the last month. The disease has a long incubation period, and there have been no new admissions for this disease for a fair number of days. The hospital knows that people infected with the disease have a 65% chance of showing symptoms, and that all people with symptoms will turn up at the hospital. How many other infected people are there out in the community? The answer is NegBinomial(65%,17+1) - (17+1). If we knew (we don't) that the last person to be infected was symptomatic, the answer would be NegBinomial(65%,17) - 17. The total number infected would be NegBinomial(65%,17+1) - 1.
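The hospital example can be checked with a quick simulation (a sketch, again assuming NumPy; its `negative_binomial` directly returns the number of failures, here the non-symptomatic infected, which equals NegBinomial(p, n) - n in this document's notation):

```python
import numpy as np

rng = np.random.default_rng(seed=2)
p, s = 0.65, 17

# Other infected people in the community: NegBinomial(65%, 17+1) - (17+1).
# numpy's negative_binomial(s + 1, p) returns exactly these "failures".
unseen = rng.negative_binomial(s + 1, p, size=100_000)
print(unseen.mean())  # theoretical mean: (s+1)*(1-p)/p ~= 9.69
```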

##### Poisson example

The Negative Binomial distribution is frequently used in accident statistics and other Poisson processes because the Negative Binomial distribution can be derived as a Poisson random variable whose rate parameter lambda is itself random and Gamma distributed, i.e.:

Poisson(Gamma(0, *b*, *a*)) = NegBinomial(1/(*b*+1), *a*) - *a*

The Negative Binomial distribution therefore also has applications in the insurance industry (where, for example, the rate at which people have accidents is affected by a random variable like the weather) and in marketing. This has two implications. First, the Negative Binomial distribution must have a greater spread than a Poisson distribution with the same mean. Second, if one attempts to fit frequencies of random events to a Poisson distribution and finds the Poisson distribution too narrow, a Negative Binomial can be tried; if that fits well, it suggests that the Poisson rate is not constant but random, and can be approximated by the corresponding Gamma distribution.
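The Poisson-Gamma identity above can be verified by simulation (a sketch with NumPy and assumed parameter values; NumPy's `negative_binomial` returns the number of failures, which already corresponds to NegBinomial(1/(b+1), a) - a, and its `gamma` takes shape and scale rather than the three-parameter form above):

```python
import numpy as np

rng = np.random.default_rng(seed=3)
a, b = 3.0, 2.0   # illustrative Gamma shape and scale (assumed values)
n = 200_000

# Left side: Poisson with a Gamma-distributed rate lambda
mixed = rng.poisson(rng.gamma(shape=a, scale=b, size=n))

# Right side: NegBinomial(1/(b+1), a) - a, i.e. the number of failures
negbin = rng.negative_binomial(a, 1.0 / (b + 1.0), size=n)

print(mixed.mean(), negbin.mean())  # both ~= a*b = 6
print(mixed.var(), negbin.var())    # both ~= a*b*(b+1) = 18, wider than Poisson(6)
```

Note the variance exceeds the mean, illustrating the extra spread relative to a Poisson with the same mean.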

#### Comments

The Negative Binomial distribution is subject to the same restrictions as those described for the Geometric: *p* remains constant and cannot be altered by knowledge or skill gained in the trials, and the distribution assumes that as many trials will be made as are found necessary to achieve *s* successes; it makes no allowance for those who would cut their losses and give up. The Negative Binomial distribution is also known as the *Pascal distribution* or the *Binomial Waiting-Time distribution* (a distribution that runs from *s* to infinity).

The NegBinomial(p,s) distribution (or its alternative formulation modeling only the number of failures) is often a good approximation to the Inverse Hypergeometric distribution (*s* << *D*), and is itself sometimes approximated by a Normal distribution (*s* large) or a Gamma distribution (*p* very small).
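The Normal approximation for large *s* can be illustrated with the standard library alone (parameter values are assumed; the pmf here is for the number of failures, and the Normal density is evaluated near the centre of the distribution):

```python
from math import comb, exp, pi, sqrt

p, s = 0.3, 200                    # large s; illustrative values
mean = s * (1 - p) / p             # mean number of failures
sd = sqrt(s * (1 - p)) / p         # standard deviation of failures

x = round(mean)                    # evaluate near the centre
exact = comb(s + x - 1, x) * p**s * (1 - p)**x        # Negative Binomial pmf
approx = exp(-((x - mean) ** 2) / (2 * sd ** 2)) / (sd * sqrt(2 * pi))
print(exact, approx)               # the two agree closely for large s
```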

The Negative Binomial distribution gets its name because its probability mass values are the successive terms of the binomial expansion of (*Q* - *P*)^-*s*, where *Q* = 1/*p* and *P* = (1-*p*)/*p*: a binomial expression raised to a negative power.

An alternative formulation of the Negative Binomial distribution models the *number of failures only* before observing *s* successes (instead of the total number of trials). In this formulation, the probability mass function is given by:

$$f(x)=\binom{s+x-1}{x}\,p^s\,(1-p)^x$$

The Excel function NEGBINOMDIST(x-s, s, p) returns this Negative Binomial probability mass function, where *x* is the total number of trials and *x-s* the number of failures.
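The pmf above can be sketched directly in Python (a stdlib-only illustration, not from the original; the hypothetical helper `negbin_pmf_total` takes the total trial count *x* and evaluates the same quantity as NEGBINOMDIST(x-s, s, p)):

```python
from math import comb

def negbin_pmf_total(x: int, s: int, p: float) -> float:
    """P(total number of trials = x) for x >= s."""
    f = x - s                                  # number of failures
    return comb(s + f - 1, f) * p**s * (1 - p)**f

p, s = 0.65, 17                                # the hospital example's values
probs = [negbin_pmf_total(x, s, p) for x in range(s, 200)]
print(sum(probs))                              # ~= 1 (tail beyond 200 is negligible)
print(sum(x * pr for x, pr in zip(range(s, 200), probs)))  # mean ~= s/p ~= 26.15
```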