If a random variable X is continuous, i.e. it may take any value within a defined range (or sometimes ranges), the probability of X taking any one precise value within that range is vanishingly small, because a total probability of 1 must be spread over an infinite number of possible values. In other words, there is no probability mass associated with any specific allowable value of X. Instead, we define a probability density function f(x) as:
f(x)=\frac{d}{dx} F(x)
i.e. f(x) is the rate of change (the gradient) of the cumulative distribution function. Since F(x) is always non-decreasing, f(x) is always non-negative.
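To make this concrete, here is a minimal sketch (assuming Python with scipy, which the text itself does not use) checking numerically that a finite-difference estimate of dF/dx reproduces the density, with the standard normal distribution used purely as an illustration:

```python
# Minimal sketch: f(x) is the derivative (gradient) of F(x).
# The standard normal distribution is used purely as an illustration.
from scipy.stats import norm

x = 1.3   # an arbitrary point
h = 1e-6  # small step for a central finite difference

# Finite-difference estimate of dF/dx at x
slope_of_cdf = (norm.cdf(x + h) - norm.cdf(x - h)) / (2 * h)

print(slope_of_cdf)  # ~0.171369
print(norm.pdf(x))   # 0.171369..., agreeing with the estimate
```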
For a continuous distribution we cannot attach a non-zero probability to observing any exact value. However, we can determine the probability of X lying between any two values a and b:
P(a \leq X \leq b)=F(b)-F(a)
where b > a
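As a sketch of how this is used in practice (assuming Python with scipy, which the original text does not rely on), F(b) - F(a) can be compared against a direct numerical integration of the density over [a, b], here for a standard exponential distribution:

```python
# Minimal sketch: P(a <= X <= b) = F(b) - F(a), which equals the integral of
# the density f over [a, b]. The standard exponential distribution is used
# purely as an illustration.
from scipy.stats import expon
from scipy.integrate import quad

a, b = 0.3, 1.5

via_cdf = expon.cdf(b) - expon.cdf(a)   # F(b) - F(a)
via_pdf, _ = quad(expon.pdf, a, b)      # numerical integral of f(x) over [a, b]

print(via_cdf)  # ~0.5177
print(via_pdf)  # ~0.5177, matching to within integration tolerance
```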
Example
Consider a continuous variable that follows a Rayleigh(1) distribution. Its cumulative distribution function is given by:
F(x)=0 \qquad x<0 \\ F(x)=1-e^{-\frac{x^2}{2}} \qquad x>0
and its probability density function is given by:
f(x)=0 \qquad x<0 \\ f(x)=xe^{-\frac{x^2}{2}} \qquad x>0
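This expression follows directly from the definition f(x)=\frac{d}{dx}F(x) given above: differentiating F(x) for x>0 with the chain rule,
\frac{d}{dx}\Big(1-e^{-\frac{x^2}{2}}\Big)=-(-x)\,e^{-\frac{x^2}{2}}=x e^{-\frac{x^2}{2}}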
The probability that the variable will be between 1 and 2 is given by:
P(1<X<2)=F(2)-F(1)=\big(1-e^{-2}\big)-\big(1-e^{-0.5}\big) \approx 47.12\%
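As a quick numerical check, a minimal sketch (assuming Python with numpy and scipy, neither of which the text itself uses) computes the same probability from the closed-form F(x) above and from scipy's built-in Rayleigh distribution:

```python
# Minimal sketch: verifying P(1 < X < 2) for the Rayleigh(1) distribution.
import numpy as np
from scipy.stats import rayleigh

def F(x):
    """CDF of the Rayleigh(1) distribution, as given above."""
    return np.where(x < 0, 0.0, 1.0 - np.exp(-x**2 / 2))

print(F(2) - F(1))                        # ~0.4712, i.e. about 47.12 %
print(rayleigh.cdf(2) - rayleigh.cdf(1))  # same value from scipy (scale=1 is the default)
```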
F(x) and f(x) for this distribution are plotted below:
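One way such a figure could be produced (a minimal sketch assuming Python with numpy and matplotlib, which are not part of the original text):

```python
# Minimal sketch: plotting F(x) and f(x) for the Rayleigh(1) distribution.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 4, 400)
F = 1 - np.exp(-x**2 / 2)   # cumulative distribution function
f = x * np.exp(-x**2 / 2)   # probability density function

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.plot(x, F)
ax1.set_title("F(x)")
ax1.set_xlabel("x")
ax2.plot(x, f)
ax2.set_title("f(x)")
ax2.set_xlabel("x")
plt.tight_layout()
plt.show()
```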