# Classical statistics

There are a number of traditional statistical techniques available for quantifying parameters under certain assumptions. These techniques are often considered to be exact, but this is only true if the assumptions made in the statistical model are correct. Traditional statistical models have usually assumed either a Binomial or a Normal (Gaussian) model. The Normal distribution very closely approximates a large number of distributions under certain conditions, usually when the mean is much larger than the standard deviation, so these classical techniques have found very wide application. However, one needs to be cautious in using them when the assumption of normality is not well obeyed, and it is often difficult to appreciate how much inaccuracy such an approximation introduces. Note, therefore, that even the more classical statistical techniques are subjective, in that they start from a set of assumptions chosen by the analyst. There is no such thing as a purely objective statistical analysis, since every technique must assume some sort of probability model to get started.

This section looks at a few of the most common classical statistical methods for estimating parameters and their uncertainty:

*Estimating the mean μ of a Normal distribution:*

- When the distribution's standard deviation is known
- When the distribution's standard deviation is *not* known

*Estimating the standard deviation σ of a Normal distribution:*

- When the distribution's mean is known
- When the distribution's mean is *not* known

*Estimating the probability p of a Binomial process*

*Estimating the intensity λ of a Poisson process*

*Estimating the mean β of an Exponential variable:*

- Comparison with estimation of Poisson λ

*Estimating the parameters of a least squares regression*

*Comparison of two (assumed Normal) populations X, Y to determine the difference between their means and standard deviations*
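To make the flavour of these classical techniques concrete, the sketch below illustrates two of the cases listed above: a confidence interval for a Normal mean μ when the standard deviation is known, and a Normal-approximation (Wald) interval for a Binomial probability p. The data values, the assumed known σ, and the trial counts are all hypothetical; this is a minimal illustration, not a definitive implementation.

```python
import math
from statistics import NormalDist

# Two-sided 95% critical value from the standard Normal distribution
z95 = NormalDist().inv_cdf(0.975)  # ≈ 1.96

# --- Normal mean mu, standard deviation known (hypothetical data) ---
data = [9.8, 10.2, 10.1, 9.9, 10.4, 10.0, 9.7, 10.3]
n = len(data)
xbar = sum(data) / n                # point estimate of mu
sigma = 0.25                        # assumed *known* population std dev
half = z95 * sigma / math.sqrt(n)   # half-width of the interval
ci_mean = (xbar - half, xbar + half)

# --- Binomial probability p, Normal approximation (Wald interval) ---
k, m = 18, 60                       # hypothetical: 18 successes in 60 trials
p_hat = k / m                       # point estimate of p
se = math.sqrt(p_hat * (1 - p_hat) / m)
ci_p = (p_hat - z95 * se, p_hat + z95 * se)

print(f"mean: {xbar:.3f}, 95% CI {ci_mean}")
print(f"p:    {p_hat:.3f}, 95% CI {ci_p}")
```

These intervals are exact only under their assumptions: the mean interval requires Normal data with a genuinely known σ, and the Wald interval relies on the Normal approximation to the Binomial, which degrades when p is near 0 or 1 or the number of trials is small.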