Introduction - Bayesian Statistics

# Bayesian inference

Bayesian inference models the process of learning. That is, we start with a certain level of belief, however vague, and through the accumulation of experience, our belief becomes more fine-tuned. Some people take a dislike to Bayesian inference because it is overtly subjective, preferring to think of statistics as objective. We don't agree: any statistical analysis is necessarily subjective because of the need to make assumptions, and because many approximations are accepted without question, often without any warning that they have been made. For that reason we appreciate the extra transparency of Bayesian inference, but it also frequently provides answers where classical statistics cannot. Perhaps more importantly for us as risk analysts, Bayesian inference encourages our clients to think about the level of knowledge they have about their problem, and what that means to them.

Bayesian inference is an extremely powerful technique, based on Bayes' Theorem (sometimes called Bayes' Formula), for using data to improve one's estimate of a parameter. There are essentially three steps involved:

1. Construct a distribution of one's uncertainty about the parameter before analyzing the new data set. This is called the prior distribution;

2. Find an appropriate likelihood function for the observed data; and

3. Modify the prior distribution using the likelihood function to get a revised estimate known as the posterior distribution.
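The three steps above can be sketched numerically with a simple grid calculation. This is a hedged illustration, not code from this text: we assume a flat prior for a binomial probability p and hypothetical data of 7 successes in 10 trials.

```python
import math
import numpy as np

# Hypothetical data: 7 successes observed in 10 trials.
successes, trials = 7, 10

# Step 1: the prior -- a flat (uniform) belief about p, discretized
# over a grid of candidate values.
p_grid = np.linspace(0.001, 0.999, 999)
prior = np.ones_like(p_grid)

# Step 2: the binomial likelihood of the observed data, evaluated at
# each candidate value of p.
likelihood = (math.comb(trials, successes)
              * p_grid**successes * (1 - p_grid)**(trials - successes))

# Step 3: the posterior is proportional to prior x likelihood;
# renormalize so the grid probabilities sum to one.
posterior = prior * likelihood
posterior /= posterior.sum()

print(p_grid[np.argmax(posterior)])  # posterior mode, close to 7/10
```

With a flat prior the posterior mode coincides with the maximum likelihood estimate, 0.7; an informed prior would pull it away from that value.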

This section starts with some introductory topics:

Introduction to the concept and some simple examples

How to determine prior distributions

How to determine likelihood functions

We then turn to the actual execution of a Bayesian inference for risk analysis models with Crystal Ball by looking at various techniques for arriving at the posterior distribution:

Construction method

Conjugate prior method

Simulation with accept/reject method

Markov Chain Monte Carlo method
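As an informal taste of the simulation with accept/reject approach (a sketch under assumed data, not a Crystal Ball model): draw the parameter from its prior, simulate the experiment, and keep only those draws that reproduce the observed data. The surviving draws are samples from the posterior.

```python
import random

random.seed(1)  # reproducible illustration

# Hypothetical data: 7 successes in 10 trials; Uniform(0, 1) prior for p.
observed, trials = 7, 10

accepted = []
while len(accepted) < 5000:
    p = random.random()                                     # draw p from the prior
    sims = sum(random.random() < p for _ in range(trials))  # simulate the experiment
    if sims == observed:                                    # accept/reject criterion
        accepted.append(p)

# The accepted draws sample the posterior -- here exactly Beta(8, 4),
# whose mean is 8/12, about 0.667.
print(sum(accepted) / len(accepted))
```

The appeal of this method is that it needs no algebra: however complicated the stochastic process generating the data, if you can simulate it, you can apply the accept/reject criterion. Its cost is inefficiency when the observed data are unlikely, since most draws are rejected.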

Most important of all, we offer a number of worked examples:

Examples of Bayesian inference calculations

General estimation problems

Identifying a weighted coin

A simple Bayesian inference example using construction

Tigers in the jungle

Simple construction model showing the interaction between likelihood functions and informed priors

Gender of a random sample of people

A simple construction model illustrating the importance of the prior distribution

A model to show how to incorporate hyperparameters by simulation, as well as offering both simulation and construction approaches to determining the posterior distribution

The Monty Hall problem

A fun model of a classic problem showing the sometimes non-intuitive nature of the Bayesian result

Using cow pats to estimate infected animals in a herd

A bit gross, but this example shows how you can perform Bayesian inference calculations with very involved stochastic processes (i.e. very complicated likelihood functions) by using Monte Carlo simulation with accept/reject criteria.

Bayesian estimation of a component's mean time to failure (MTTF)

A simple construction example that shows how we use data that describe being above or below a threshold, instead of exact observations

Taylor series approximation to a Bayesian posterior distribution

Showing how Taylor series expansion lets you determine the normal approximation to posterior distributions, and a method for algebraically obtaining the standard deviation

Normal approximation to the Beta posterior distribution

Example of a Taylor series expansion
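As a taste of what the Taylor series approximation involves (a sketch of the standard second-order expansion, not the worked example itself): expand the log posterior around its mode; the mode becomes the mean of the approximating normal distribution, and the curvature there gives its standard deviation. For a Beta(a, b) posterior this can be done algebraically:

```python
import math

# Normal approximation to a Beta(a, b) posterior density, whose log is
#   log p(theta) = (a-1)*ln(theta) + (b-1)*ln(1-theta) + constant
# Example values: the posterior from 7 successes in 10 trials with a flat prior.
a, b = 8.0, 4.0

# The mode solves d/dtheta log p(theta) = 0.
mode = (a - 1) / (a + b - 2)

# Curvature: -d^2/dtheta^2 log p(theta), evaluated at the mode.
curvature = (a - 1) / mode**2 + (b - 1) / (1 - mode)**2

# The approximating normal has mean = mode and sd = 1/sqrt(curvature).
sd = math.sqrt(1.0 / curvature)

print(mode, sd)  # mean 0.7, sd roughly 0.145
```

For comparison, the exact Beta(8, 4) standard deviation is about 0.131, so the approximation is close but not perfect; it improves as the amount of data grows.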

Two common statistical problems

A standard statistics problem with the same outcome as the classical method

Bayesian estimate of the mean of a Normal distribution with known standard deviation

Another standard statistics problem with the same outcome as the classical method

Once you have reviewed the material in this section, you might like to test how much you have learned by taking the self-test quiz:

A quiz on Bayesian inference
