Bayesian updating formula
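As a reference point for what follows, the updating rule this heading refers to can be written as Bayes' theorem plus the beta-binomial update (a standard statement, not quoted from the post; symbols are my own):

```latex
p(\theta \mid y) \;=\; \frac{p(y \mid \theta)\, p(\theta)}{p(y)} \;\propto\; p(y \mid \theta)\, p(\theta),
\qquad
\theta \sim \mathrm{Beta}(a, b),\;\; y \sim \mathrm{Binomial}(n, \theta)
\;\Rightarrow\;
\theta \mid y \sim \mathrm{Beta}(a + y,\; b + n - y).
```

Here $\theta$ is Brown's true shooting rate, $y$ the number of makes in $n$ shots.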
Imagine my trusted friend caught the end of Brown’s warm-up and saw her take two shots, making one and missing the other, and then relayed this information to me.
With so little to go on, I could reasonably use the common Beta(1, 1) prior, which represents a uniform density over [0, 1].
Alternatively, I could use the Jeffreys prior, Beta(1/2, 1/2). This prior says that Brown’s shooting rate is probably near the extremes, which may not reflect a reasonable belief about a college basketball player, but it has the benefit of influencing the posterior estimates less than the uniform prior (since it amounts to 1 prior observation instead of 2).
Jeffreys’s prior is popular because it has some desirable properties, such as invariance under parameter transformation (Jaynes, 2003).
This prior would be recommended if you had extremely scarce information about Brown’s ability.
Is Brown so good that she makes nearly every shot, or is she so bad that she misses nearly every shot?
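To see the difference between these two priors concretely, here is a small Python sketch (the function name is my own; it just evaluates the standard beta density):

```python
from math import gamma

def beta_pdf(x, a, b):
    """Density of the Beta(a, b) distribution at x, for 0 < x < 1."""
    return gamma(a + b) / (gamma(a) * gamma(b)) * x ** (a - 1) * (1 - x) ** (b - 1)

# Uniform prior Beta(1, 1): flat everywhere on (0, 1)
uniform_mid = beta_pdf(0.5, 1, 1)

# Jeffreys prior Beta(1/2, 1/2): density piles up near 0 and 1
jeffreys_mid = beta_pdf(0.5, 0.5, 0.5)
jeffreys_edge = beta_pdf(0.05, 0.5, 0.5)
```

Evaluating the Jeffreys density near an extreme (e.g. 0.05) gives a larger value than at 0.5, which is exactly the "nearly every shot or nearly none" shape described above, while the uniform prior is 1 everywhere.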
This is the essence of conjugacy: if you have binomial data, you can use a beta prior to obtain a beta posterior.
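The conjugate update is simple enough to state as two additions. A minimal sketch, using the friend's report of one make and one miss with the uniform Beta(1, 1) prior (function name is my own):

```python
def update_beta(a, b, makes, misses):
    """Conjugate update: Beta(a, b) prior + binomial data -> beta posterior.

    Makes are added to the first shape parameter, misses to the second."""
    return a + makes, b + misses

# Uniform Beta(1, 1) prior updated with 1 make and 1 miss -> Beta(2, 2)
a_post, b_post = update_beta(1, 1, makes=1, misses=1)

# Posterior mean of a Beta(a, b) is a / (a + b)
posterior_mean = a_post / (a_post + b_post)
```

With such weak data the posterior mean sits at 0.5, but the Beta(2, 2) posterior is now peaked at the middle rather than flat.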
Likewise, if you had normal data (with known variance) you could use a normal prior and obtain a normal posterior.
Likelihoods are a key component of Bayesian inference because they are the bridge that gets us from prior to posterior.
In this post I explain how to use the likelihood to update a prior into a posterior.
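One way to make the "bridge" idea concrete is a grid approximation: evaluate the prior and the likelihood at many candidate shooting rates, multiply pointwise, and normalize. A sketch using the uniform prior and the two-shot data from above (the grid setup is my own illustration, not from the post):

```python
# Grid of candidate shooting rates for Brown
thetas = [i / 100 for i in range(1, 100)]

# Uniform prior, Beta(1, 1), evaluated on the grid
prior = [1.0 for _ in thetas]

# Binomial likelihood of the friend's report: 1 make, 1 miss
likelihood = [t * (1 - t) for t in thetas]

# The posterior is proportional to prior times likelihood
unnormalized = [p * l for p, l in zip(prior, likelihood)]
total = sum(unnormalized)
posterior = [u / total for u in unnormalized]

# The grid posterior peaks at theta = 0.5, matching the Beta(2, 2) posterior
map_theta = thetas[max(range(len(posterior)), key=posterior.__getitem__)]
```

The likelihood is doing all the updating here: the prior contributes nothing but a constant, so the shape of the posterior is entirely the shape of the likelihood.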