In this post, you will learn about Bayes’ Theorem with the help of examples. A good understanding of Bayes’ Theorem is essential for building probabilistic models. Bayes’ theorem is also known as Bayes’ rule or Bayes’ law. One of its many applications is Bayesian inference, which is one of the two main approaches to statistical inference (the other being Frequentist inference) and the foundation of Bayesian statistics.
In simple words, Bayes’ Theorem is used to determine the probability of a hypothesis in the presence of new evidence or information. In other words, given a prior belief about a hypothesis (expressed as a prior probability) and new evidence or data observed given that the hypothesis is true, Bayes’ theorem helps in updating the belief about the hypothesis (the posterior probability). Let’s first understand this using the diagram given below, and then represent it mathematically:
In the above diagram, the prior belief is represented by the red probability distribution with some values for the parameters. In the light of the data / information / evidence (given the hypothesis is true), represented by the black probability distribution, the belief gets updated, resulting in a different probability distribution (blue) with a different set of parameters. This updated belief is also called the posterior belief.
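Before formalising this, here is a minimal sketch of the same update, assuming a beta-binomial model with made-up numbers (a Beta(2, 2) prior and 7 successes in 10 trials are purely illustrative). It computes the prior, the likelihood, and the posterior on a grid, mirroring the three curves in the diagram:

```python
import numpy as np
from scipy import stats

# Grid of candidate values for the parameter theta (e.g., probability of success)
theta = np.linspace(0, 1, 101)

# Prior belief: Beta(2, 2), i.e., theta is probably near 0.5 (the "red" curve)
prior = stats.beta.pdf(theta, 2, 2)

# New evidence: 7 successes out of 10 trials (illustrative data)
# Likelihood of that data for each candidate theta (the "black" curve)
likelihood = stats.binom.pmf(7, 10, theta)

# Posterior is proportional to likelihood * prior; normalise over the grid
posterior = likelihood * prior
posterior /= np.trapz(posterior, theta)  # the "blue" curve

print(theta[np.argmax(posterior)])  # the posterior mode shifts toward the data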
If the prior belief about the hypothesis is represented as P([latex]\theta[/latex]), and the likelihood of the data given the hypothesis is represented as P([latex]Y | \theta[/latex]), then the posterior belief about the hypothesis can be represented as the following:
[latex]P(\theta | Y) \propto P(Y | \theta) * P(\theta)[/latex] … Eq 1
When the above expression is divided by a normalisation factor, also called the marginal likelihood (the probability of observing the data averaged over all possible values of the parameters), it can be written as the following:
[latex]P(\theta | Y) = \frac{P(Y | \theta) * P(\theta)}{P(Y)}[/latex]
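To see the equation in action, here is a small worked example with illustrative numbers (a hypothetical diagnostic test: 1% base rate for the condition, 90% true positive rate, 5% false positive rate):

```python
# Illustrative numbers only: P(theta) = 0.01, P(Y | theta) = 0.90,
# P(Y | not theta) = 0.05, where Y is "test is positive"
p_theta = 0.01                     # prior P(theta)
p_y_given_theta = 0.90             # likelihood P(Y | theta)
p_y_given_not_theta = 0.05         # P(Y | not theta)

# Marginal likelihood P(Y): probability of the data averaged over both hypotheses
p_y = p_y_given_theta * p_theta + p_y_given_not_theta * (1 - p_theta)

# Posterior P(theta | Y) via Bayes' theorem
p_theta_given_y = p_y_given_theta * p_theta / p_y
print(round(p_theta_given_y, 3))   # ~0.154: a positive test lifts the 1% prior to ~15%
```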
The following is an explanation of the different probability components in the above equation:

- P([latex]\theta[/latex]) is the prior probability, i.e., the belief about the hypothesis before seeing the data.
- P([latex]Y | \theta[/latex]) is the likelihood, i.e., how probable the observed data is given the hypothesis.
- P([latex]\theta | Y[/latex]) is the posterior probability, i.e., the updated belief about the hypothesis after seeing the data.
- P([latex]Y[/latex]) is the marginal likelihood (also called the evidence), i.e., the probability of observing the data averaged over all possible values of the parameters.
Conceptually, the posterior can be thought of as the updated prior in the light of new evidence / data / information. As a matter of fact, the posterior belief / probability distribution from one analysis can be used as the prior belief / probability distribution for a new analysis. This makes Bayesian analysis suitable for analysing data that becomes available in sequential order.
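Here is a minimal sketch of that sequential use, assuming a conjugate beta-binomial model and made-up data batches, where each posterior becomes the prior for the next batch:

```python
# Start with a uniform Beta(1, 1) prior over the probability of success
a, b = 1, 1

# Data arrives in batches; each batch is (successes, failures) — illustrative values
batches = [(3, 1), (2, 2), (5, 0)]

for successes, failures in batches:
    # With a Beta(a, b) prior and binomial data, the posterior is
    # Beta(a + successes, b + failures) (conjugacy); that posterior
    # then serves as the prior for the next batch
    a += successes
    b += failures
    print(f"Posterior mean so far: {a / (a + b):.3f}")
```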
Here are some real-world examples of Bayes’ Theorem in action: spam filtering (updating the probability that an email is spam given the words it contains), medical test interpretation (updating the probability of a condition given a positive result, as in the worked example above), and credit card fraud detection (updating the probability that a transaction is fraudulent given its characteristics).