Bayesian inference

Reading time: 3 minutes

Bayesian inference is a method of statistical inference named after Thomas Bayes, the 18th-century English mathematician and Presbyterian minister whose work laid the foundations of Bayesian probability theory. It is a method of data analysis in which the probability of an event is determined not only from the available evidence, but also from prior knowledge.

How does Bayesian inference work?

The basis of Bayesian inference is Bayes' theorem, which can be written as:

P(A∣B) = P(B∣A) · P(A) / P(B)

Where:

  • P(A∣B) is the probability of event A conditional on event B,
  • P(B∣A) is the probability of event B conditional on event A,
  • P(A) is the probability of event A without any additional conditions,
  • P(B) is the probability of event B without any additional conditions.

The primary purpose of Bayesian inference is to update our knowledge of event A, taking into account new evidence in the form of event B. In other words, we want to determine what the probability of event A is after taking into account the new information contained in event B.

To better understand how this method works, let us consider a simple example. Suppose we have a test for a disease that has some efficacy, but can also give false results. Our goal is to determine the probability that a person is really ill if the test showed a positive result.

  • P(A) is the probability that the person is really ill without any additional information.
  • P(B∣A) is the probability that the test will give a positive result when the person is really ill.
  • P(B) is the probability of a positive test result whether the person is ill or not.

We want to calculate P(A∣B), which is the probability that the person is really sick if the test gave a positive result. As we receive more information and test results, we update our knowledge of the probability that a person is sick. This allows doctors to make accurate diagnostic decisions.
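The calculation above can be sketched in a few lines of Python. The prevalence, sensitivity and false-positive rate below are illustrative assumed numbers, not values from the article:

```python
# Bayes' theorem for a diagnostic test: P(ill | positive result).
# All three input probabilities are assumed, illustrative values.
p_ill = 0.01                # P(A): prior probability of disease (1% prevalence)
p_pos_given_ill = 0.95      # P(B|A): sensitivity of the test
p_pos_given_healthy = 0.05  # P(B|not A): false-positive rate

# Law of total probability: P(B) sums over ill and healthy cases.
p_pos = p_pos_given_ill * p_ill + p_pos_given_healthy * (1 - p_ill)

# Bayes' theorem: update the prior with the positive result.
p_ill_given_pos = p_pos_given_ill * p_ill / p_pos
print(round(p_ill_given_pos, 3))  # ≈ 0.161
```

Even with a fairly accurate test, a positive result raises the probability of illness from 1% to only about 16% here, which is why further tests, i.e. further Bayesian updates, matter.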

What is the difference between classical and Bayesian inference?

The classical approach to probability, also called objective or physical probability, is based on frequency analysis. It does not take into account prior characteristics of the phenomenon or object under study, only the 'hard' data: the ratio of the number of favourable random events to the number of all possible events. This probability calculus allows the chance of a particular event occurring to be calculated.

The purpose of using Bayesian probability is likewise to calculate the chance of an event occurring, but here probability is a measure of the subjective degree of belief in its occurrence. The starting point is the a priori probability, formulated before any empirical observation of the event. Only in a second step do we update this initial belief with incoming data, arriving at the a posteriori probability (after the evidence has been observed).
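This prior-to-posterior updating can be illustrated with the standard Beta-Binomial model, a textbook conjugate example rather than one taken from the article: a Beta prior on a coin's probability of heads is updated with observed flips.

```python
# Beta-Binomial updating: a minimal sketch of prior -> posterior.
# Prior Beta(a, b); after observing `heads` successes in `n` flips,
# the posterior is Beta(a + heads, b + n - heads).
a, b = 1.0, 1.0     # Beta(1, 1): a flat, "uninformed" prior
heads, n = 7, 10    # assumed data: 7 heads in 10 flips

a_post = a + heads
b_post = b + (n - heads)

prior_mean = a / (a + b)                # 0.5 before seeing any data
post_mean = a_post / (a_post + b_post)  # pulled towards the observed 0.7
print(prior_mean, round(post_mean, 3))  # 0.5 0.667
```

The a priori belief (mean 0.5) and the data (7/10 heads) combine into an a posteriori belief between the two; with more data, the posterior moves ever closer to the observed frequency.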

Although probability has a subjectivist interpretation in the Bayesian approach, the assessment of the chances of a phenomenon occurring is still calculated in a fully formal way, following the basic principles of probability calculus.

The Bayesian approach is more flexible as it allows for the incorporation of prior knowledge and continuous updating of probabilities. This is particularly useful in situations where the available data is limited or difficult to obtain. The classical approach, on the other hand, is used in situations that can be fully determined by the data available and can be easily replicated.

The choice between Bayesian and classical inference depends on the nature of the data, the objectives of the analysis and the preference for interpreting the results. The Bayesian approach is more subjective and dynamic, while the classical approach offers a more objective and stable way of evaluating hypotheses in the context of empirical data.

Classical and Bayesian inference

The analyses presented in this article were carried out using PS IMAGO PRO.

Bayesian inference in data analysis

Methods using Bayesian inference are available in most analytical solutions, including PS IMAGO PRO. Among others, the analyst can use:

  • tests for one sample with normal, binomial and Poisson distributions,
  • tests for dependent or independent samples with normal distributions,
  • Pearson correlation,
  • linear regression,
  • one-way ANOVA, including repeated measures,
  • log-linear models.
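As a sketch of what a Bayesian one-sample test with a normal distribution does under the hood, the conjugate update below combines a normal prior on the mean with sample data. All numbers are assumed, and the data variance is treated as known for simplicity:

```python
# Conjugate update for a normal mean with known data variance.
# Prior: mu ~ N(mu0, tau0^2); data: n observations, mean ybar, sd sigma.
mu0, tau0 = 0.0, 1.0  # assumed prior: centred on 0
sigma = 1.0           # assumed known standard deviation of the data
n, ybar = 10, 0.5     # assumed sample: 10 observations with mean 0.5

# Precisions (inverse variances) add; the posterior mean is a
# precision-weighted average of the prior mean and the sample mean.
prior_prec = 1.0 / tau0**2
data_prec = n / sigma**2
post_var = 1.0 / (prior_prec + data_prec)
post_mean = post_var * (prior_prec * mu0 + data_prec * ybar)
print(round(post_mean, 4), round(post_var, 4))  # 0.4545 0.0909
```

The posterior mean lies between the prior mean and the sample mean, pulled towards the data as the sample size grows, which is exactly the behaviour the Bayesian tests listed above rely on.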

The use of these techniques in a Bayesian approach can be particularly useful:

  • in medicine: in clinical and epidemiological studies, data are often limited and prior medical knowledge is extensive and well documented,
  • in finance: in asset price modelling, Bayesian inference can take into account historical volatility and expert judgements about future market trends, offering a more balanced forecast that better deals with market anomalies.

Bayesian methods are also widely used in machine learning and artificial intelligence. They enable efficient handling of uncertainty and modelling of complex relationships in large data sets. This is crucial for creating adaptive and intelligent systems that can learn and evolve in dynamically changing environments.

Summary

Bayesian inference is an approach to probability that allows efficient inference from available data combined with prior knowledge. It is used in fields ranging from medicine to finance and engineering. Understanding this method can be key to making better decisions, solving complex problems and using data to its full potential. It is therefore worth exploring this topic and applying Bayesian inference in data analysis.

