Bayes' Theorem and the Bayesian Framework
Bayesian inference is a framework for updating beliefs in light of new evidence. It is grounded in Bayes' theorem:

P(θ|data) ∝ P(data|θ) × P(θ)

where θ represents the parameter of interest (e.g., a population mean or treatment effect). The three components are:

- P(θ), the prior distribution: our beliefs about θ before seeing the data. The prior can be informative (based on previous studies or expert knowledge) or uninformative/weakly informative (representing minimal prior knowledge).
- P(data|θ), the likelihood function: how probable the observed data are for different values of θ. This is the same function used in maximum likelihood estimation.
- P(θ|data), the posterior distribution: our updated beliefs about θ after seeing the data. The posterior combines prior beliefs with the evidence from the data. As the sample size n increases, the data dominate the prior, and Bayesian and frequentist results converge.

The posterior distribution provides a complete probability statement about θ: we can say "there is a 95% probability that μ is between 45.2 and 52.8". This is the Bayesian credible interval, and it is the statement people intuitively want to make but cannot legitimately make with a frequentist confidence interval.

Bayesian hypothesis testing replaces the binary reject/fail-to-reject decision with the Bayes factor, BF₁₀ = P(data|H₁) / P(data|H₀), a continuous measure of the relative evidence the data provide for the two hypotheses.
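The prior-to-posterior update, and the way the data come to dominate the prior as n grows, can be sketched with the conjugate normal model (normal prior on μ, normal data with known noise σ). All numbers below are illustrative assumptions, not taken from the text; the 95% credible interval is read directly off the normal posterior.

```python
# Minimal sketch: conjugate updating for a normal mean with known
# observation noise sigma. Illustrative numbers only.

def posterior_normal(prior_mean, prior_sd, data, sigma):
    """Return (mean, sd) of the posterior for mu, given a
    N(prior_mean, prior_sd^2) prior and data ~ N(mu, sigma^2)."""
    n = len(data)
    xbar = sum(data) / n
    prior_prec = 1.0 / prior_sd ** 2   # precision = 1 / variance
    data_prec = n / sigma ** 2         # precision of xbar is n / sigma^2
    post_prec = prior_prec + data_prec # precisions add under conjugacy
    # Posterior mean is a precision-weighted average of prior mean and xbar
    post_mean = (prior_prec * prior_mean + data_prec * xbar) / post_prec
    return post_mean, post_prec ** -0.5

# Prior: N(50, 10^2). Small sample: the prior still has some pull.
small = [48.0, 49.0, 51.0]
m_small, s_small = posterior_normal(50.0, 10.0, small, sigma=5.0)

# Fifty times as much data with the same sample mean: the data dominate.
big = small * 50
m_big, s_big = posterior_normal(50.0, 10.0, big, sigma=5.0)

# A 95% credible interval straight from the (normal) posterior:
z = 1.959964  # 97.5th percentile of the standard normal
ci = (m_big - z * s_big, m_big + z * s_big)
print(m_small, s_small)  # pulled slightly toward the prior mean of 50
print(m_big, s_big)      # concentrated near xbar = 49.33, smaller sd
print(ci)
```

With the small sample the posterior mean (about 49.38) sits between the prior mean and the sample mean; with the larger sample it is essentially the sample mean, and the posterior sd shrinks accordingly.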