What Are Bayesian Statistics?
Bayesian statistics is an approach that assigns “degrees of belief,” or Bayesian probabilities, to traditional statistical modeling. In this interpretation of statistics, probability is the reasonable expectation that an event will occur given the currently available evidence. In other words, probability is a dynamic quantity that changes as new information is gathered, rather than a fixed value based upon frequency or propensity.
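This updating of belief is captured by Bayes' theorem. The sketch below is a minimal, illustrative example (all numbers are hypothetical assumptions, not real data): a small prior belief is revised upward after observing a piece of positive evidence.

```python
# Minimal sketch of a Bayesian update via Bayes' theorem.
# Scenario and numbers are illustrative assumptions: a diagnostic
# test with 95% sensitivity and a 5% false-positive rate.

def bayes_update(prior, likelihood, false_positive_rate):
    """Return P(hypothesis | evidence) using Bayes' theorem."""
    # P(E) = P(E|H)·P(H) + P(E|¬H)·P(¬H)
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# A 1% prior belief, updated after one positive test result.
posterior = bayes_update(prior=0.01, likelihood=0.95, false_positive_rate=0.05)
print(round(posterior, 3))  # 0.161
```

Note how the posterior (about 16%) is far higher than the 1% prior but still far from certain, which is exactly the "degree of belief" the evidence warrants.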
How Does Bayesian Statistics Work in Machine Learning?
- Statistical Inference – Bayesian inference uses Bayesian probability to summarize the evidence for the likelihood of a prediction.
- Statistical Modeling – Bayesian statistics supports some models by specifying prior distributions for any unknown parameters.
- Experiment Design – By incorporating “prior belief influence,” this technique uses sequential analysis to factor the outcomes of earlier experiments into the design of new ones. These beliefs are updated by combining the prior distribution with observed data to form the posterior distribution.
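The sequential prior-to-posterior cycle described above can be sketched with a Beta-Binomial model, a standard conjugate pairing where the posterior from one experiment becomes the prior for the next. The experiment counts below are illustrative assumptions.

```python
# Sketch of sequential analysis: each experiment's posterior
# becomes the prior for the next. Data values are illustrative.

def update_beta(alpha, beta, successes, failures):
    """Conjugate update: Beta(alpha, beta) prior + binomial data -> Beta posterior."""
    return alpha + successes, beta + failures

# Uniform prior Beta(1, 1) before any data is seen.
a, b = 1, 1
# Experiment 1 observes 7 successes and 3 failures.
a, b = update_beta(a, b, 7, 3)
# Experiment 2 starts from that updated belief: 4 successes, 6 failures.
a, b = update_beta(a, b, 4, 6)
posterior_mean = a / (a + b)
print(a, b, round(posterior_mean, 2))  # 12 10 0.55
```

Because the posterior carries everything learned so far, a researcher can stop, extend, or redesign the next experiment at any point without discarding earlier evidence.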
What Are Bayesian Statistics Used For?
While most machine learning models try to predict outcomes from large datasets, the Bayesian approach is helpful for several classes of problems that aren’t easily solved with other probability models. In particular:
- Datasets with few data points for reference
- Models with strong prior intuitions from pre-existing observations
- Data with high levels of uncertainty, or when it’s necessary to quantify the level of uncertainty across an entire model or compare different models
- When a test fails to reject the null hypothesis but it’s necessary to make claims about the likelihood of the alternative hypothesis
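The last point can be sketched with a Bayes factor, which weighs the evidence for an alternative hypothesis directly against the null rather than only rejecting or failing to reject. The hypotheses and data below are illustrative assumptions.

```python
# Sketch: comparing a null and an alternative hypothesis with a
# Bayes factor. Hypotheses and observed data are illustrative.
from math import comb

def binomial_likelihood(p, k, n):
    """P(k successes in n trials | success probability p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Observed: 14 heads in 20 coin flips.
k, n = 14, 20
null_lik = binomial_likelihood(0.5, k, n)  # H0: fair coin
alt_lik = binomial_likelihood(0.7, k, n)   # H1: coin biased toward heads
bayes_factor = alt_lik / null_lik          # evidence for H1 over H0
print(bayes_factor > 1)  # True: these data favor the alternative
```

A Bayes factor above 1 means the data are more probable under the alternative, giving a direct, quantified statement about H1 that a frequentist significance test alone does not provide.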