What is a Credible Interval in Machine Learning?
A credible interval is the range within which an unobserved parameter falls with a given probability, according to the posterior probability distribution. In Bayesian models, it serves a role analogous to that of a confidence interval. However, a credible interval isn’t derived from the sampling distribution of a statistic (such as a t-distribution); it is computed directly from the posterior, which combines the prior probability distribution with the observed data. The interpretations also differ: in the frequentist view, the parameter is a fixed value and the interval is random, whereas in the Bayesian view the parameter itself is treated as a random variable, so a 95% credible interval can be read directly as containing the parameter with 95% probability.
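To make the definition concrete, here is a minimal sketch of computing an equal-tailed 95% credible interval from posterior draws. The numbers are purely illustrative (a hypothetical posterior simulated as Normal(0.6, 0.05)); in practice the draws would come from MCMC or another sampler.

```python
import random

# Hypothetical posterior draws for a parameter theta; in a real model
# these would come from a sampler such as MCMC, not random.gauss.
random.seed(0)
posterior_draws = [random.gauss(0.6, 0.05) for _ in range(100_000)]

# Equal-tailed 95% credible interval: the central 95% of posterior mass,
# i.e. the 2.5th and 97.5th percentiles of the draws.
draws = sorted(posterior_draws)
lo = draws[int(0.025 * len(draws))]
hi = draws[int(0.975 * len(draws)) - 1]
print(f"95% credible interval: ({lo:.3f}, {hi:.3f})")
```

Because the parameter is treated as random, the interval can be stated directly: there is a 95% posterior probability that the parameter lies between `lo` and `hi`.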
Credible Intervals versus Confidence Intervals in Practice:
For example, a frequentist approach might model the sample mean as approximately Normal (Gaussian). This yields a 95% confidence interval of roughly 1.96 standard errors around the sample mean, interpreted as follows: if the sampling procedure were repeated many times, about 95% of the intervals constructed this way would contain the true parameter. It does not mean that 95% of unobserved sample values fall within the interval.
A Bayesian model instead starts from a prior, chosen for example via the principle of maximum entropy or empirical Bayes estimation. If that prior implies a posterior that is not perfectly Gaussian (normal), say one with heavier tails, the central credible interval may need to extend further, perhaps around 3 posterior standard deviations, to cover 95% of the posterior probability.
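The contrast above can be sketched with a conjugate Normal–Normal model (known sampling standard deviation, Normal prior on the mean), where both intervals have closed forms. All numbers here are illustrative assumptions, not values from the article:

```python
import math
from statistics import NormalDist

# Illustrative data summary and prior (hypothetical values):
n, xbar, sigma = 25, 5.2, 2.0   # sample size, sample mean, known sd
mu0, tau0 = 0.0, 10.0           # weak Normal prior: N(mu0, tau0^2)

z = NormalDist().inv_cdf(0.975)  # ~1.96 for a central 95% interval

# Frequentist 95% confidence interval: xbar +/- z * sigma / sqrt(n).
se = sigma / math.sqrt(n)
ci = (xbar - z * se, xbar + z * se)

# Bayesian conjugate update: the posterior is Normal, with a
# precision-weighted average of prior mean and sample mean.
post_prec = 1 / tau0**2 + n / sigma**2
post_mean = (mu0 / tau0**2 + n * xbar / sigma**2) / post_prec
post_sd = math.sqrt(1 / post_prec)

# 95% credible interval: central 95% of the posterior itself.
cred = (post_mean - z * post_sd, post_mean + z * post_sd)
print("confidence interval:", ci)
print("credible interval:  ", cred)
```

With a weak prior the two intervals nearly coincide, as here; a more informative or non-Gaussian prior pulls the credible interval toward the prior and changes its width, which is exactly the difference the example above describes.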