# Principle of Indifference

## What is the Principle of Indifference?

The principle of indifference is a rule for assigning equal probability to possible outcomes when no other information is available. If there is no data suggesting one possibility is more likely than another, the potential outcomes are reduced to the fewest logical, mutually exclusive choices possible, and each of the resulting n outcomes is assigned a probability of 1/n.
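The 1/n assignment described above can be sketched as a small helper function. This is an illustrative sketch, not a standard library API; the function name `indifference_prior` is a hypothetical choice.

```python
# A minimal sketch of the principle of indifference: with no other
# information, each of n mutually exclusive outcomes receives
# probability 1/n.
from fractions import Fraction

def indifference_prior(outcomes):
    """Assign every outcome the same probability, 1/n."""
    n = len(outcomes)
    return {outcome: Fraction(1, n) for outcome in outcomes}

# Example: a coin toss with no prior data has two outcomes,
# so each gets probability 1/2.
coin = indifference_prior(["heads", "tails"])
```

Using `Fraction` keeps the probabilities exact, which makes it easy to check that they sum to 1.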

This principle of indifference isn’t used in Frequentist probability, since it expresses a degree of belief rather than frequency. However, in Bayesian probability, this approach is sometimes employed as a non-informative prior (objective reasoning) when there’s no other available data and disagreement over what subjective prior assumptions to make.

### Problems with the Principle of Indifference

There are two major limitations of the indifference rule that make it less useful as a model for accurately predicting outcomes in many situations.

First, indifference depends only on how the set of possible outcomes is partitioned, not on the process that actually generates the results. This “partition problem” is especially troublesome in multivariate analysis, since ignoring the effects of one random variable on the others often leads to improbable or even impossible assigned probabilities.

Second, even when dealing with truly random and independent events, the indifference principle’s assumption of symmetry depends so heavily on how the outcomes are grouped that it is often not a practical predictor of results and can even generate conflicting probabilities for the same event. For example, when finding the probability of rolling a “6” on a six-sided die, all Frequentist and most Bayesian prior probabilities would assume six equally likely outcomes, giving a 1/6 (about 16.7%) chance of rolling a “6.”

However, a prior probability using the principle of indifference could instead partition the problem into just two outcomes, rolling a “6” or not, which puts the odds at a far less accurate prediction of 50%.
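The conflict between the two partitions can be shown directly by applying the same equal-probability rule to each grouping of the die roll. This is an illustrative sketch; `indifference_prior` is a hypothetical helper, not a library function.

```python
# Sketch of the partition problem: the same event ("rolling a 6")
# receives different probabilities depending on how the outcome
# space is partitioned before applying the 1/n rule.
from fractions import Fraction

def indifference_prior(outcomes):
    # Each of n mutually exclusive outcomes gets probability 1/n.
    n = len(outcomes)
    return {o: Fraction(1, n) for o in outcomes}

# Partition 1: the six faces of the die.
faces = indifference_prior([1, 2, 3, 4, 5, 6])
p_six_fine = faces[6]            # 1/6

# Partition 2: only "six" vs. "not six".
coarse = indifference_prior(["six", "not six"])
p_six_coarse = coarse["six"]     # 1/2

# The two partitions assign conflicting probabilities to the same event.
```

Both partitions are internally consistent (each sums to 1), yet they disagree about the same event, which is exactly the conflict described above.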