What is the Central Limit Theorem?
The Central Limit Theorem states that the distribution of sample means approaches a normal distribution as the sample size grows, regardless of the shape of the underlying population distribution.
Moreover, the theorem implies that as the sample size increases, the sample mean becomes an increasingly accurate estimate of the population mean.
For example, the mean of a sample is only an estimate of the mean of the population distribution, so there will always be some margin of error. However, if you draw many independent samples and plot all of their means, the distribution of those sample means will form a normal (Gaussian) distribution.
By the same token, the spread of those sample means is predictable: their standard deviation, known as the standard error, equals the population standard deviation divided by the square root of the sample size. Larger samples therefore produce tighter, more reliable estimates of the population mean.
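A small simulation can show both effects. This sketch uses only the Python standard library; the exponential population (mean 1.0, heavily right-skewed) and the sample sizes are illustrative choices, not values from the text.

```python
import random
import statistics

random.seed(42)

def draw_sample_mean(n):
    """Draw one sample of size n from a skewed (exponential) population
    with population mean 1.0, and return the sample mean."""
    return statistics.fmean(random.expovariate(1.0) for _ in range(n))

# Draw 5,000 independent samples of size 50 and collect their means.
sample_means = [draw_sample_mean(50) for _ in range(5000)]

# Even though the population is skewed, the sample means cluster
# symmetrically around the population mean of 1.0 ...
print(statistics.fmean(sample_means))   # close to 1.0

# ... and their spread matches the standard error sigma / sqrt(n),
# here 1.0 / sqrt(50), roughly 0.141.
print(statistics.stdev(sample_means))
```

Plotting `sample_means` as a histogram would show the familiar bell shape, even though a histogram of the raw exponential draws looks nothing like one.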
Central Limit Theorem Versus Law of Large Numbers
This theorem is often confused with the Law of Large Numbers, which covers only the second point: the larger your sample, the closer its mean tends to be to the true population mean.
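The Law of Large Numbers can be illustrated with a quick sketch, again using only the standard library. The fair six-sided die (population mean 3.5) and the sample sizes are assumptions for illustration:

```python
import random
import statistics

random.seed(0)

def sample_mean_of_die(n):
    """Roll a fair six-sided die n times and return the sample mean.
    The population mean is 3.5."""
    return statistics.fmean(random.randint(1, 6) for _ in range(n))

# As n grows, a single sample's mean drifts toward 3.5 --
# that convergence is the Law of Large Numbers.
for n in (10, 1_000, 100_000):
    print(n, sample_mean_of_die(n))
```

Note what this does not say: the Law of Large Numbers describes convergence of the mean, while the Central Limit Theorem describes the shape of the distribution of many such means.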
The Central Limit Theorem dives deeper and describes the statistical shape of the distribution of sample means. In a general sense, this theorem is one of the first steps in helping a deep learning algorithm imitate the human concept of intuition.