Standard error

What is a Standard Error?

The standard error measures how far a sample mean is likely to fall from the true population mean; formally, it is the standard deviation of the sampling distribution of the sample mean. In most cases it is estimated as the sample standard deviation divided by the square root of the sample size.
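As a concrete illustration, here is a minimal Python sketch (the sample values are made up for illustration) that computes the standard error of the mean both by hand and with scipy.stats.sem:

    import numpy as np
    from scipy import stats

    # Hypothetical sample of measurements
    sample = np.array([4.2, 5.1, 4.8, 5.5, 4.9, 5.0])

    # Standard error = sample standard deviation / sqrt(sample size)
    sd = np.std(sample, ddof=1)        # ddof=1 gives the n-1 (sample) standard deviation
    se_manual = sd / np.sqrt(len(sample))

    # scipy computes the same quantity directly (also uses ddof=1 by default)
    se_scipy = stats.sem(sample)

    print(se_manual, se_scipy)         # the two values agree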

One exception is regression analysis, where "standard error" can refer either to the square root of the reduced chi-squared statistic (the residual standard error) or to the standard error of a regression coefficient, which is used to construct confidence intervals for that coefficient.
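To make the regression case concrete, here is a hedged sketch using statsmodels on synthetic data: model.bse holds the standard errors of the fitted coefficients (which feed directly into the confidence intervals), and the square root of model.scale is the residual standard error.

    import numpy as np
    import statsmodels.api as sm

    # Synthetic data: y depends linearly on x plus noise
    rng = np.random.default_rng(0)
    x = np.linspace(0, 10, 50)
    y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=x.size)

    X = sm.add_constant(x)                # add an intercept column
    model = sm.OLS(y, X).fit()

    print(model.bse)                      # standard errors of the intercept and slope
    print(model.conf_int(alpha=0.05))     # 95% confidence intervals built from those standard errors
    print(np.sqrt(model.scale))           # residual standard error (sqrt of the reduced chi-squared)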


How do you find the Standard Error?

  1. Calculate the mean (total of all samples divided by the number of samples) and each measurement's deviation from the mean.

  2. Remove negatives by squaring each deviation from the mean.

  3. Sum the squared deviations and divide the total by n-1 (one less than the number of measurements); this gives the sample variance. Take the square root of the variance to obtain the standard deviation (SD).

  4. Divide the standard deviation by the square root of the sample size (n). The result is the standard error (SE).

  5. To plot error bars of one standard error (mean ±1 SE), subtract the standard error from the mean for the lower bound and add it to the mean for the upper bound. (These steps are worked through in the sketch after this list.)
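The steps above translate into a short Python sketch (the measurement values are hypothetical):

    import math

    # Hypothetical measurements
    measurements = [4.2, 5.1, 4.8, 5.5, 4.9, 5.0]
    n = len(measurements)

    # Step 1: mean and each measurement's deviation from the mean
    mean = sum(measurements) / n
    deviations = [x - mean for x in measurements]

    # Step 2: square each deviation to remove negatives
    squared = [d ** 2 for d in deviations]

    # Step 3: sum of squares divided by n - 1, then square root -> standard deviation
    sd = math.sqrt(sum(squared) / (n - 1))

    # Step 4: divide by sqrt(n) -> standard error
    se = sd / math.sqrt(n)

    # Step 5: mean ± 1 SE range for error bars
    lower, upper = mean - se, mean + se
    print(f"mean = {mean:.3f}, SD = {sd:.3f}, SE = {se:.3f}, range = ({lower:.3f}, {upper:.3f})")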

