Discrepancy Statistics

What are Discrepancy Statistics in Machine Learning?

Discrepancy statistics quantify how closely a machine learning model's implied structure matches the observed data. They are usually expressed through a discrepancy function, where larger values indicate a poorer fit to the data and a value of zero indicates a perfect fit. In most cases, a model's parameter estimates are chosen so that the discrepancy function is as small as possible.

In algebraic terms, a discrepancy function is a continuous function of the elements of S, the sample covariance matrix, and of Σ(θ), the covariance matrix reproduced from the parameter estimates and the structural model.
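As a rough illustration, here is a minimal sketch of a discrepancy function in Python, using the ordinary least squares form F_OLS = ½ tr[(S − Σ(θ))²]; the 2×2 matrices are hypothetical placeholders, not output from any particular model.

```python
import numpy as np

def ols_discrepancy(S, Sigma):
    """Ordinary least squares discrepancy: 0.5 * tr[(S - Sigma)^2].

    Returns 0 when the model-implied covariance matrix Sigma reproduces
    the sample covariance matrix S exactly; larger values mean worse fit.
    """
    diff = S - Sigma
    return 0.5 * np.trace(diff @ diff)

# Hypothetical 2x2 sample and model-implied covariance matrices.
S = np.array([[1.0, 0.4],
              [0.4, 1.0]])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 1.0]])

print(ols_discrepancy(S, Sigma))  # 0.01 -- small but nonzero discrepancy
```

Because the OLS discrepancy depends only on raw differences between the two matrices, it is simple to compute but is not scale-invariant; the weighted forms listed below address this.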

Types of Discrepancy Functions:

There are several common discrepancy functions used in machine learning (the maximum likelihood and generalized least squares forms are sketched in code after this list):

  • Maximum likelihood (ML)
  • Generalized least squares (GLS)
  • Ordinary least squares (OLS)
  • Normal theory weighted least squares (NTWLS)
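The sketch below shows the maximum likelihood and generalized least squares forms, following the standard normal-theory definitions F_ML = ln|Σ(θ)| − ln|S| + tr(S Σ(θ)⁻¹) − p and F_GLS = ½ tr{[(S − Σ(θ)) S⁻¹]²}; the covariance matrices in the usage example are hypothetical.

```python
import numpy as np

def ml_discrepancy(S, Sigma):
    """Maximum likelihood discrepancy (normal theory):
    F_ML = ln|Sigma| - ln|S| + tr(S Sigma^{-1}) - p,
    where p is the number of observed variables.
    Equals 0 when Sigma reproduces S exactly."""
    p = S.shape[0]
    _, logdet_sigma = np.linalg.slogdet(Sigma)
    _, logdet_s = np.linalg.slogdet(S)
    return logdet_sigma - logdet_s + np.trace(S @ np.linalg.inv(Sigma)) - p

def gls_discrepancy(S, Sigma):
    """Generalized least squares discrepancy:
    F_GLS = 0.5 * tr{[(S - Sigma) S^{-1}]^2}.
    Weights the residual matrix by the inverse of the sample covariance."""
    resid = (S - Sigma) @ np.linalg.inv(S)
    return 0.5 * np.trace(resid @ resid)

# Hypothetical sample and model-implied covariance matrices.
S = np.array([[1.0, 0.5],
              [0.5, 1.0]])
Sigma = np.array([[1.0, 0.4],
                  [0.4, 1.0]])

print(ml_discrepancy(S, Sigma))   # small positive value (imperfect fit)
print(gls_discrepancy(S, Sigma))  # small positive value (imperfect fit)
```

Both functions reach zero only when the model-implied matrix equals the sample matrix, so minimizing either one over θ yields parameter estimates with the best attainable fit under that criterion.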