A moment-matching metric for latent variable generative models
It can be difficult to assess the quality of a fitted model when facing unsupervised learning problems. Latent variable models, such as variational autoencoders and Gaussian mixture models, are often trained with likelihood-based approaches. In the spirit of Goodhart's law, when a measure becomes a target it ceases to be a good measure, so the likelihood used for training should not also be used to assess the quality of the fit of these models. The solution we propose is a new metric for model comparison or regularization that relies on moments. The idea is to compare the data moments with the model moments using a matrix norm, such as the Frobenius norm. We show how to use this new metric for model comparison and then for regularization. A common way to evaluate latent variable models is to draw samples from the fitted distribution; we show that our proposed metric is faster to compute and has a smaller variance than this alternative. We conclude this article with a proof of concept of both applications and we discuss future work.
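As an illustration of the idea, the sketch below computes a moment-matching discrepancy for a Gaussian mixture model in NumPy: the first and second sample moments of the data are compared against the mixture's closed-form moments, with the Frobenius norm applied to the second-moment matrices. The function names and the exact way the two moment discrepancies are combined are assumptions made here for illustration; the paper's metric may weight or aggregate the moments differently.

```python
import numpy as np

def data_moments(x):
    # Sample first moment E[x] (shape (d,)) and uncentered
    # second moment E[x x^T] (shape (d, d)) of the data.
    m1 = x.mean(axis=0)
    m2 = (x.T @ x) / x.shape[0]
    return m1, m2

def gmm_moments(weights, means, covs):
    # Closed-form moments of a Gaussian mixture:
    # E[x] = sum_k w_k mu_k
    # E[x x^T] = sum_k w_k (Sigma_k + mu_k mu_k^T)
    m1 = np.einsum("k,kd->d", weights, means)
    outer = np.einsum("kd,ke->kde", means, means)
    m2 = np.einsum("k,kde->de", weights, covs + outer)
    return m1, m2

def moment_metric(x, weights, means, covs):
    # Discrepancy between data and model moments: Euclidean norm for
    # the mean vectors, Frobenius norm for the second-moment matrices.
    # (Combining the two by a plain sum is an assumption of this sketch.)
    d1, d2 = data_moments(x)
    g1, g2 = gmm_moments(weights, means, covs)
    return np.linalg.norm(d1 - g1) + np.linalg.norm(d2 - g2, ord="fro")
```

Because the model moments are available in closed form, no sampling from the fitted distribution is needed, which is consistent with the claim that the metric is cheaper to compute and less variable than sample-based evaluation.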