On the failure of variational score matching for VAE models

10/24/2022
by   Li Kevin Wenliang, et al.

Score matching (SM) is a convenient method for training flexible probabilistic models, which is often preferred over the traditional maximum-likelihood (ML) approach. However, these models are less interpretable than normalized models; as such, training robustness is in general difficult to assess. We present a critical study of existing variational SM objectives, showing catastrophic failure on a wide range of datasets and network architectures. Our theoretical insights on the objectives emerge directly from their equivalent autoencoding losses when optimizing variational autoencoder (VAE) models. First, we show that in the Fisher autoencoder, SM produces far worse models than maximum-likelihood, and approximate inference by Fisher divergence can lead to low-density local optima. However, with important modifications, this objective reduces to a regularized autoencoding loss that resembles the evidence lower bound (ELBO). This analysis predicts that the modified SM algorithm should behave very similarly to ELBO on Gaussian VAEs. We then review two other FD-based objectives from the literature and show that they reduce to uninterpretable autoencoding losses, likely leading to poor performance. The experiments verify our theoretical predictions and suggest that only ELBO and the baseline objective robustly produce expected results, while previously proposed SM methods do not.
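As background for the abstract above, the two training criteria it contrasts can be written in their standard forms; the notation below (data density p_d, model p_θ, encoder q_φ) is generic and not taken from the paper itself. Score matching minimizes the Fisher divergence between the data and model score functions, while maximum-likelihood training of a VAE maximizes the evidence lower bound (ELBO):

\[
\mathrm{FD}\big(p_d \,\|\, p_\theta\big)
  = \tfrac{1}{2}\,\mathbb{E}_{p_d(x)}\!\left[ \big\| \nabla_x \log p_d(x) - \nabla_x \log p_\theta(x) \big\|^2 \right],
\]
\[
\mathrm{ELBO}(\theta, \phi; x)
  = \mathbb{E}_{q_\phi(z \mid x)}\!\left[ \log p_\theta(x \mid z) \right]
  - \mathrm{KL}\big( q_\phi(z \mid x) \,\|\, p(z) \big)
  \;\le\; \log p_\theta(x).
\]

For a latent-variable model p_θ(x) = ∫ p_θ(x | z) p(z) dz, the marginal score ∇_x log p_θ(x) is intractable, which is why the variational SM objectives studied in the paper approximate it; the abstract's claim is that these approximations can reduce to poorly behaved autoencoding losses.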


