VAE Approximation Error: ELBO and Conditional Independence

02/18/2021
by Dmitrij Schlesinger, et al.

The importance of Variational Autoencoders reaches far beyond standalone generative models – the approach is also used for learning latent representations and can be generalized to semi-supervised learning. This requires a thorough analysis of their commonly known shortcomings: posterior collapse and approximation errors. This paper analyzes VAE approximation errors caused by the combination of the ELBO objective with the choice of the encoder probability family, in particular under conditional independence assumptions. We identify the subclass of generative models that is consistent with the encoder family. We show that the ELBO optimizer is pulled away from the likelihood optimizer towards this consistent subset. Furthermore, this subset cannot be enlarged, and the respective error cannot be decreased, by merely using deeper encoder networks.
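For context, the objects discussed in the abstract can be written out explicitly. The notation below (encoder $q_\phi$, decoder $p_\theta$, latent coordinates $z_i$) follows standard VAE conventions and is not quoted from the paper; the ELBO and a conditionally independent encoder family read

\[
\mathcal{L}(\theta, \phi; x) \;=\; \mathbb{E}_{q_\phi(z \mid x)}\bigl[\log p_\theta(x \mid z)\bigr] \;-\; D_{\mathrm{KL}}\bigl(q_\phi(z \mid x)\,\|\,p(z)\bigr),
\qquad
q_\phi(z \mid x) \;=\; \prod_i q_\phi(z_i \mid x).
\]

By the standard identity $\log p_\theta(x) = \mathcal{L}(\theta, \phi; x) + D_{\mathrm{KL}}\bigl(q_\phi(z \mid x)\,\|\,p_\theta(z \mid x)\bigr)$, the gap between the ELBO and the log-likelihood is the KL divergence to the true posterior, which cannot vanish for a factorized $q_\phi$ unless the posterior itself is conditionally independent – the consistency notion the abstract refers to.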
