ControlVAE: Tuning, Analytical Properties, and Performance Analysis

10/31/2020
by Huajie Shao, et al.

This paper reviews the controllable variational autoencoder (ControlVAE), discusses how to tune its parameters to meet application needs, derives its key analytical properties, and offers useful extensions and applications. ControlVAE is a variational autoencoder (VAE) framework that combines automatic control theory with the basic VAE to stabilize the KL divergence of VAE models at a specified value. It leverages a nonlinear PI controller, a variant of the proportional-integral-derivative (PID) controller, to dynamically tune the weight of the KL-divergence term in the evidence lower bound (ELBO), using the output KL divergence as feedback. This allows the KL divergence to be controlled precisely to a desired value (set point), which is effective for avoiding posterior collapse and for learning disentangled representations. To improve the ELBO over the regular VAE, we provide a simplified theoretical analysis that informs how to choose the KL-divergence set point for ControlVAE. We observe that, compared with other methods that seek to balance the two terms in the VAE objective, ControlVAE exhibits better learning dynamics; in particular, it achieves a good trade-off between reconstruction quality and KL divergence. We evaluate the proposed method on three tasks: image generation, language modeling, and disentangled representation learning. The results show that ControlVAE achieves much better reconstruction quality than the other methods for comparable disentanglement. On the language modeling task, ControlVAE avoids posterior collapse (KL vanishing) and improves the diversity of generated text. Moreover, our method changes the optimization trajectory, improving both the ELBO and the reconstruction quality for image generation.
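The feedback loop described above can be sketched in code. The following is a minimal illustration of a nonlinear PI controller that adjusts the KL weight (beta) from the observed KL divergence; the sigmoidal proportional term, the gains, the anti-windup clamping, and all constants here are illustrative assumptions, not the paper's exact hyperparameters.

```python
import math


class PIController:
    """Sketch of a nonlinear PI controller for the KL weight (beta).

    Illustrative only: the sigmoidal P term, gains, and clamping
    are assumptions, not the paper's exact design.
    """

    def __init__(self, kl_setpoint, kp=0.01, ki=0.0001,
                 beta_min=0.0, beta_max=1.0):
        self.kl_setpoint = kl_setpoint  # desired KL divergence (set point)
        self.kp = kp                    # proportional gain
        self.ki = ki                    # integral gain
        self.beta_min = beta_min
        self.beta_max = beta_max
        self.i_term = 0.0               # accumulated integral contribution

    def step(self, kl_observed):
        """Return the KL weight for the next training step."""
        # Positive error: observed KL is below the set point.
        error = self.kl_setpoint - kl_observed
        # Nonlinear P term: a sigmoid keeps the proportional
        # response bounded in (0, kp).
        p_term = self.kp / (1.0 + math.exp(error))
        beta = p_term - self.i_term + self.beta_min
        # Accumulate the integral only while beta is in range
        # (a simple anti-windup heuristic).
        if self.beta_min <= beta <= self.beta_max:
            self.i_term += self.ki * error
        return min(max(beta, self.beta_min), self.beta_max)
```

In training, one would compute the minibatch KL divergence each step, call `step` to obtain beta, and weight the KL term in the ELBO by it: when KL sits above the set point, beta rises to penalize it more; when KL falls below, beta shrinks, favoring reconstruction.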

Related research

04/13/2020 - Controllable Variational Autoencoder
Variational Autoencoders (VAE) and their variants have been widely used ...

04/27/2020 - A Batch Normalized Inference Network Keeps the KL Vanishing Away
Variational Autoencoder (VAE) is widely used as a generative model to ap...

09/15/2020 - Challenging β-VAE with β < 1 for Disentanglement Via Dynamic Learning
This paper challenges the common assumption that the weight of β-VAE sho...

01/01/2023 - eVAE: Evolutionary Variational Autoencoder
The surrogate loss of variational autoencoders (VAEs) poses various chal...

12/24/2020 - Soft-IntroVAE: Analyzing and Improving the Introspective Variational Autoencoder
The recently introduced introspective variational autoencoder (IntroVAE)...

09/13/2019 - ρ-VAE: Autoregressive parametrization of the VAE encoder
We make a minimal, but very effective alteration to the VAE model. This ...

06/28/2022 - AS-IntroVAE: Adversarial Similarity Distance Makes Robust IntroVAE
Recently, introspective models like IntroVAE and S-IntroVAE have excelle...