The Information Autoencoding Family: A Lagrangian Perspective on Latent Variable Generative Models
A variety of learning objectives have been proposed for training latent variable generative models. We show that many of them, including InfoGAN, ALI/BiGAN, ALICE, CycleGAN, beta-VAE, adversarial autoencoders, AVB, AS-VAE, and InfoVAE, are Lagrangian duals of the same primal optimization problem, corresponding to different settings of the Lagrange multipliers. The primal problem optimizes the mutual information between latent and visible variables, subject to the constraints of accurately modeling the data distribution and performing correct amortized inference. Based on this observation, we provide an exhaustive characterization of the statistical and computational trade-offs made by each training objective in this class of Lagrangian duals. We then propose a dual optimization method that optimizes the model parameters jointly with the Lagrange multipliers. This method achieves near-Pareto-optimal solutions in terms of optimizing information and satisfying the consistency constraints.
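As a sketch of the framework described above, one way to write such a primal problem and its Lagrangian is the following; the notation (the mutual information I_q(x; z) under the inference distribution, generic divergences D_1 and D_2, and multipliers lambda_1, lambda_2) is assumed here for illustration and is not quoted from the paper:

\[
\begin{aligned}
\max_{\theta,\phi}\quad & I_q(x; z) \\
\text{s.t.}\quad & D_1\bigl(p_\theta(x) \,\|\, q(x)\bigr) = 0, \\
                 & D_2\bigl(q_\phi(z \mid x) \,\|\, p_\theta(z \mid x)\bigr) = 0,
\end{aligned}
\]

with Lagrangian

\[
\mathcal{L}(\theta, \phi; \lambda_1, \lambda_2)
  = -\,I_q(x; z)
  + \lambda_1\, D_1\bigl(p_\theta(x) \,\|\, q(x)\bigr)
  + \lambda_2\, D_2\bigl(q_\phi(z \mid x) \,\|\, p_\theta(z \mid x)\bigr).
\]

Under this reading, a fixed choice of the multipliers (and of the divergences D_1, D_2) recovers one member of the objective family, which is what makes the different published objectives comparable within a single primal problem.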
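The dual optimization method can be illustrated with a minimal PyTorch-style sketch: gradient descent on the model parameters and projected gradient ascent on the Lagrange multipliers. Everything below (the toy objectives mutual_info and divergences, the parameter names, the learning rates) is a hypothetical stand-in for intuition, not the paper's actual estimators or code.

import torch

torch.manual_seed(0)

theta = torch.zeros(2, requires_grad=True)   # stand-in for model/inference parameters
lambdas = torch.ones(2)                      # one multiplier per consistency constraint

opt_theta = torch.optim.SGD([theta], lr=0.1)
lr_lambda = 0.05

def mutual_info(theta):
    # Toy differentiable stand-in for an I_q(x; z) estimate.
    return -0.5 * (theta ** 2).sum()

def divergences(theta):
    # Toy stand-ins for the two consistency divergences D_1 and D_2;
    # both are nonnegative and vanish at theta = (1, -1).
    return torch.stack([(theta[0] - 1.0) ** 2, (theta[1] + 1.0) ** 2])

for step in range(200):
    d = divergences(theta)
    # Lagrangian: reward information, penalize constraint violations.
    lagrangian = -mutual_info(theta) + (lambdas * d).sum()

    opt_theta.zero_grad()
    lagrangian.backward()
    opt_theta.step()                          # descend on model parameters

    with torch.no_grad():
        # Ascend on the multipliers: dL/d(lambda_i) = D_i, so each
        # multiplier grows while its constraint is still violated.
        lambdas += lr_lambda * divergences(theta)
        lambdas.clamp_(min=0.0)               # multipliers stay nonnegative

print(theta.detach(), lambdas)

In this toy run the multipliers keep increasing until the constraint terms are driven toward zero, so the solution trades information against consistency automatically rather than through a hand-tuned fixed weight, which is the intuition behind optimizing the multipliers alongside the model parameters.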