Auto-regressive Image Synthesis with Integrated Quantization

by Fangneng Zhan, et al.
Max Planck Society
Nanyang Technological University

Deep generative models have achieved conspicuous progress in realistic image synthesis with multifarious conditional inputs, while generating diverse yet high-fidelity images remains a grand challenge in conditional image generation. This paper presents a versatile framework for conditional image generation that incorporates the inductive bias of CNNs and the powerful sequence modeling of auto-regression, which naturally leads to diverse image generation. Instead of independently quantizing the features of multiple domains as in prior research, we design an integrated quantization scheme with a variational regularizer that mingles the feature discretization across domains and markedly boosts auto-regressive modeling performance. Notably, the variational regularizer makes it possible to regularize feature distributions in incomparable latent spaces by penalizing the intra-domain variations of distributions. In addition, we design a Gumbel sampling strategy that allows distribution uncertainty to be incorporated into the auto-regressive training procedure. The Gumbel sampling substantially mitigates exposure bias, which often incurs misalignment between the training and inference stages and severely impairs inference performance. Extensive experiments over multiple conditional image generation tasks show that our method achieves superior diverse image generation performance, both qualitatively and quantitatively, compared with the state-of-the-art.
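The abstract does not spell out the sampling mechanism, but Gumbel-based sampling over discrete codes is typically built on the Gumbel-max trick: adding Gumbel noise to the logits before taking an argmax yields an exact sample from the softmax distribution, so training sees sampled (uncertain) tokens rather than only the greedy, teacher-forced choice. A minimal illustrative sketch (function name and temperature parameter are assumptions, not the paper's code):

```python
import numpy as np

def gumbel_sample(logits, temperature=1.0, rng=None):
    """Draw a discrete code index from `logits` via the Gumbel-max trick.

    argmax(logits) would always pick the single most likely code (the
    greedy choice used under teacher forcing); adding Gumbel noise makes
    the argmax an exact sample from softmax(logits / temperature),
    injecting distribution uncertainty into training.
    """
    rng = np.random.default_rng() if rng is None else rng
    logits = np.asarray(logits, dtype=float)
    # Gumbel(0, 1) noise: -log(-log(U)) with U ~ Uniform(0, 1).
    u = rng.uniform(low=1e-10, high=1.0, size=logits.shape)
    gumbel_noise = -np.log(-np.log(u))
    return int(np.argmax(logits / temperature + gumbel_noise))
```

Lower temperatures concentrate the samples on the highest-logit codes; at high temperature the draws approach uniform over the codebook.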



