Global Context with Discrete Diffusion in Vector Quantised Modelling for Image Generation

12/03/2021
by Minghui Hu, et al.

The integration of Vector Quantised Variational AutoEncoder (VQ-VAE) with autoregressive models as the generation component has yielded high-quality results on image generation. However, autoregressive models strictly follow a progressive scanning order during the sampling phase, so existing VQ-series models struggle to escape the trap of lacking global information. Denoising Diffusion Probabilistic Models (DDPM) in the continuous domain have shown the capability to capture global context while generating high-quality images. In the discrete state space, some works have demonstrated the potential to perform text generation and low-resolution image generation. We show that, with the help of a content-rich discrete visual codebook from VQ-VAE, the discrete diffusion model can also generate high-fidelity images with global context, which compensates for the deficiency of the classical autoregressive model along pixel space. Moreover, integrating the discrete VAE with the diffusion model resolves two drawbacks: conventional autoregressive models are oversized, and diffusion models demand excessive time in the sampling process when generating images. We find that the quality of the generated images is heavily dependent on the discrete visual codebook. Extensive experiments demonstrate that the proposed Vector Quantised Discrete Diffusion Model (VQ-DDM) achieves performance comparable to top-tier methods at low complexity. It also shows outstanding advantages over other vector-quantised autoregressive models on image inpainting tasks without additional training.
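To make the idea concrete, here is a minimal sketch of the forward corruption process that discrete diffusion models apply to VQ-VAE token indices. This is an illustrative uniform-transition variant, not the paper's exact transition matrix; the function name `corrupt_tokens` and its parameters are assumptions made for this example.

```python
import random

def corrupt_tokens(tokens, beta_t, codebook_size, rng=None):
    """One forward diffusion step in a discrete state space.

    Each VQ token index is resampled uniformly from the codebook with
    probability beta_t, and kept unchanged otherwise. Repeating this step
    drives the token grid toward uniform noise, which the learned reverse
    process then denoises back into a coherent image layout.
    This is a hedged sketch of a uniform-transition discrete diffusion step,
    not the exact kernel used by VQ-DDM.
    """
    rng = rng or random.Random(0)
    out = []
    for tok in tokens:
        if rng.random() < beta_t:
            out.append(rng.randrange(codebook_size))  # jump to a random code
        else:
            out.append(tok)  # keep the original token
    return out

# Example: corrupt a flattened 1x5 grid of codebook indices.
tokens = [5, 12, 7, 0, 3]
noisy = corrupt_tokens(tokens, beta_t=0.5, codebook_size=16)
```

Because every position is corrupted independently, the reverse (denoising) model can attend to the entire token grid at each step; this is the source of the global context that a raster-order autoregressive sampler lacks.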


