Generating Images with Sparse Representations

03/05/2021
by Charlie Nash, et al.

The high dimensionality of images presents architecture and sampling-efficiency challenges for likelihood-based generative models. Previous approaches such as VQ-VAE use deep autoencoders to obtain compact representations, which are more practical as inputs for likelihood-based models. We present an alternative approach, inspired by common image compression methods like JPEG: we convert images to quantized discrete cosine transform (DCT) blocks, which are represented sparsely as a sequence of (DCT channel, spatial location, DCT coefficient) triples. We propose a Transformer-based autoregressive architecture, which is trained to sequentially predict the conditional distribution of the next element in such sequences, and which scales effectively to high-resolution images. On a range of image datasets, we demonstrate that our approach can generate high-quality, diverse images, with sample metric scores competitive with state-of-the-art methods. We additionally show that simple modifications to our method yield effective image colorization and super-resolution models.
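The sparse DCT-triple representation described above can be sketched as follows. This is a minimal illustration, not the paper's exact pipeline: the uniform quantization step `q`, the row-major channel ordering, and the helper names are assumptions (the paper follows JPEG-style quantization, which uses per-frequency quantization tables, and orders the triples for autoregressive modeling).

```python
import numpy as np

BLOCK = 8  # JPEG-style 8x8 blocks

def dct_matrix(n=BLOCK):
    """Orthonormal (type-II) DCT matrix; the 2D transform is M @ patch @ M.T."""
    i = np.arange(n)
    M = np.cos(np.pi * np.outer(i, i + 0.5) / n)  # M[k, i] = cos(pi*(i+0.5)*k/n)
    M[0] *= np.sqrt(1.0 / n)
    M[1:] *= np.sqrt(2.0 / n)
    return M

def image_to_dct_triples(img, q=16):
    """Sketch: convert a grayscale image (H, W) into a sparse sequence of
    (dct_channel, block_position, quantized_coefficient) triples, keeping
    only nonzero quantized coefficients. `q` is a hypothetical uniform
    quantization step; JPEG proper uses per-frequency tables."""
    D = dct_matrix()
    h, w = img.shape
    nbx = w // BLOCK
    triples = []
    for by in range(h // BLOCK):
        for bx in range(nbx):
            patch = img[by*BLOCK:(by+1)*BLOCK, bx*BLOCK:(bx+1)*BLOCK]
            coeffs = D @ patch.astype(np.float64) @ D.T   # 2D DCT of the block
            qc = np.round(coeffs / q).astype(int).ravel() # quantize, flatten
            pos = by * nbx + bx                           # spatial block index
            for ch in np.flatnonzero(qc):                 # DCT channels 0..63
                triples.append((int(ch), pos, int(qc[ch])))
    return triples

img = np.arange(64, dtype=np.float64).reshape(8, 8)  # toy 8x8 gradient image
seq = image_to_dct_triples(img)
# seq[0] is the DC term of the only block: (channel 0, block 0, round(252/16)) == (0, 0, 16)
```

Because most quantized coefficients are zero, the resulting sequence is far shorter than the raw pixel grid, which is what makes autoregressive modeling over these triples tractable at high resolution.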


