Hierarchical Quantized Autoencoders

02/19/2020
by Will Williams et al.

Despite progress in training neural networks for lossy image compression, current approaches fail to maintain both perceptual quality and high-level features at very low bitrates. Encouraged by recent success in learning discrete representations with Vector Quantized Variational AutoEncoders (VQ-VAEs), we motivate the use of a hierarchy of VQ-VAEs to attain high factors of compression. We show that the combination of quantization and hierarchical latent structure aids likelihood-based image compression. This leads us to introduce a more probabilistic framing of the VQ-VAE, of which previous work is a limiting case. Our hierarchy produces a Markovian series of latent variables from which high-quality images can be reconstructed that retain semantically meaningful features. These latents can then be used to generate realistic samples. We provide qualitative and quantitative evaluations of reconstructions and samples on the CelebA and MNIST datasets.
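To make the mechanism concrete, below is a minimal PyTorch sketch of the two ingredients the abstract combines: a vector-quantization layer with a straight-through gradient estimator (as in the original VQ-VAE), stacked into a downsampling hierarchy so that each level's discrete latents depend only on the level below. This is an illustrative sketch, not the authors' implementation; the module names, level count, codebook size, and dimensions are all assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    """Nearest-neighbour vector quantization with a straight-through
    gradient estimator, in the spirit of the original VQ-VAE."""
    def __init__(self, num_codes=512, code_dim=64, beta=0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)
        self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)
        self.beta = beta  # commitment-loss weight

    def forward(self, z_e):
        # z_e: (batch, code_dim, H, W) continuous encoder output
        b, d, h, w = z_e.shape
        flat = z_e.permute(0, 2, 3, 1).reshape(-1, d)
        # Squared L2 distance from each latent vector to each code
        dists = (flat.pow(2).sum(1, keepdim=True)
                 - 2 * flat @ self.codebook.weight.t()
                 + self.codebook.weight.pow(2).sum(1))
        idx = dists.argmin(1)  # discrete code indices
        z_q = self.codebook(idx).view(b, h, w, d).permute(0, 3, 1, 2)
        # Codebook loss pulls codes toward encodings; commitment loss the reverse
        loss = (F.mse_loss(z_q, z_e.detach())
                + self.beta * F.mse_loss(z_e, z_q.detach()))
        # Straight-through estimator: gradients bypass the argmin
        z_q = z_e + (z_q - z_e).detach()
        return z_q, loss, idx.view(b, h, w)

class HierarchicalVQ(nn.Module):
    """Stack of downsampling encoders, each followed by a quantizer, so
    each level's discrete latents depend only on the level below: a
    Markovian series of increasingly compressed representations."""
    def __init__(self, levels=3, dim=64, num_codes=512):
        super().__init__()
        self.encoders = nn.ModuleList(
            nn.Conv2d(dim, dim, 4, stride=2, padding=1) for _ in range(levels))
        self.quantizers = nn.ModuleList(
            VectorQuantizer(num_codes, dim) for _ in range(levels))

    def forward(self, h):
        # h: (batch, dim, H, W) features from an initial image encoder
        codes, vq_loss = [], 0.0
        for enc, vq in zip(self.encoders, self.quantizers):
            h = enc(h)             # halve spatial resolution at each level
            h, loss, idx = vq(h)   # discretize this level's latents
            codes.append(idx)
            vq_loss = vq_loss + loss
        return codes, vq_loss      # per-level code maps + quantization loss

Each level halves the spatial resolution before quantizing, so deeper levels give higher compression factors while the chain structure keeps every latent conditioned only on its predecessor.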


