Improving Inference for Neural Image Compression

06/07/2020
by Yibo Yang, et al.

We consider the problem of lossy image compression with deep latent variable models. State-of-the-art methods build on hierarchical variational autoencoders (VAEs) and learn inference networks to predict a compressible latent representation of each data point. Drawing on the variational inference perspective on compression, we identify three approximation gaps which limit performance in the conventional approach: (i) an amortization gap, (ii) a discretization gap, and (iii) a marginalization gap. We propose improvements to each of these three shortcomings based on iterative inference, stochastic annealing for discrete optimization, and bits-back coding, resulting in the first application of bits-back coding to lossy compression. In our experiments, which include extensive baseline comparisons and ablation studies, we achieve new state-of-the-art performance on lossy image compression using an established VAE architecture, by changing only the inference method.
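To make two of these fixes concrete, below is a minimal sketch (not the authors' code) of per-image iterative inference combined with a Gumbel-softmax-style stochastic annealing of the rounding step, which is the family of relaxation the paper's annealing approach belongs to. The helper names `stochastic_round` and `refine_latents`, the `rd_loss` rate-distortion objective, and the learning rate and temperature schedule are all illustrative assumptions, not the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def stochastic_round(y, temperature):
    """Relax round(y) to a stochastic, differentiable choice between
    floor(y) and floor(y) + 1; temperature -> 0 recovers hard rounding."""
    floor_y = torch.floor(y)
    dist = y - floor_y  # distance to the integer below, in [0, 1)
    # Logits favor the nearer integer; Gumbel-softmax gives reparameterized
    # gradients through the (soft) one-hot sample over {floor, ceil}.
    logits = torch.stack([-dist, dist - 1.0], dim=-1)
    probs = F.gumbel_softmax(logits, tau=temperature, hard=False)
    candidates = torch.stack([floor_y, floor_y + 1.0], dim=-1)
    return (probs * candidates).sum(dim=-1)

def refine_latents(y_init, rd_loss, steps=2000, t0=0.5, decay=0.999):
    """Per-image iterative inference (hypothetical loop): gradient descent
    directly on the latents against a rate-distortion objective `rd_loss`
    (assumed given), annealing the rounding temperature so the continuous
    relaxation hardens into true quantization."""
    y = y_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([y], lr=5e-3)
    for step in range(steps):
        temperature = max(t0 * decay**step, 1e-3)
        loss = rd_loss(stochastic_round(y, temperature))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.round(y.detach())  # hard quantization for entropy coding
```

Starting `y_init` from the amortized encoder's prediction and then refining it this way targets the amortization gap, while the annealed rounding targets the discretization gap; the marginalization gap is addressed separately via bits-back coding.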

Related Research

Split Hierarchical Variational Compression (04/05/2022)
Variational autoencoders (VAEs) have witnessed great success in performi...

Practical Lossless Compression with Latent Variables using Bits Back Coding (01/15/2019)
Deep latent variable models have seen recent success in many data domain...

Bit-Swap: Recursive Bits-Back Coding for Lossless Compression with Hierarchical Latent Variables (05/16/2019)
The bits-back argument suggests that latent variable models can be turne...

Improving Lossless Compression Rates via Monte Carlo Bits-Back Coding (02/22/2021)
Latent variable models have been successfully applied in lossless compre...

Compressing Images by Encoding Their Latent Representations with Relative Entropy Coding (10/02/2020)
Variational Autoencoders (VAEs) have seen widespread use in learned imag...

HiLLoC: Lossless Image Compression with Hierarchical Latent Variable Models (12/20/2019)
We make the following striking observation: fully convolutional VAE mode...

Generalization Gap in Amortized Inference (05/23/2022)
The ability of likelihood-based probabilistic models to generalize to un...
