Exact Rate-Distortion in Autoencoders via Echo Noise

04/15/2019
by Rob Brekelmans, et al.

Compression is at the heart of effective representation learning. However, lossy compression is typically achieved through simple parametric models like Gaussian noise to preserve analytic tractability, and the limitations this imposes on learning are largely unexplored. Further, the Gaussian prior assumptions in models such as variational autoencoders (VAEs) provide only an upper bound on the compression rate in general. We introduce a new noise channel, Echo noise, that admits a simple, exact expression for mutual information for arbitrary input distributions. The noise is constructed in a data-driven fashion that does not require restrictive distributional assumptions. With its complex encoding mechanism and exact rate regularization, Echo leads to improved bounds on log-likelihood and dominates β-VAEs across the achievable range of rate-distortion trade-offs. Finally, we show that Echo noise can outperform state-of-the-art flow methods without the need to train complex distributional transformations.
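
To make the abstract's claim concrete, here is a minimal NumPy sketch of an Echo-style channel. This is an illustrative reconstruction, not the authors' reference implementation: the names echo_sample and echo_rate, the truncation depth n_iters, and the assumption that the diagonal scales lie in (0, 1) are all choices made here. The idea is that the channel outputs z = f(x) + S(x) * eps, where the noise eps is drawn from the channel's own output distribution by "echoing" the encoder over other training samples; for diagonal S, the mutual information is then exactly I(x; z) = -E_x[ sum_i log |s_i(x)| ], with no density model of the marginal required.

    import numpy as np

    def echo_sample(fx, sx, n_iters=3, rng=None):
        # fx: (batch, dim) encoder outputs f(x)
        # sx: (batch, dim) diagonal noise scales S(x); entries assumed
        #     in (0, 1) so the echoed noise converges (an assumption here).
        # The noise eps starts at zero and is built by repeatedly passing
        # *other* batch elements through the channel, truncating the
        # infinite echo after n_iters rounds.
        rng = np.random.default_rng() if rng is None else rng
        eps = np.zeros_like(fx)
        for _ in range(n_iters):
            perm = rng.permutation(fx.shape[0])  # noise comes from other samples
            eps = fx[perm] + sx[perm] * eps
        return fx + sx * eps  # z = f(x) + S(x) * eps

    def echo_rate(sx):
        # Exact rate for a diagonal-S Echo channel:
        # I(x; z) = -E_x[ sum_i log |s_i(x)| ]
        return -np.log(np.abs(sx)).sum(axis=1).mean()

    # Example: random "encoder outputs" for a batch of 8, dim 4.
    fx = np.random.default_rng(0).normal(size=(8, 4))
    sx = np.full((8, 4), 0.5)
    z = echo_sample(fx, sx)
    print(echo_rate(sx))  # 4 * log 2 ~= 2.77 nats

Because the rate term depends only on the learned scales S(x), it is exact for arbitrary input distributions, which is what lets Echo avoid the Gaussian prior assumptions the abstract criticizes.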

Related research

- Supermodular f-divergences and bounds on lossy compression and generalization error with mutual f-information (06/21/2022)
- Towards Empirical Sandwich Bounds on the Rate-Distortion Function (11/23/2021)
- Variational image compression with a scale hyperprior (02/01/2018)
- Information Theoretic Lower Bounds on Negative Log Likelihood (04/12/2019)
- On compression rate of quantum autoencoders: Control design, numerical and experimental realization (05/22/2020)
- Trading Information between Latents in Hierarchical Variational Autoencoders (02/09/2023)
- Noise Flow: Noise Modeling with Conditional Normalizing Flows (08/22/2019)
