
Exact Rate-Distortion in Autoencoders via Echo Noise

04/15/2019
by Rob Brekelmans, et al.

Compression is at the heart of effective representation learning. However, lossy compression is typically achieved through simple parametric models like Gaussian noise to preserve analytic tractability, and the limitations this imposes on learning are largely unexplored. Further, the Gaussian prior assumptions in models such as variational autoencoders (VAEs) provide only an upper bound on the compression rate in general. We introduce a new noise channel, Echo noise, that admits a simple, exact expression for mutual information for arbitrary input distributions. The noise is constructed in a data-driven fashion that does not require restrictive distributional assumptions. With its complex encoding mechanism and exact rate regularization, Echo leads to improved bounds on log-likelihood and dominates β-VAEs across the achievable range of rate-distortion trade-offs. Further, we show that Echo noise can outperform state-of-the-art flow methods without the need to train complex distributional transformations.
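To make the "data-driven" construction concrete, below is a minimal NumPy sketch of an Echo channel with a diagonal scale. It is not the authors' implementation (see the echo repository below for that); it assumes the channel form z = f(x) + S(x) * ε, where the noise ε is generated by recursively applying the channel to other training samples, so that ε shares the marginal distribution of z itself, and it uses the resulting exact rate I(x; z) = -E[log |det S(x)|]. All function and variable names here are illustrative.

```python
# Minimal Echo noise sketch (hypothetical helper names). Uses a diagonal
# scale s(x) with 0 < s_i(x) < 0.9 so the recursive "echo" series converges.
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W_f, W_s):
    """Toy deterministic encoder: latent mean f(x) and diagonal scale s(x)."""
    f = np.tanh(x @ W_f)                           # f(x): latent means
    s = 0.9 / (1.0 + np.exp(-(x @ W_s)))           # s(x) in (0, 0.9): contraction
    return f, s

def echo_sample(f, s, n_terms=30):
    """Build noise for each element from the *other* batch elements:
    eps_i = f(x_j1) + s(x_j1) * (f(x_j2) + s(x_j2) * (...)),
    truncating the infinite echo after n_terms (error ~ 0.9**n_terms)."""
    B, d = f.shape
    eps = np.zeros((B, d))
    for i in range(B):
        others = [j for j in range(B) if j != i]
        idx = rng.choice(others, size=n_terms, replace=False)
        acc = np.zeros(d)
        for j in reversed(idx):                    # innermost term first
            acc = f[j] + s[j] * acc
        eps[i] = acc
    return eps

def echo_channel(x, W_f, W_s):
    """z = f(x) + s(x) * eps, with exact rate I(x; z) = -E[log|det S(x)|]
    (= -sum_i log s_i(x) for a diagonal scale)."""
    f, s = encoder(x, W_f, W_s)
    z = f + s * echo_sample(f, s)
    rate = -np.log(s).sum(axis=1).mean()           # exact mutual information (nats)
    return z, rate

# Toy usage: batch of 64 inputs of dimension 8, latent dimension 4.
x = rng.standard_normal((64, 8))
W_f, W_s = rng.standard_normal((8, 4)), rng.standard_normal((8, 4))
z, rate = echo_channel(x, W_f, W_s)
print(z.shape, float(rate))
```

Under these assumptions, training would minimize distortion (e.g., reconstruction error) plus this rate term, giving a β-VAE-style objective in which the rate is exact rather than a variational upper bound.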

Related Research

06/21/2022
Supermodular f-divergences and bounds on lossy compression and generalization error with mutual f-information
In this paper, we introduce super-modular f-divergences and provide three...

11/23/2021
Towards Empirical Sandwich Bounds on the Rate-Distortion Function
Rate-distortion (R-D) function, a key quantity in information theory, ch...

04/12/2019
Information Theoretic Lower Bounds on Negative Log Likelihood
In this article we use rate-distortion theory, a branch of information t...

05/19/2022
Closing the gap: Exact maximum likelihood training of generative autoencoders using invertible layers
In this work, we provide an exact likelihood alternative to the variatio...

05/22/2020
On compression rate of quantum autoencoders: Control design, numerical and experimental realization
Quantum autoencoders which aim at compressing quantum information in a l...

02/09/2023
Trading Information between Latents in Hierarchical Variational Autoencoders
Variational Autoencoders (VAEs) were originally motivated (Kingma & We...

08/22/2019
Noise Flow: Noise Modeling with Conditional Normalizing Flows
Modeling and synthesizing image noise is an important aspect in many com...

Code Repositories

echo

Echo Noise Channel for Exact Mutual Information Calculation



invariance-tutorial

A tutorial on learned non-adversarial invariance in neural networks

