PIE: Pseudo-Invertible Encoder

10/31/2021
by Jan Jetze Beitler, et al.

We consider the problem of compressing information from high-dimensional data. While many studies address compression via non-invertible transformations, we emphasize the importance of invertible compression. We introduce a new class of likelihood-based autoencoders with a pseudo-bijective architecture, which we call Pseudo-Invertible Encoders, and provide a theoretical explanation of their principles. We evaluate the Gaussian Pseudo-Invertible Encoder on MNIST, where our model outperforms WAE and VAE in the sharpness of the generated images.
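The abstract rests on the change-of-variables principle behind likelihood-based invertible models: an exactly invertible transform admits both perfect reconstruction and a tractable log-likelihood via the Jacobian determinant. The sketch below illustrates that principle with a generic affine coupling step (in the style of RealNVP); it is a minimal assumption-laden illustration, not the authors' PIE architecture, and the toy linear "networks" `s` and `t` are hypothetical stand-ins.

```python
import numpy as np

# Illustration of the change-of-variables idea behind likelihood-based
# invertible models. This is NOT the PIE architecture from the paper;
# it is a generic affine coupling step with toy linear parameters.

rng = np.random.default_rng(0)
W_s = rng.normal(size=(2, 2)) * 0.1  # hypothetical parameters for s and t
W_t = rng.normal(size=(2, 2)) * 0.1

def s(x1):
    # log-scale function (a toy linear map here, a network in practice)
    return np.tanh(x1 @ W_s)

def t(x1):
    # translation function
    return x1 @ W_t

def forward(x):
    """Invertible coupling: transform half of x conditioned on the other half."""
    x1, x2 = x[:, :2], x[:, 2:]
    y2 = x2 * np.exp(s(x1)) + t(x1)
    log_det = s(x1).sum(axis=1)  # log |det Jacobian|, exact and cheap
    return np.concatenate([x1, y2], axis=1), log_det

def inverse(y):
    """Exact inverse: no information is lost, unlike a non-invertible encoder."""
    y1, y2 = y[:, :2], y[:, 2:]
    x2 = (y2 - t(y1)) * np.exp(-s(y1))
    return np.concatenate([y1, x2], axis=1)

def log_likelihood(x):
    """log p(x) = log p_Z(f(x)) + log |det J|, with a standard-normal prior on z."""
    z, log_det = forward(x)
    log_pz = -0.5 * (z ** 2).sum(axis=1) - 0.5 * z.shape[1] * np.log(2 * np.pi)
    return log_pz + log_det

x = rng.normal(size=(4, 4))
y, _ = forward(x)
x_rec = inverse(y)
print(np.allclose(x, x_rec))  # round trip is exact
```

The exact round trip is what distinguishes this family from ordinary (non-invertible) autoencoders, and the cheap log-determinant is what makes maximum-likelihood training tractable.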

