Learning Representations by Maximizing Compression

08/04/2011
by Karol Gregor, et al.

We give an algorithm that learns a representation of data through compression. The algorithm 1) predicts bits sequentially from those previously seen and 2) has a structure and computational cost similar to those of an autoencoder. The likelihood under the model can be calculated exactly, and arithmetic coding can be used directly for compression. When trained on digit images, the algorithm learns filters similar to those of restricted Boltzmann machines and denoising autoencoders. Independent samples can be drawn from the model by a single sweep through the pixels. The algorithm achieves good compression performance compared to other methods that operate under a random ordering of the pixels.
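To make the abstract's idea concrete, the following is a minimal sketch of sequential bit prediction over binary pixels: each pixel is assigned a probability conditioned on the pixels already seen, so the exact code length in bits is the negative log-likelihood, which an arithmetic coder could consume directly, and sampling is a single sweep through the pixels. The logistic predictor, the masked weight matrix `W`, and the raster-scan ordering are illustrative assumptions, not the autoencoder-like architecture described in the paper.

```python
# Minimal sketch of an autoregressive binary-pixel model (illustrative only,
# not the paper's architecture).
import numpy as np


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


class SequentialBitModel:
    def __init__(self, n_pixels, seed=0):
        rng = np.random.default_rng(seed)
        # Strictly lower-triangular weights: pixel i depends only on pixels j < i.
        self.W = np.tril(rng.normal(scale=0.01, size=(n_pixels, n_pixels)), k=-1)
        self.b = np.zeros(n_pixels)

    def probabilities(self, x):
        """Per-pixel P(x_i = 1 | x_<i) for a binary vector x."""
        return sigmoid(self.W @ x + self.b)

    def code_length_bits(self, x):
        """Exact code length of x under the model (what arithmetic coding would use)."""
        p = np.clip(self.probabilities(x), 1e-12, 1 - 1e-12)
        return -np.sum(x * np.log2(p) + (1 - x) * np.log2(1 - p))

    def sample(self, seed=None):
        """Draw an independent sample with a single sweep through the pixels."""
        rng = np.random.default_rng(seed)
        x = np.zeros(self.W.shape[0])
        for i in range(len(x)):
            p_i = sigmoid(self.W[i] @ x + self.b[i])
            x[i] = float(rng.random() < p_i)
        return x


# Usage: a 28x28 binary digit flattened to 784 pixels (random stand-in data).
model = SequentialBitModel(n_pixels=784)
digit = (np.random.default_rng(1).random(784) < 0.2).astype(float)
print(f"code length: {model.code_length_bits(digit):.1f} bits")
sample = model.sample(seed=2)
```

Because the conditional probabilities are exact, summing their negative base-2 logarithms gives the exact compressed size (up to the small overhead of arithmetic coding), which is what makes this kind of model directly usable for compression.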

