Associative Compression Networks for Representation Learning

04/06/2018
by Alex Graves, et al.

This paper introduces Associative Compression Networks (ACNs), a new framework for variational autoencoding with neural networks. The system differs from existing variational autoencoders (VAEs) in that the prior distribution used to model each code is conditioned on a similar code from the dataset. In compression terms this equates to sequentially transmitting the dataset using an ordering determined by proximity in latent space. Since the prior need only account for local, rather than global variations in the latent space, the coding cost is greatly reduced, leading to rich, informative codes. Crucially, the codes remain informative when powerful, autoregressive decoders are used, which we argue is fundamentally difficult with normal VAEs. Experimental results on MNIST, CIFAR-10, ImageNet and CelebA show that ACNs discover high-level latent features such as object class, writing style, pose and facial expression, which can be used to cluster and classify the data, as well as to generate diverse and convincing samples. We conclude that ACNs are a promising new direction for representation learning: one that steps away from IID modelling, and towards learning a structured description of the dataset as a whole.
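The neighbour-conditioned prior is the core mechanism, so a minimal sketch may help make it concrete. The Python/PyTorch fragment below is a hypothetical illustration, not the authors' implementation: it assumes diagonal-Gaussian posteriors and priors, and the names (PriorNet, nearest_neighbour_codes, acn_kl) are invented for this example. A prior network predicts a distribution over each code conditioned on that code's nearest neighbour in a bank of stored dataset codes, and the resulting KL term plays the role of the coding cost described above.

```python
import torch
import torch.nn as nn

class PriorNet(nn.Module):
    """Predicts mean and log-variance of p(c | c_neighbour).
    (Hypothetical architecture, for illustration only.)"""
    def __init__(self, code_dim=16, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(code_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * code_dim),
        )

    def forward(self, c_nbr):
        mu, logvar = self.net(c_nbr).chunk(2, dim=-1)
        return mu, logvar

def nearest_neighbour_codes(codes, bank):
    """For each code in the batch, return the closest code in a
    memory bank of dataset codes (Euclidean distance in latent space)."""
    dists = torch.cdist(codes, bank)   # (batch, bank_size) pairwise distances
    return bank[dists.argmin(dim=1)]

def acn_kl(mu_q, logvar_q, mu_p, logvar_p):
    """KL divergence between the encoder's diagonal-Gaussian posterior
    and the neighbour-conditioned Gaussian prior: the ACN coding cost."""
    return 0.5 * (
        logvar_p - logvar_q
        + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
        - 1.0
    ).sum(dim=-1).mean()

# Usage sketch: encode a batch, look up neighbours, then score codes under
# the neighbour-conditioned prior instead of a fixed global prior.
prior = PriorNet(code_dim=16)
mu_q, logvar_q = torch.randn(32, 16), torch.zeros(32, 16)   # stand-in encoder output
bank = torch.randn(10_000, 16)                              # stored dataset codes
c_nbr = nearest_neighbour_codes(mu_q, bank)
mu_p, logvar_p = prior(c_nbr)
coding_cost = acn_kl(mu_q, logvar_q, mu_p, logvar_p)
```

In a full training loop, the code bank would hold the encoder's codes for the whole dataset (refreshed periodically as the encoder changes), the neighbour search would exclude each example's own code, and the decoder's reconstruction loss, possibly from a powerful autoregressive decoder, would be added to this KL term.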

