Recent Advances in Autoencoder-Based Representation Learning

12/12/2018
by Michael Tschannen, et al.

Learning useful representations with little or no supervision is a key challenge in artificial intelligence. We provide an in-depth review of recent advances in representation learning with a focus on autoencoder-based models. To organize these results we make use of meta-priors believed useful for downstream tasks, such as disentanglement and hierarchical organization of features. In particular, we uncover three main mechanisms to enforce such properties, namely (i) regularizing the (approximate or aggregate) posterior distribution, (ii) factorizing the encoding and decoding distributions, or (iii) introducing a structured prior distribution. While there are some promising results, implicit or explicit supervision remains a key enabler, and all current methods rely on strong inductive biases and modeling assumptions. Finally, we provide an analysis of autoencoder-based representation learning through the lens of rate-distortion theory and identify a clear tradeoff between the amount of prior knowledge available about the downstream task and how useful the representation is for that task.
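To make mechanism (i) concrete, the β-VAE objective is the canonical example of regularizing the approximate posterior: the KL term pulling the encoder's Gaussian posterior toward the standard-normal prior is up-weighted by a factor β > 1. The sketch below is illustrative only (the function name and squared-error reconstruction term are assumptions, not from the survey) and uses the closed-form KL between diagonal Gaussians:

```python
import numpy as np

def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
    """Illustrative beta-VAE objective: reconstruction error + beta * KL.

    The KL divergence between the diagonal-Gaussian approximate posterior
    N(mu, diag(exp(log_var))) and the standard-normal prior N(0, I) has the
    closed form 0.5 * sum(exp(log_var) + mu^2 - 1 - log_var). Choosing
    beta > 1 strengthens the pull toward the prior, i.e. mechanism (i):
    regularizing the approximate posterior.
    """
    recon = np.sum((x - x_recon) ** 2)  # squared-error reconstruction term
    kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)
    return recon + beta * kl

# Toy check: a posterior identical to the prior contributes zero KL,
# and perfect reconstruction contributes zero error.
x = np.zeros(4)
print(beta_vae_loss(x, x, mu=np.zeros(2), log_var=np.zeros(2)))  # 0.0
```

Raising β trades reconstruction fidelity (rate) for conformity to the prior, which is exactly the rate-distortion tradeoff the survey analyzes.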
