On Nonparametric Guidance for Learning Autoencoder Representations

02/08/2011
by Jasper Snoek, et al.

Unsupervised discovery of latent representations, in addition to being useful for density modeling, visualization and exploratory data analysis, is also increasingly important for learning features relevant to discriminative tasks. Autoencoders, in particular, have proven to be an effective way to learn latent codes that reflect meaningful variations in data. A continuing challenge, however, is guiding an autoencoder toward representations that are useful for particular tasks. A complementary challenge is to find codes that are invariant to irrelevant transformations of the data. The most common way of introducing such problem-specific guidance in autoencoders has been through the incorporation of a parametric component that ties the latent representation to the label information. In this work, we argue that a preferable approach relies instead on a nonparametric guidance mechanism. Conceptually, it ensures that there exists a function that can predict the label information, without explicitly instantiating that function. The superiority of this guidance mechanism is confirmed on two datasets. In particular, this approach is able to incorporate invariance information (lighting, elevation, etc.) from the small NORB object recognition dataset and yields state-of-the-art performance for a single layer, non-convolutional network.
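One natural way to realize "a function exists without instantiating it" is to marginalize over a family of predictors, e.g. by penalizing the latent codes with a Gaussian process negative log marginal likelihood of the labels. The sketch below is only an illustration of that idea, not the paper's actual model: it assumes an RBF kernel, a linear encoder/decoder, and a weighting parameter `lam`, all of which are hypothetical choices for this example.

```python
import numpy as np

def rbf_kernel(Z, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel between rows of the latent codes Z.
    sq = np.sum(Z**2, 1)[:, None] + np.sum(Z**2, 1)[None, :] - 2.0 * Z @ Z.T
    return variance * np.exp(-0.5 * sq / lengthscale**2)

def gp_nll(Z, y, noise=0.1):
    # Negative log marginal likelihood of labels y under a GP over codes Z.
    # Low values mean SOME smooth function of Z predicts y well, without
    # ever committing to a particular predictor (it is integrated out).
    K = rbf_kernel(Z) + noise * np.eye(len(y))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (0.5 * y @ alpha
            + np.sum(np.log(np.diag(L)))
            + 0.5 * len(y) * np.log(2.0 * np.pi))

def guided_objective(X, y, W_enc, W_dec, lam=1.0):
    # Autoencoder reconstruction loss plus the nonparametric guidance term.
    Z = np.tanh(X @ W_enc)            # latent codes
    X_hat = Z @ W_dec                 # reconstruction
    recon = np.mean((X - X_hat) ** 2)
    return recon + lam * gp_nll(Z, y)

# Tiny demo on synthetic data (hypothetical shapes, fixed seed).
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 5))
y = X @ rng.standard_normal(5)        # labels linearly related to inputs
W_enc = rng.standard_normal((5, 3))
W_dec = rng.standard_normal((3, 5))
obj = guided_objective(X, y, W_enc, W_dec)
```

In a parametric guidance scheme, the guidance term would instead be the loss of an explicitly trained label predictor on Z; here the Cholesky-based marginal likelihood replaces that predictor entirely.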


