Discovering Hidden Factors of Variation in Deep Networks

12/20/2014
by Brian Cheung, et al.

Deep learning has enjoyed a great deal of success because of its ability to learn useful features for tasks such as classification. But there has been less exploration of learning the factors of variation apart from the classification signal. By augmenting autoencoders with simple regularization terms during training, we demonstrate that standard deep architectures can discover and explicitly represent factors of variation beyond those relevant for categorization. We introduce a cross-covariance penalty (XCov) as a method to disentangle factors such as handwriting style for digits and subject identity in faces. We demonstrate this on the MNIST handwritten digit database, the Toronto Faces Database (TFD) and the Multi-PIE dataset by generating manipulated instances of the data. Furthermore, we demonstrate that these deep networks can extrapolate 'hidden' variation in the supervised signal.
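The XCov penalty described above decorrelates the supervised class predictions from the unsupervised latent units of the autoencoder by penalizing their squared cross-covariances over a minibatch. The following is a minimal NumPy sketch of that idea; the function name `xcov_penalty` and the exact batch-mean centering convention are illustrative assumptions, not the authors' released code.

```python
import numpy as np

def xcov_penalty(y, z):
    """Cross-covariance (XCov) penalty between two batches of activations.

    y: (N, K) class-prediction activations.
    z: (N, J) latent activations meant to capture remaining variation.
    Returns 0.5 * sum of squared cross-covariances; the penalty is zero
    when every y-unit is decorrelated from every z-unit over the batch.
    """
    n = y.shape[0]
    yc = y - y.mean(axis=0)       # center each y-unit over the batch
    zc = z - z.mean(axis=0)       # center each z-unit over the batch
    c = yc.T @ zc / n             # (K, J) cross-covariance matrix
    return 0.5 * np.sum(c ** 2)
```

Added to the usual reconstruction and classification losses with a small weight, this term pushes style-like information out of the label units and into the latent code, which is what enables the style-manipulated samples reported in the paper.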


