Towards efficient representation identification in supervised learning

04/10/2022
by Kartik Ahuja et al.

Humans have a remarkable ability to disentangle complex sensory inputs (e.g., images, text) into simple factors of variation (e.g., shape, color) without much supervision. This ability has inspired many works that attempt to answer the following question: how do we invert the data generation process to extract those factors with minimal or no supervision? Several works in the literature on non-linear independent component analysis have established the following negative result: without some knowledge of the data generation process or appropriate inductive biases, it is impossible to perform this inversion. In recent years, much progress has been made on disentanglement under structural assumptions, e.g., when we have access to auxiliary information that renders the factors of variation conditionally independent. However, existing work requires a large amount of auxiliary information; in supervised classification, for example, it prescribes that the number of label classes should be at least as large as the total dimension of all factors of variation. In this work, we depart from these assumptions and ask: a) How can we achieve disentanglement when the auxiliary information does not provide conditional independence over the factors of variation? b) Can we reduce the amount of auxiliary information required for disentanglement? For a class of models in which the auxiliary information does not ensure conditional independence, we show theoretically and experimentally that disentanglement (to a large extent) is possible even when the dimension of the auxiliary information is much smaller than the dimension of the true latent representation.
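To make the setting concrete, the sketch below simulates the kind of setup the abstract describes: latent factors z of dimension d are mixed non-linearly into observations x, while the only auxiliary information is a one-dimensional label y, so its dimension is far below d. A supervised encoder is fit on (x, y), and identification is scored by how well a linear map from the learned representation recovers the true latents. This is a minimal sketch under assumed choices (the random tanh mixing, the scikit-learn MLPRegressor encoder, the R² probe); it illustrates the evaluation protocol only and is not the paper's actual model or training objective.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# True latent factors: d-dimensional; the only auxiliary information is a scalar label.
n, d, obs_dim = 5000, 6, 20
z = rng.normal(size=(n, d))

# Unknown non-linear mixing z -> x (stand-in for the data generation process).
A = rng.normal(size=(d, 2 * d))
B = rng.normal(size=(2 * d, obs_dim))
x = np.tanh(z @ A) @ B

# Scalar label depending on the latents: auxiliary dimension 1 << d (illustrative assumption).
w = rng.normal(size=d)
y = z @ w

# Supervised encoder: fit an MLP on the label, then reuse its hidden layer
# as the learned representation (a purely illustrative choice of encoder).
mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=0)
mlp.fit(x, y)
hidden = np.maximum(0.0, x @ mlp.coefs_[0] + mlp.intercepts_[0])  # first ReLU layer

# Identification up to a linear transformation: regress the true latents on the
# learned representation and report the average R^2 across latent dimensions.
probe = LinearRegression().fit(hidden, z)
score = r2_score(z, probe.predict(hidden), multioutput="uniform_average")
print(f"linear identification score (R^2): {score:.3f}")
```

With a single scalar label, a plain supervised encoder like this one need not recover all d factors; the paper's question is under what conditions and with how little auxiliary information such a score can be driven high.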


Related research

10/29/2021
Properties from Mechanisms: An Equivariance Perspective on Identifiable Representation Learning
A key goal of unsupervised representation learning is "inverting" a data...

06/29/2021
An Image is Worth More Than a Thousand Words: Towards Disentanglement in the Wild
Unsupervised disentanglement has been shown to be theoretically impossib...

05/03/2019
Disentangling Factors of Variation Using Few Labels
Learning disentangled representations is considered a cornerstone proble...

12/15/2016
A Survey of Inductive Biases for Factorial Representation-Learning
With the resurgence of interest in neural networks, representation learn...

03/14/2020
Semi-supervised Disentanglement with Independent Vector Variational Autoencoders
We aim to separate the generative factors of data into two latent vector...

06/20/2022
Identifiability of deep generative models under mixture priors without auxiliary information
We prove identifiability of a broad class of deep latent variable models...
