A Survey of Inductive Biases for Factorial Representation-Learning

12/15/2016
by Karl Ridgeway

With the resurgence of interest in neural networks, representation learning has re-emerged as a central focus in artificial intelligence. Representation learning refers to the discovery of useful encodings of data that make domain-relevant information explicit. Factorial representations identify underlying independent causal factors of variation in data. A factorial representation is compact and faithful, makes the causal factors explicit, and facilitates human interpretation of data. Factorial representations support a variety of applications, including the generation of novel examples, indexing and search, novelty detection, and transfer learning. This article surveys various constraints that encourage a learning algorithm to discover factorial representations. I dichotomize the constraints in terms of unsupervised and supervised inductive bias. Unsupervised inductive biases exploit assumptions about the environment, such as the statistical distribution of factor coefficients, assumptions about the perturbations a factor should be invariant to (e.g., a representation of an object can be invariant to rotation, translation, or scaling), and assumptions about how factors are combined to synthesize an observation. Supervised inductive biases are constraints on the representation based on additional information connected to observations. Supervisory labels come in a variety of types, which vary in how strongly they constrain the representation, how many factors are labeled, how many observations are labeled, and whether or not the associations between the constraints and the factors they relate to are known. This survey brings together a wide variety of models that all touch on the problem of learning factorial representations and lays out a framework for comparing these models based on the strengths of the underlying supervised and unsupervised inductive biases.
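To make the flavor of an unsupervised inductive bias concrete, here is a minimal illustrative sketch (not code from the survey): one common assumption is that the underlying factors are statistically independent, and a cheap surrogate for that assumption is to penalize the off-diagonal entries of the empirical covariance of a batch of latent codes. The function name, shapes, and penalty form below are illustrative choices, not the survey's method.

```python
import numpy as np

def decorrelation_penalty(z):
    """Sum of squared off-diagonal covariances of latent codes.

    z: array of shape (batch_size, num_factors), e.g. encoder outputs.
    Minimizing this term pushes the factors toward pairwise decorrelation,
    a weak proxy for the independence assumption described in the abstract.
    """
    zc = z - z.mean(axis=0, keepdims=True)   # center each factor
    cov = zc.T @ zc / (z.shape[0] - 1)       # empirical covariance matrix
    off_diag = cov - np.diag(np.diag(cov))   # keep only cross-factor terms
    return np.sum(off_diag ** 2)

# Correlated codes incur a large penalty; independent codes approach zero.
rng = np.random.default_rng(0)
z = rng.normal(size=(256, 8))
print(decorrelation_penalty(z))
```

In practice a term like this would be added to a reconstruction loss, trading off faithfulness to the data against the independence prior on the factors.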


