Learning Bijective Feature Maps for Linear ICA

02/18/2020
by   Alexander Camuto, et al.

Separating high-dimensional data such as images into independent latent factors remains an open research problem. Here we develop a method that jointly learns a linear independent component analysis (ICA) model with non-linear bijective feature maps. By combining these two components, ICA can learn interpretable latent structure for images. For non-square ICA, where we assume the number of sources is less than the dimensionality of the data, we achieve better unsupervised latent factor discovery than flow-based models and linear ICA. This performance scales to large image datasets such as CelebA.
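The idea in the abstract can be sketched in code: map the data through a bijective feature map into a space where the mixing is (approximately) linear, then run ordinary linear ICA with fewer components than data dimensions. The sketch below is illustrative, not the paper's method: it substitutes a fixed elementwise bijection (a signed square root, a hypothetical stand-in for a learned flow) and uses scikit-learn's FastICA as the linear ICA step.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Hypothetical stand-in for a learned bijective feature map:
# an elementwise invertible nonlinearity and its exact inverse.
def feature_map(x):
    return np.sign(x) * np.sqrt(np.abs(x))

def feature_map_inv(z):
    return np.sign(z) * z ** 2

rng = np.random.default_rng(0)
n, d, k = 2000, 8, 3          # samples, data dim, sources (non-square ICA: k < d)
S = rng.laplace(size=(n, k))  # independent non-Gaussian sources
A = rng.normal(size=(k, d))   # mixing matrix
X = feature_map_inv(S @ A)    # observations: nonlinear bijection of a linear mixture

# Invert the nonlinearity, then recover sources with linear, non-square ICA.
Z = feature_map(X)
ica = FastICA(n_components=k, random_state=0, whiten="unit-variance")
S_hat = ica.fit_transform(Z)  # shape (2000, 3): sources up to permutation/scale
```

In the paper the feature map is itself learned jointly with the ICA model, rather than fixed in advance as here; the sketch only shows why a bijection reduces the non-linear problem to a linear one.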


Related research

05/26/2022 — Spatio-temporally separable non-linear latent factor learning: an application to somatomotor cortex fMRI data
Functional magnetic resonance imaging (fMRI) data contain complex spatio...

04/19/2019 — Causal Discovery with General Non-Linear Relationships Using Non-Linear ICA
We consider the problem of inferring causal relationships between two or...

05/30/2018 — On the Spectrum of Random Features Maps of High Dimensional Data
Random feature maps are ubiquitous in modern statistical machine learnin...

06/01/2011 — Sparse Non Gaussian Component Analysis by Semidefinite Programming
Sparse non-Gaussian component analysis (SNGCA) is an unsupervised method...

06/06/2013 — Diffusion map for clustering fMRI spatial maps extracted by independent component analysis
Functional magnetic resonance imaging (fMRI) produces data about activit...

10/13/2022 — On the Identifiability and Estimation of Causal Location-Scale Noise Models
We study the class of location-scale or heteroscedastic noise models (LS...

12/16/2015 — Streaming Kernel Principal Component Analysis
Kernel principal component analysis (KPCA) provides a concise set of bas...
