On the Convergence Rate of Gaussianization with Random Rotations

06/23/2023
by Felix Draxler, et al.

Gaussianization is a simple generative model that can be trained without backpropagation. It has shown compelling performance on low-dimensional data. As the dimension increases, however, it has been observed that the convergence speed slows down. We show analytically that the required number of layers scales linearly with the dimension for Gaussian input. We argue that this is because the model is unable to capture dependencies between dimensions. Empirically, we find the same linear increase in cost for arbitrary input distributions p(x), but observe favorable scaling for some distributions. We explore potential speed-ups and formulate challenges for further research.
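To make the setting concrete, below is a minimal sketch of one Gaussianization layer as commonly described in the literature: a random orthogonal rotation followed by a marginal (per-dimension) Gaussianization based on the empirical CDF. This is not the authors' implementation; the function name `gaussianization_step` and the rank-based marginal transform are illustrative assumptions. Stacking such layers and counting how many are needed before the data looks jointly Gaussian is the quantity whose scaling with dimension the paper analyzes.

```python
import numpy as np
from scipy.stats import norm

def gaussianization_step(x, rng):
    """One illustrative Gaussianization layer: a random rotation
    followed by per-dimension Gaussianization via the empirical CDF."""
    n, d = x.shape
    # Random orthogonal rotation (QR decomposition of a Gaussian matrix).
    q, _ = np.linalg.qr(rng.normal(size=(d, d)))
    x = x @ q
    # Marginal Gaussianization: map empirical ranks to standard normal quantiles.
    ranks = np.argsort(np.argsort(x, axis=0), axis=0) + 1
    u = ranks / (n + 1)          # empirical CDF values in (0, 1)
    return norm.ppf(u)           # inverse standard normal CDF

# Usage sketch: apply a stack of layers to correlated Gaussian input and
# observe how the number of layers needed grows with the dimension d.
rng = np.random.default_rng(0)
d = 8
x = rng.normal(size=(10_000, d)) @ rng.normal(size=(d, d))  # correlated Gaussian data
for _ in range(50):
    x = gaussianization_step(x, rng)
```

Because each layer only reshapes marginals after a rotation, dependencies between dimensions are removed only gradually, which is the intuition behind the linear-in-dimension layer count shown in the abstract.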
