Learning Polynomial Transformations

04/08/2022
by Sitan Chen, et al.

We consider the problem of learning high-dimensional polynomial transformations of Gaussians. Given samples of the form p(x), where x ∼ N(0, Id_r) is hidden and p: ℝ^r → ℝ^d is a function each of whose output coordinates is a low-degree polynomial, the goal is to learn the distribution over p(x). This problem is natural in its own right, but it is also an important special case of learning deep generative models, namely pushforwards of Gaussians under two-layer neural networks with polynomial activations. Understanding the learnability of such generative models is crucial to understanding why they perform so well in practice. Our first main result is a polynomial-time algorithm for learning quadratic transformations of Gaussians in a smoothed setting. Our second main result is a polynomial-time algorithm for learning constant-degree polynomial transformations of Gaussians in a smoothed setting, when the rank of the associated tensors is small. In fact, our results extend to any rotation-invariant input distribution, not just the Gaussian. These are the first end-to-end guarantees for learning a pushforward under a neural network with more than one layer. Along the way, we also give the first polynomial-time algorithms with provable guarantees for tensor ring decomposition, a popular generalization of tensor decomposition that is used in practice to implicitly store large tensors.
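To make the setup concrete, here is a minimal NumPy sketch, purely illustrative and not from the paper, of the two objects the abstract names: sampling y = p(x) from a quadratic transformation of a hidden Gaussian seed x, and reconstructing a tensor from a tensor ring representation (the format that lets large tensors be stored implicitly). All function names and parameter choices below are assumptions made for this sketch.

```python
import numpy as np

def sample_quadratic_pushforward(n_samples, r, d, seed=None):
    """Draw samples y = p(x), where x ~ N(0, I_r) is hidden and each
    output coordinate p_i(x) = x^T A_i x + b_i^T x + c_i is quadratic.
    The parameters A_i, b_i, c_i are random here purely for illustration."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((d, r, r))  # quadratic form A_i per coordinate
    b = rng.standard_normal((d, r))     # linear term b_i per coordinate
    c = rng.standard_normal(d)          # offset c_i per coordinate
    x = rng.standard_normal((n_samples, r))     # hidden Gaussian seeds
    quad = np.einsum('nr,irs,ns->ni', x, A, x)  # x^T A_i x for each sample
    return quad + x @ b.T + c                   # observed samples p(x)

def tensor_ring_reconstruct(cores):
    """Rebuild a full tensor from tensor ring cores G_k of shape
    (r_k, n_k, r_{k+1}), with the last bond dimension equal to the first:
    T[i_1, ..., i_N] = trace(G_1[:, i_1, :] @ ... @ G_N[:, i_N, :])."""
    result = cores[0]
    for core in cores[1:]:
        # Contract the trailing bond index of result with the leading one of core.
        result = np.einsum('a...b,bcd->a...cd', result, core)
    return np.einsum('a...a->...', result)  # trace over the matched bonds closes the ring

# The learner only ever sees these d-dimensional samples, never the seeds x.
samples = sample_quadratic_pushforward(n_samples=1000, r=5, d=20)
```

In the two-layer network view, each quadratic coordinate x^T A_i x + b_i^T x + c_i can be rewritten, via the eigendecomposition of A_i, as a combination of squared linear forms, i.e. a pushforward under a width-r network with square activations.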
