The Gaussian equivalence of generative models for learning with two-layer neural networks

06/25/2020 · by Sebastian Goldt, et al.

Understanding the impact of data structure on learning remains a key challenge for the theory of neural networks. Many theoretical works do not explicitly model training data, or assume that inputs are drawn independently from some factorised probability distribution. Here, we go beyond this simple i.i.d. modelling paradigm by studying neural networks trained on data drawn from structured generative models. We make three contributions: First, we establish rigorous conditions under which a class of generative models shares key statistical properties with an appropriately chosen Gaussian feature model. Second, we use this Gaussian equivalence theorem (GET) to derive a closed set of equations describing the dynamics of two-layer neural networks trained with one-pass stochastic gradient descent on data drawn from a large class of generators. Third, we complement our theoretical results with experiments demonstrating how the theory applies to deep, pre-trained generative models.
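To make the equivalence concrete, below is a minimal numerical sketch, not the paper's code: the dimensions, the sign activation of the generator, and the single random student weight vector are illustrative assumptions. It draws inputs from a one-layer generator x = sign(Ac) with random weights A and Gaussian latent codes c, builds Gaussian surrogate data with matched mean and covariance, and compares the statistics of the low-dimensional projection lambda = w.x/sqrt(N) that feeds a student network's pre-activations under the two data models.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: latent D, input N, number of samples P.
D, N, P = 50, 200, 10_000

# One-layer generative model x = sign(A c) with random weights
# (sign is an illustrative choice of generator activation).
A = rng.standard_normal((N, D)) / np.sqrt(D)
C = rng.standard_normal((P, D))
X = np.sign(C @ A.T)                      # generated inputs, shape (P, N)

# Gaussian surrogate: samples with the same mean and covariance as X.
mu = X.mean(axis=0)
cov = np.cov(X, rowvar=False)
X_gauss = rng.multivariate_normal(mu, cov, size=P)

# A student's pre-activation is the projection lambda = w.x / sqrt(N).
w = rng.standard_normal(N)
lam_gen = X @ w / np.sqrt(N)
lam_gauss = X_gauss @ w / np.sqrt(N)

print(f"generator data: mean {lam_gen.mean():+.3f}, var {lam_gen.var():.3f}")
print(f"gaussian data : mean {lam_gauss.mean():+.3f}, var {lam_gauss.var():.3f}")

The first two moments of the projections agree by construction of the surrogate; the nontrivial content of the GET is that, under the stated conditions, this matching suffices to reproduce the learning dynamics of the two-layer network trained on the generated data.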


Code Repositories

gaussian-equiv-2layer

Code and resources for "The Gaussian equivalence of generative models for learning with two-layer neural networks" [https://arxiv.org/abs/2006.14709]

