The committee machine: Computational to statistical gaps in learning a two-layers neural network

06/14/2018
by Benjamin Aubin, et al.

Heuristic tools from statistical physics have been used in the past to locate the phase transitions and compute the optimal learning and generalization errors in the teacher-student scenario in multi-layer neural networks. In this contribution, we provide a rigorous justification of these approaches for a two-layer neural network model called the committee machine. We also introduce a version of the approximate message passing (AMP) algorithm for the committee machine that allows optimal learning to be performed in polynomial time for a large set of parameters. We find that there are regimes in which a low generalization error is information-theoretically achievable while the AMP algorithm fails to deliver it, strongly suggesting that no efficient algorithm exists for those cases, and unveiling a large computational gap.
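
To make the teacher-student setup concrete, here is a minimal sketch (not the authors' code) of data generation for a committee machine and a Monte Carlo estimate of a student's generalization error. The concrete choices below are assumptions for illustration: Gaussian inputs, Gaussian teacher weights, sign activations with a majority-vote output, and the names n, K, and alpha. The AMP learning algorithm itself is not reproduced here.

```python
# Minimal sketch, assuming the standard committee machine setup:
# K sign units whose outputs are combined by a majority vote.
import numpy as np

rng = np.random.default_rng(0)

n, K, alpha = 1000, 3, 5.0      # input dimension, hidden units, samples per dimension (assumed values)
m = int(alpha * n)              # number of training samples

W_teacher = rng.standard_normal((K, n))   # teacher weights (ground truth)
X = rng.standard_normal((m, n))           # i.i.d. Gaussian inputs

def committee_output(W, X):
    """Majority vote over K sign units: y = sign(sum_k sign(W_k . x / sqrt(n)))."""
    hidden = np.sign(X @ W.T / np.sqrt(X.shape[1]))   # shape (m, K)
    return np.sign(hidden.sum(axis=1))

y = committee_output(W_teacher, X)        # labels produced by the teacher

def generalization_error(W_student, n_test=10000):
    """Fraction of fresh inputs on which student and teacher disagree."""
    X_test = rng.standard_normal((n_test, n))
    return np.mean(committee_output(W_student, X_test) !=
                   committee_output(W_teacher, X_test))

# A random student is uncorrelated with the teacher, so its error is close to 0.5.
print(generalization_error(rng.standard_normal((K, n))))
```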


Related research

03/29/2018 · Notes on computational-to-statistical gaps: predictions using statistical physics
In these notes we describe heuristics to predict computational-to-statis...

12/06/2018 · Rank-one matrix estimation: analysis of algorithmic and information theoretic limits by the spatial coupling method
Factorizing low-rank matrices is a problem with many applications in mac...

03/01/2015 · Phase Transitions in Sparse PCA
We study optimal estimation for sparse principal component analysis when...

08/10/2017 · Phase Transitions, Optimal Errors and Optimality of Message-Passing in Generalized Linear Models
We consider generalized linear models (GLMs) where an unknown n-dimensio...

01/24/2017 · Multi-Layer Generalized Linear Estimation
We consider the problem of reconstructing a signal from multi-layered (p...

07/11/2023 · Fundamental limits of overparametrized shallow neural networks for supervised learning
We carry out an information-theoretical analysis of a two-layer neural n...

05/10/2023 · Phase transitions in the mini-batch size for sparse and dense neural networks
The use of mini-batches of data in training artificial neural networks i...
