Optimal Learning of Deep Random Networks of Extensive-width

02/01/2023
by Hugo Cui et al.

We consider the problem of learning a target function corresponding to a deep, extensive-width, non-linear neural network with random Gaussian weights. We work in the asymptotic limit where the number of samples, the input dimension, and the network width are proportionally large. We derive a closed-form expression for the Bayes-optimal test error for both regression and classification tasks, and contrast it with the test errors of ridge regression, kernel regression, and random-features regression. We find, in particular, that optimally regularized ridge regression, as well as kernel regression, achieves Bayes-optimal performance, while the logistic loss yields a near-optimal test error for classification. We further show numerically that when the number of samples grows faster than the dimension, ridge and kernel methods become suboptimal, whereas neural networks achieve test error close to zero from quadratically many samples onward.
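As an illustration of this setup, the sketch below draws data from a randomly initialized deep tanh network of width proportional to the input dimension and fits ridge regression in the proportional regime n ∝ d. The depth, activation, width ratio, and regularization grid are illustrative assumptions, not choices fixed by the abstract, and selecting the ridge penalty by held-out error is only a stand-in for optimal regularization; this is not the paper's code.

```python
# Minimal numerical sketch of the setting described in the abstract.
# Assumed choices (not fixed by the abstract): depth 2, tanh activations,
# hidden width equal to the input dimension, small log-spaced ridge grid.
import numpy as np

rng = np.random.default_rng(0)

d, width, depth = 200, 200, 2      # input dim, extensive width ~ d, depth
n_train, n_test = 400, 2000       # samples proportional to d (alpha = 2)

# Random Gaussian target network: y = a^T phi(x), phi = nested tanh layers
Ws = [rng.normal(size=(width, d)) / np.sqrt(d)]
Ws += [rng.normal(size=(width, width)) / np.sqrt(width)
       for _ in range(depth - 1)]
a = rng.normal(size=width) / np.sqrt(width)

def target(X):
    h = X
    for W in Ws:
        h = np.tanh(h @ W.T)
    return h @ a

X_train = rng.normal(size=(n_train, d))
X_test = rng.normal(size=(n_test, d))
y_train, y_test = target(X_train), target(X_test)

# Ridge regression on the raw inputs; the regularization strength is
# picked on a grid by held-out error (illustrative tuning only).
best_err = np.inf
for lam in np.logspace(-4, 2, 13):
    w = np.linalg.solve(X_train.T @ X_train + lam * np.eye(d),
                        X_train.T @ y_train)
    err = np.mean((X_test @ w - y_test) ** 2)
    best_err = min(best_err, err)

print(f"best ridge test MSE: {best_err:.4f}, "
      f"target variance: {y_test.var():.4f}")
```

In this proportional regime (n of order d), the abstract states that optimally regularized ridge regression attains the Bayes-optimal error; its reported gap to neural networks only opens up when n grows faster than d, e.g. quadratically.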


Related research

- 06/11/2020 · Generalization error in high-dimensional perceptrons: Approaching Bayes error with convex optimization. We consider a commonly studied supervised classification of a synthetic ...
- 02/23/2021 · Classifying high-dimensional Gaussian mixtures: Where kernel methods fail and neural networks succeed. A recent series of theoretical works showed that the dynamics of neural ...
- 02/01/2023 · Deterministic equivalent and error universality of deep random features learning. This manuscript considers the problem of learning a random Gaussian netw...
- 02/26/2020 · The role of regularization in classification of high-dimensional noisy Gaussian mixture. We consider a high-dimensional mixture of two Gaussians in the noisy reg...
- 04/27/2019 · Linearized two-layers neural networks in high dimension. We consider the problem of learning an unknown function f_⋆ on the d-dime...
- 06/22/2023 · An Agnostic View on the Cost of Overfitting in (Kernel) Ridge Regression. We study the cost of overfitting in noisy kernel ridge regression (KRR),...
- 05/30/2022 · Precise Learning Curves and Higher-Order Scaling Limits for Dot Product Kernel Regression. As modern machine learning models continue to advance the computational ...
