Asymptotic Risk of Overparameterized Likelihood Models: Double Descent Theory for Deep Neural Networks

02/28/2021
by Ryumei Nakada, et al.

We investigate the asymptotic risk of a general class of overparameterized likelihood models, including deep models. The recent empirical success of large-scale models has motivated several theoretical studies of a scenario in which both the number of samples, n, and the number of parameters, p, diverge to infinity, and of the asymptotic risk in this limit. However, these results are valid only for linear-in-feature models, such as generalized linear regression, kernel regression, and shallow neural networks, which makes it difficult to analyze a wider class of nonlinear models, including deep neural networks with three or more layers. In this study, we consider a likelihood maximization problem without such model constraints and analyze an upper bound on the asymptotic risk of a penalized estimator. Technically, we combine a property of the Fisher information matrix with an extended Marchenko-Pastur law and couple this combination with empirical process techniques. The derived bound is general in that, depending on the penalization, it describes both the double descent curve and the regularized risk curve. Our results hold without linear-in-feature constraints on the model and allow us to derive general spectral distributions of the Fisher information matrix directly from the likelihood. We show that several explicit models, such as parallel deep neural networks, ensemble learning, and residual networks, are consistent with our theory. This result indicates that even large and deep models can have a small asymptotic risk if they exhibit a specific structure, such as divisibility. To verify this finding, we conduct a real-data experiment with parallel deep neural networks. Our results expand the applicability of asymptotic risk analysis and may also contribute to the understanding and application of deep learning.
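The abstract does not specify the architecture used in the parallel deep neural network experiment, so the following is only a minimal illustrative sketch of what a "parallel" (divisible) deep model could look like: several independent multi-layer branches applied to the same input, with their outputs averaged. The class name ParallelDNN and the hyperparameters (n_branches, hidden_dim, depth) are hypothetical and not taken from the paper.

```python
# Minimal, illustrative sketch of a "parallel" (divisible) deep network:
# several independent deep branches applied to the same input, with their
# outputs averaged. All names and hyperparameters here are hypothetical.
import torch
import torch.nn as nn


class ParallelDNN(nn.Module):
    def __init__(self, in_dim, out_dim, n_branches=4, hidden_dim=64, depth=3):
        super().__init__()

        def make_branch():
            layers = [nn.Linear(in_dim, hidden_dim), nn.ReLU()]
            for _ in range(depth - 1):
                layers += [nn.Linear(hidden_dim, hidden_dim), nn.ReLU()]
            layers.append(nn.Linear(hidden_dim, out_dim))
            return nn.Sequential(*layers)

        # Each branch is itself a deep (3+ layer) network; the branches
        # share no parameters, so the model splits into independent parts.
        self.branches = nn.ModuleList([make_branch() for _ in range(n_branches)])

    def forward(self, x):
        # Average the branch outputs; summing would work equally well here.
        return torch.stack([b(x) for b in self.branches]).mean(dim=0)


if __name__ == "__main__":
    model = ParallelDNN(in_dim=10, out_dim=1)
    x = torch.randn(8, 10)
    print(model(x).shape)  # torch.Size([8, 1])
```

Because the branches share no parameters, the model decomposes into independent sub-networks, which is one way to realize the kind of divisible structure the abstract mentions.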

Related research

08/21/2022
Multiple Descent in the Multiple Random Feature Model
Recent works have demonstrated a double descent phenomenon in over-param...

06/03/2022
Regularization-wise double descent: Why it occurs and how to eliminate it
The risk of overparameterized models, in particular deep neural networks...

02/07/2020
Spectrum Dependent Learning Curves in Kernel Regression and Wide Neural Networks
A fundamental question in modern machine learning is how deep neural net...

10/23/2018
A Continuous-Time View of Early Stopping for Least Squares Regression
We study the statistical properties of the iterates generated by gradien...

05/01/2021
Non-asymptotic Excess Risk Bounds for Classification with Deep Convolutional Neural Networks
In this paper, we consider the problem of binary classification with a c...

03/26/2021
Lower Bounds on the Generalization Error of Nonlinear Learning Models
We study in this paper lower bounds for the generalization error of mode...

02/14/2021
The Predictive Normalized Maximum Likelihood for Over-parameterized Linear Regression with Norm Constraint: Regret and Double Descent
A fundamental tenet of learning theory is that a trade-off exists betwee...
