Large-time asymptotics in deep learning

08/06/2020
by Carlos Esteve, et al.

It is by now well known that practical deep supervised learning may roughly be cast as an optimal control problem for a specific discrete-time, nonlinear dynamical system called an artificial neural network. In this work, we consider the continuous-time formulation of the deep supervised learning problem and study its behavior as the final time horizon increases, which in the neural network setting can be interpreted as increasing the number of layers.

For the classical regularized empirical risk minimization problem, we show that, in long time, the optimal states approach the zero training error regime, whilst the optimal control parameters converge, on an appropriate scale, to minimal-norm parameters whose corresponding states lie precisely in the zero training error regime. Seen from the large-layer perspective, this result provides an alternative theoretical underpinning for the notion that neural networks learn best in the overparametrized regime.

We also propose a learning problem consisting of minimizing a cost with a state tracking term, and establish the well-known turnpike property: the solutions of the learning problem over long time intervals consist of three pieces, the first and last of which are transient short-time arcs, while the middle piece is a long-time arc staying exponentially close to the optimal solution of an associated static learning problem. This property in fact yields a quantitative estimate for the number of layers required to reach the zero training error regime. Both of the aforementioned asymptotic regimes are addressed in the context of continuous-time and continuous space-time neural networks, the latter taking the form of nonlinear integro-differential equations, hence covering residual neural networks with both fixed and variable depths.
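For concreteness, the display below is a minimal LaTeX sketch of the two optimal control problems described in the abstract, assuming the standard residual-network parametrization commonly used in this line of work; the loss \ell, the output map P, and the parameter norm are illustrative placeholders, not necessarily the paper's exact choices.

% Sketch: continuous-time supervised learning as optimal control (requires amsmath).
% Training data \{(x_i^0, y_i)\}_{i=1}^N; stacked parameters u(t) = (w(t), a(t), b(t)).
\begin{align*}
  \dot{x}_i(t) &= w(t)\,\sigma\bigl(a(t)\,x_i(t) + b(t)\bigr),
    \qquad x_i(0) = x_i^0, \quad t \in (0, T), \\
  \text{(ERM)} \quad \min_{u}\; & \frac{1}{N}\sum_{i=1}^{N}
    \ell\bigl(P\,x_i(T),\, y_i\bigr)
    \;+\; \int_0^T \|u(t)\|^2 \,\mathrm{d}t, \\
  \text{(tracking)} \quad \min_{u}\; & \int_0^T \frac{1}{N}\sum_{i=1}^{N}
    \ell\bigl(P\,x_i(t),\, y_i\bigr)\,\mathrm{d}t
    \;+\; \int_0^T \|u(t)\|^2 \,\mathrm{d}t.
\end{align*}

In this notation, the first result concerns (ERM) as T grows, with suitably rescaled optimal parameters converging to minimal-norm parameters attaining zero training error, while the turnpike property concerns (tracking), whose optimal trajectories remain exponentially close to a static minimizer for most of the interval (0, T).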


Related research

02/26/2021 · Sparse approximation in learning via neural ODEs
We consider the continuous-time, neural ordinary differential equation (...

07/05/2020 · Depth-Adaptive Neural Networks from the Optimal Control viewpoint
In recent years, deep learning has been connected with optimal control a...

10/20/2022 · Neural ODEs as Feedback Policies for Nonlinear Optimal Control
Neural ordinary differential equations (Neural ODEs) model continuous ti...

08/28/2020 · Control On the Manifolds Of Mappings As a Setting For Deep Learning
We use a control-theoretic setting to model the process of training (dee...

02/27/2021 · Spline parameterization of neural network controls for deep learning
Based on the continuous interpretation of deep learning cast as an optim...

06/16/2020 · Neural Optimal Control for Representation Learning
The intriguing connections recently established between neural networks ...

09/17/2020 · A Principle of Least Action for the Training of Neural Networks
Neural networks have been achieving high generalization performance on m...
