On the Turnpike to Design of Deep Neural Nets: Explicit Depth Bounds

01/08/2021
by Timm Faulwasser et al.

It is well known that the training of deep neural networks (DNNs) can be formalized in the language of optimal control. In this context, this paper leverages classical turnpike properties of optimal control problems to give a quantifiable answer to the question of how many layers a DNN should have. The underlying assumption is that the number of neurons per layer, i.e., the width of the DNN, is kept constant. Pursuing a different route than the classical analysis of approximation properties of sigmoidal functions, we prove explicit bounds on the required depth of DNNs based on asymptotic reachability assumptions and a dissipativity-inducing choice of the regularization terms in the training problem. Numerical results obtained for the two-spiral classification data set indicate that the proposed estimates can provide non-conservative depth bounds.
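
To make the optimal-control reading of the abstract concrete, the following minimal sketch (not code from the paper) trains a constant-width residual network on a synthetic two-spiral data set, treating the layer index as time, the hidden activations as states, and the per-layer weights as controls; the per-layer regularization term plays the role of a stage cost of the kind the abstract associates with dissipativity. The architecture, step size h, depth, and regularization weight lam are illustrative assumptions, not values from the paper.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResNetOCP(nn.Module):
    """Residual network read as a discrete-time controlled system."""
    def __init__(self, width=2, depth=20, h=0.5, n_classes=2):
        super().__init__()
        self.h = h
        # one "control" (W_k, b_k) per layer k, with constant width
        self.layers = nn.ModuleList(nn.Linear(width, width) for _ in range(depth))
        self.readout = nn.Linear(width, n_classes)  # terminal output map

    def forward(self, x):
        # state recursion x_{k+1} = x_k + h * tanh(W_k x_k + b_k)
        for layer in self.layers:
            x = x + self.h * torch.tanh(layer(x))
        return self.readout(x)

    def stage_cost(self):
        # accumulated per-layer cost sum_k ||u_k||^2 ("control energy")
        return sum((p ** 2).sum() for layer in self.layers for p in layer.parameters())

def two_spirals(n=500, noise=0.1):
    # synthetic stand-in for the two-spiral classification data set
    t = torch.sqrt(torch.rand(n)) * 3.0 * math.pi
    arm = torch.stack([t * torch.cos(t), t * torch.sin(t)], dim=1)
    x = torch.cat([arm, -arm]) + noise * torch.randn(2 * n, 2)
    y = torch.cat([torch.zeros(n, dtype=torch.long), torch.ones(n, dtype=torch.long)])
    return x, y

x, y = two_spirals()
model = ResNetOCP(depth=20)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
lam = 1e-3  # regularization weight (assumed value)

for step in range(2000):
    optimizer.zero_grad()
    # terminal cost (data fit) + regularizing stage cost over the layers
    loss = F.cross_entropy(model(x), y) + lam * model.stage_cost()
    loss.backward()
    optimizer.step()
```

In the turnpike picture, for sufficiently deep networks the optimal parameters and hidden states of most intermediate layers stay close to a steady state, and this behaviour is what makes explicit depth estimates of the kind announced in the abstract possible.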
