A priori guarantees of finite-time convergence for Deep Neural Networks

09/16/2020
by Anushree Rankawat et al.

In this paper, we perform a Lyapunov-based analysis of the loss function to derive an a priori upper bound on the settling time of deep neural networks. While previous studies have attempted to understand deep learning through a control-theoretic framework, there is limited work on a priori finite-time convergence analysis. Drawing on advances in the finite-time control of nonlinear systems, we provide a priori guarantees of finite-time convergence in a deterministic control-theoretic setting. We formulate the supervised learning problem as a control problem in which the network weights are the control inputs and learning translates into a tracking problem. Under the assumption of bounded inputs, an analytical upper bound on the settling time is computed a priori. Finally, we prove the robustness and sensitivity of the loss function against input perturbations.
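As a rough illustration of the kind of a priori settling-time bound the abstract describes (a minimal sketch, not the paper's actual construction), consider the classical finite-time stability result for nonlinear systems: if a Lyapunov function V (here standing in for the loss) satisfies dV/dt <= -c * V**alpha with c > 0 and 0 < alpha < 1, then V reaches zero no later than T = V(0)**(1-alpha) / (c * (1-alpha)), a quantity computable in advance from V(0), c, and alpha alone. The constants and dynamics below are hypothetical examples chosen for the demonstration.

```python
def settling_time_bound(v0, c, alpha):
    """A priori settling-time bound for dV/dt <= -c * V**alpha, 0 < alpha < 1.

    Follows from integrating the comparison ODE: V hits zero by
    T = V0**(1 - alpha) / (c * (1 - alpha)).
    """
    return v0 ** (1 - alpha) / (c * (1 - alpha))


def simulate(v0, c, alpha, dt=1e-4, t_max=10.0):
    """Forward-Euler integration of dV/dt = -c * V**alpha.

    Returns the time at which V first drops (numerically) to zero.
    """
    v, t = v0, 0.0
    while v > 1e-9 and t < t_max:
        v = max(v - dt * c * v ** alpha, 0.0)  # clamp: V cannot go negative
        t += dt
    return t


# Hypothetical example: V0 = 4, c = 2, alpha = 0.5.
# Exact solution is V(t) = (2 - t)**2, which vanishes at t = 2 exactly,
# matching the a priori bound 4**0.5 / (2 * 0.5) = 2.
v0, c, alpha = 4.0, 2.0, 0.5
bound = settling_time_bound(v0, c, alpha)
t_hit = simulate(v0, c, alpha)
print(f"a priori bound: {bound:.3f}, simulated settling time: {t_hit:.3f}")
```

The point of the sketch is that the bound is evaluated before any simulation runs; the paper's contribution is establishing a differential inequality of this form for the loss of a deep network, with the weights acting as control inputs.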

