Robust Learning via Persistency of Excitation

06/03/2021
by Kaustubh Sridhar, et al.

Improving the adversarial robustness of neural networks remains a major challenge. Fundamentally, training a network is a parameter estimation problem. In adaptive control theory, maintaining persistency of excitation (PoE) is integral to ensuring that parameter estimates of dynamical systems converge to their robust optima. In this work, we show that training a network with gradient descent is equivalent to a dynamical-system parameter estimation problem. Leveraging this relationship, we prove that a sufficient condition for PoE of gradient descent is that the learning rate be less than the inverse of the Lipschitz constant of the gradient of the loss function. We provide an efficient technique for estimating the corresponding Lipschitz constant using extreme value theory and demonstrate that, by only scaling the learning rate schedule, we can increase adversarial accuracy by up to 15% on benchmark datasets. Our approach also universally increases adversarial accuracy by 0.1% on the AutoAttack benchmark, where every small margin of improvement is significant.
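As a rough illustration of the recipe the abstract describes, the sketch below estimates the Lipschitz constant of a loss gradient empirically and then caps the learning rate at a fraction of its inverse. This is not the paper's code: the toy quadratic loss, the sampling routine, and all names are assumptions for illustration only. The extreme-value step follows the standard idea of fitting a reverse Weibull distribution (scipy's weibull_max) to batch maxima of gradient-difference ratios, whose location parameter estimates the true supremum.

```python
# Hypothetical sketch: EVT-based Lipschitz estimation, then learning-rate scaling.
import numpy as np
from scipy.stats import weibull_max

rng = np.random.default_rng(0)

# Toy loss L(w) = 0.5 * w^T A w, so grad L(w) = A w and the true Lipschitz
# constant of the gradient is the largest eigenvalue of A, here 9.0.
A = np.diag([1.0, 4.0, 9.0])
grad = lambda w: A @ w

def batch_max_slope(n_pairs=200, radius=1.0):
    """Max of sampled ratios ||grad(u) - grad(v)|| / ||u - v|| over one batch."""
    best = 0.0
    for _ in range(n_pairs):
        u = rng.uniform(-radius, radius, size=3)
        v = rng.uniform(-radius, radius, size=3)
        best = max(best, np.linalg.norm(grad(u) - grad(v)) / np.linalg.norm(u - v))
    return best

# EVT step: maxima of a bounded quantity follow a reverse Weibull law whose
# location parameter (the right endpoint of its support) estimates the supremum.
maxima = np.array([batch_max_slope() for _ in range(50)])
shape, loc, scale = weibull_max.fit(maxima)
L_hat = loc  # estimated Lipschitz constant of the loss gradient

eta = 0.9 / L_hat  # keep the learning rate below 1 / L, per the PoE condition
print(f"L_hat = {L_hat:.2f}, learning rate = {eta:.4f}")
```

For a real network one would replace the toy gradient with backpropagated loss gradients at perturbed parameter vectors; the EVT fit is what keeps the number of samples manageable, since it extrapolates the supremum rather than exhausting the parameter space.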


Related research

03/07/2018 · WNGrad: Learn the Learning Rate in Gradient Descent
Adjusting the learning rate schedule in stochastic gradient methods is a...

05/28/2019 · Concavifiability and convergence: necessary and sufficient conditions for gradient descent analysis
Convergence of the gradient descent algorithm has been attracting renewe...

07/10/2020 · Learning Unstable Dynamical Systems with Time-Weighted Logarithmic Loss
When training the parameters of a linear dynamical model, the gradient d...

05/17/2021 · An SDE Framework for Adversarial Training, with Convergence and Robustness Analysis
Adversarial training has gained great popularity as one of the most effe...

05/18/2020 · Parsimonious Computing: A Minority Training Regime for Effective Prediction in Large Microarray Expression Data Sets
Rigorous mathematical investigation of learning rates used in back-propa...

08/03/2022 · Gradient descent provably escapes saddle points in the training of shallow ReLU networks
Dynamical systems theory has recently been applied in optimization to pr...

12/20/2021 · Adversarially Robust Stability Certificates can be Sample-Efficient
Motivated by bridging the simulation to reality gap in the context of sa...
