Robust Design of Deep Neural Networks against Adversarial Attacks based on Lyapunov Theory

11/12/2019
by Arash Rahnama, et al.

Deep neural networks (DNNs) are vulnerable to subtle adversarial perturbations applied to the input. These perturbations, though imperceptible to humans, can easily mislead a DNN. In this work, we take a control-theoretic approach to the problem of robustness in DNNs. We treat each individual layer of the DNN as a nonlinear dynamical system and use Lyapunov theory to prove stability and robustness locally; we then proceed to prove stability and robustness globally for the entire DNN. We develop empirically tight bounds on the response of the output layer, or any hidden layer, to adversarial perturbations added to the input, or to the input of hidden layers. Recent works have proposed spectral norm regularization as a solution for improving robustness against l2 adversarial attacks. Our results give new insights into how spectral norm regularization can mitigate adversarial effects. Finally, we evaluate our approach on a variety of data sets and network architectures and against several well-known adversarial attacks.
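The spectral norm regularization mentioned in the abstract penalizes the largest singular value of each layer's weight matrix, which bounds the layer's l2 Lipschitz constant and hence limits how much an input perturbation can be amplified. A minimal NumPy sketch of this idea is below; the power-iteration estimator is standard, but the penalty coefficient `lam` and the squared-norm form of the penalty are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def spectral_norm(W, n_iters=200, seed=0):
    """Estimate the largest singular value of W via power iteration."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=W.shape[1])
    for _ in range(n_iters):
        u = W @ v
        u /= np.linalg.norm(u)
        v = W.T @ u
        v /= np.linalg.norm(v)
    # u and v approximate the top left/right singular vectors,
    # so u^T W v approximates the top singular value.
    return float(u @ W @ v)

def spectral_penalty(weight_matrices, lam=0.01):
    """Illustrative regularizer: lam * sum of squared spectral norms.

    Adding this term to the training loss pushes each layer's
    largest singular value (its l2 gain) toward smaller values.
    """
    return lam * sum(spectral_norm(W) ** 2 for W in weight_matrices)
```

In practice, frameworks keep the power-iteration vectors across training steps so a single iteration per step suffices; the loop above simply makes the standalone estimate accurate.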


Related research:

- 06/08/2019 — Defending against Adversarial Attacks through Resilient Feature Regeneration
- 04/21/2022 — A Mask-Based Adversarial Defense Scheme
- 11/19/2018 — Generalizable Adversarial Training via Spectral Normalization
- 03/23/2018 — Improving DNN Robustness to Adversarial Attacks using Jacobian Regularization
- 07/19/2020 — Connecting the Dots: Detecting Adversarial Perturbations Using Context Inconsistency
- 10/20/2020 — Robust Neural Networks inspired by Strong Stability Preserving Runge-Kutta methods
- 05/27/2022 — fakeWeather: Adversarial Attacks for Deep Neural Networks Emulating Weather Conditions on the Camera Lens of Autonomous Systems
