Laplacian Power Networks: Bounding Indicator Function Smoothness for Adversarial Defense

Deep neural networks often lack robustness to adversarial noise. To mitigate this drawback, authors have proposed different approaches, such as adding regularizers or training with adversarial examples. In this paper we propose a new regularizer built upon the Laplacian of similarity graphs obtained from the representations of training data at each intermediate layer of the architecture. This regularizer penalizes large changes, across consecutive layers, in the distances between examples of different classes. We provide a theoretical justification for this regularizer and demonstrate its effectiveness against adversarial noise on classical supervised learning vision datasets.
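
The regularizer can be read as the Dirichlet energy of class indicator signals on a per-layer similarity graph, with a penalty on how much that energy changes between consecutive layers. Below is a minimal PyTorch sketch under those assumptions; the Gaussian kernel, the bandwidth `sigma`, and the dense batch-level graph are illustrative choices for this sketch, not necessarily the paper's exact construction.

```python
import torch
import torch.nn.functional as F

def indicator_smoothness(feats, labels, num_classes, sigma=1.0):
    """Dirichlet energy of class indicator signals on a similarity graph
    built from one layer's representations (illustrative sketch)."""
    # Flatten one layer's representations to (N, d).
    x = feats.flatten(1)
    # Dense Gaussian similarity graph over the batch (kernel and
    # bandwidth are assumptions made for this sketch).
    w = torch.exp(-torch.cdist(x, x).pow(2) / (2 * sigma ** 2))
    w.fill_diagonal_(0)
    # Combinatorial graph Laplacian L = D - W.
    lap = torch.diag(w.sum(dim=1)) - w
    # One-hot class indicator signals, one column per class.
    ind = F.one_hot(labels, num_classes).float()
    # Smoothness summed over classes: sum_c f_c^T L f_c.
    return torch.trace(ind.t() @ lap @ ind)

def laplacian_regularizer(layer_feats, labels, num_classes):
    """Penalize changes in indicator smoothness across consecutive layers."""
    s = [indicator_smoothness(h, labels, num_classes) for h in layer_feats]
    return sum((s[i + 1] - s[i]).abs() for i in range(len(s) - 1))
```

In training, the returned penalty would typically be added to the task loss with a weighting coefficient, computed over the intermediate representations of each mini-batch.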

Related research

Bridged Adversarial Training (08/25/2021)
Adversarial robustness is considered as a required property of deep neur...

Revisiting Hilbert-Schmidt Information Bottleneck for Adversarial Robustness (06/04/2021)
We investigate the HSIC (Hilbert-Schmidt independence criterion) bottlen...

Regularize implicit neural representation by itself (03/27/2023)
This paper proposes a regularizer called Implicit Neural Representation ...

Training Robust Deep Neural Networks via Adversarial Noise Propagation (09/19/2019)
Deep neural networks have been found vulnerable to noises like adversari...

AdvRush: Searching for Adversarially Robust Neural Architectures (08/03/2021)
Deep neural networks continue to awe the world with their remarkable per...

Adversarially Robust Training through Structured Gradient Regularization (05/22/2018)
We propose a novel data-dependent structured gradient regularizer to inc...

Adversarial Robustness of Supervised Sparse Coding (10/22/2020)
Several recent results provide theoretical insights into the phenomena o...