Towards Robust Training of Neural Networks by Regularizing Adversarial Gradients

05/23/2018
by Fuxun Yu et al.

In recent years, neural networks have demonstrated outstanding effectiveness in a wide range of applications. However, recent works have shown that neural networks are susceptible to adversarial examples, indicating possible flaws intrinsic to their structures. To address this problem and improve the robustness of neural networks, we investigate the fundamental mechanisms behind adversarial examples and propose a novel robust training method that regularizes adversarial gradients. The regularization effectively squeezes the adversarial gradients of neural networks and significantly increases the difficulty of adversarial example generation. Without involving any adversarial examples, the robust training method produces naturally robust networks, which are near-immune to various types of adversarial examples. Experiments show that the naturally robust networks achieve optimal accuracy against Fast Gradient Sign Method (FGSM) and C&W attacks on the MNIST, CIFAR-10, and Google Speech Commands datasets. Moreover, the proposed method also provides neural networks with consistent robustness against transferable attacks.
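The abstract does not spell out the exact form of the regularizer, but one common way to "squeeze" adversarial gradients during training is to penalize the norm of the loss gradient with respect to the input (double backpropagation). The sketch below illustrates that idea in PyTorch; the function name robust_training_step, the penalty weight lam, and the squared-L2 penalty are illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn.functional as F

def robust_training_step(model, optimizer, x, y, lam=0.1):
    # Track gradients with respect to the input batch.
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)
    task_loss = F.cross_entropy(logits, y)

    # Gradient of the task loss w.r.t. the input; create_graph=True keeps
    # the graph so the penalty itself can be backpropagated (double backprop).
    (input_grad,) = torch.autograd.grad(task_loss, x, create_graph=True)

    # Penalize the per-example squared L2 norm of the input gradient,
    # shrinking the adversarial gradients an attacker (e.g., FGSM) exploits.
    grad_penalty = input_grad.flatten(1).pow(2).sum(dim=1).mean()

    loss = task_loss + lam * grad_penalty
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

Nothing changes at inference time; the penalty only shapes training, which is consistent with the abstract's claim that robustness is obtained without generating any adversarial examples.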


