Learning More Robust Features with Adversarial Training

04/20/2018
by Shuangtao Li, et al.

In recent years, it has been found that neural networks can be easily fooled by adversarial examples, which poses a potential safety hazard in safety-critical applications. Many researchers have proposed various methods to make neural networks more robust to white-box adversarial attacks, but no fully effective method has been found so far. In this short paper, we focus on the robustness of the features learned by neural networks. We show that the features learned by neural networks are not robust, and find that the robustness of the learned features is closely related to a network's resistance to adversarial examples. We also find that adversarial training against the fast gradient sign method (FGSM) does not make the learned features very robust, even though it makes the trained networks very resistant to FGSM attacks. We then propose a method, which can be seen as an extension of adversarial training, to train neural networks to learn more robust features. We evaluate our method on MNIST and CIFAR-10, and the experimental results show that it greatly improves the robustness of the learned features and the resistance to adversarial attacks.
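For context, the FGSM adversarial training the abstract refers to perturbs each input in the direction of the sign of the loss gradient, x_adv = x + ε · sign(∇_x L(x, y)), and trains on the perturbed inputs. Below is a minimal PyTorch sketch of that baseline procedure, not the authors' proposed extension; the function names, the ε = 0.3 default (a common MNIST setting), and training purely on adversarial examples are illustrative assumptions rather than the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    """Craft FGSM adversarial examples: x_adv = x + epsilon * sign(grad_x L)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid [0, 1] range

def adversarial_training_step(model, optimizer, x, y, epsilon=0.3):
    """One FGSM adversarial-training step on a clean batch (x, y)."""
    model.eval()  # freeze batch-norm statistics while crafting the attack
    x_adv = fgsm_attack(model, x, y, epsilon)
    model.train()
    optimizer.zero_grad()  # discard gradients accumulated during the attack
    loss = F.cross_entropy(model(x_adv), y)  # train on the perturbed inputs
    loss.backward()
    optimizer.step()
    return loss.item()
```

As the abstract notes, networks trained this way can become very resistant to FGSM itself while their learned features remain non-robust, which motivates the paper's extension of this procedure.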
