Perlin Noise Improve Adversarial Robustness

12/26/2021
by Chengjun Tang, et al.

Adversarial examples are specially crafted inputs that perturb the output of a deep neural network, causing a deployed model to produce intentional errors. Most existing methods for generating adversarial examples require gradient information; even universal perturbations that are independent of the target model rely on gradients to some extent. Procedural noise adversarial examples are a newer approach to adversarial example generation that uses computer-graphics noise to produce universal adversarial perturbations quickly without relying on gradient information. Building on the defensive idea of adversarial training, we use Perlin noise to train the neural network and obtain a model that can defend against procedural noise adversarial examples. Combined with fine-tuning from pre-trained models, this yields faster training as well as higher accuracy. Our study shows that procedural noise adversarial examples are defensible, but why procedural noise can generate adversarial examples, and how to defend against other kinds of procedural noise adversarial examples that may emerge in the future, remain open questions.
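The abstract gives no code, but the kind of perturbation it refers to (a Perlin-noise pattern applied uniformly to any input, with no gradient queries) can be sketched in a few lines of NumPy. The sketch below is a minimal illustration, not the authors' exact method: the function names `perlin_grid` and `perlin_perturbation`, the sine colour map, and the period/frequency/epsilon values are assumptions chosen for readability.

```python
import numpy as np

def perlin_grid(size, period, seed=0):
    """2D Perlin (gradient) noise sampled on a size x size pixel grid.
    `period` is the lattice spacing in pixels; larger values give smoother noise."""
    rng = np.random.default_rng(seed)
    cells = size // period + 2                        # lattice points per axis, with a margin
    angles = rng.uniform(0.0, 2.0 * np.pi, (cells, cells))
    grads = np.stack([np.cos(angles), np.sin(angles)], axis=-1)  # unit gradient per lattice point

    lin = np.arange(size) / period                    # pixel coordinates in lattice units
    x, y = np.meshgrid(lin, lin, indexing="ij")
    xi, yi = x.astype(int), y.astype(int)             # cell indices
    xf, yf = x - xi, y - yi                           # position inside each cell

    def dot_grad(ix, iy, dx, dy):
        g = grads[ix, iy]                             # gradient at the chosen lattice corner
        return g[..., 0] * dx + g[..., 1] * dy

    def fade(t):                                      # Perlin's fade curve 6t^5 - 15t^4 + 10t^3
        return t * t * t * (t * (t * 6.0 - 15.0) + 10.0)

    u, v = fade(xf), fade(yf)
    n00 = dot_grad(xi,     yi,     xf,       yf)
    n10 = dot_grad(xi + 1, yi,     xf - 1.0, yf)
    n01 = dot_grad(xi,     yi + 1, xf,       yf - 1.0)
    n11 = dot_grad(xi + 1, yi + 1, xf - 1.0, yf - 1.0)
    nx0 = n00 * (1.0 - u) + n10 * u                   # interpolate along x, then along y
    nx1 = n01 * (1.0 - u) + n11 * u
    return nx0 * (1.0 - v) + nx1 * v

def perlin_perturbation(size=224, period=32, freq=36.0, eps=8.0 / 255.0, seed=0):
    """Image-sized perturbation: Perlin noise passed through a sine colour map and
    bounded by an L_inf budget eps (period/freq/eps are illustrative, not from the paper)."""
    p = np.sin(perlin_grid(size, period, seed) * freq * np.pi)
    return eps * np.sign(p)[..., None]                # broadcast the same pattern over RGB channels

# Illustrative usage on an image x in [0, 1] with shape (size, size, 3):
# x_adv = np.clip(x + perlin_perturbation(seed=7), 0.0, 1.0)
```

For the defence described above, perturbed inputs of this form would be mixed into the fine-tuning batches of a pre-trained classifier, which is where the reported gains in training speed and accuracy come from; this usage note is an interpretation of the abstract, not a description of the authors' exact training setup.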
