On Procedural Adversarial Noise Attack And Defense

08/10/2021
by Jun Yan, et al.

Deep Neural Networks (DNNs) are vulnerable to adversarial examples, which mislead neural networks into prediction errors through small perturbations of the input images. Researchers have devoted considerable effort to universal adversarial perturbations (UAPs), which are gradient-free and require little prior knowledge of the data distribution. Procedural adversarial noise attack is a data-free method for generating universal perturbations. In this paper, we propose two UAP generation methods based on procedural noise functions: Simplex noise and Worley noise. In our framework, the shading that disturbs visual classification is generated with rendering technology. Without changing the semantic representations, the adversarial examples generated via our methods achieve superior attack performance.
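The abstract does not include code, but the idea of a procedural-noise UAP can be illustrated with a minimal sketch. The snippet below generates a Worley (cellular) noise field from random feature points and maps it to an L-infinity-bounded, image-agnostic perturbation. The feature-point count, the sine banding used to produce shading-like stripes, and the 8/255 budget are illustrative assumptions, not the authors' exact procedure.

```python
# A minimal sketch of a Worley-noise-based universal perturbation, assuming a
# numpy image in [0, 1] with shape (H, W, C) and an L-infinity budget `eps`.
# `n_points`, `freq`, and `eps` are illustrative choices, not the paper's values.
import numpy as np

def worley_noise(h, w, n_points=20, rng=None):
    """Distance-to-nearest-feature-point field, normalized to [0, 1]."""
    rng = np.random.default_rng(rng)
    pts = rng.uniform(low=0, high=[h, w], size=(n_points, 2))  # random feature points
    ys, xs = np.mgrid[0:h, 0:w]
    grid = np.stack([ys, xs], axis=-1).astype(float)           # (h, w, 2) pixel coords
    d = np.linalg.norm(grid[:, :, None, :] - pts[None, None, :, :], axis=-1)
    field = d.min(axis=2)                                      # nearest-point distance
    return (field - field.min()) / (field.max() - field.min() + 1e-12)

def worley_uap(h, w, eps=8 / 255, freq=12.0, rng=None):
    """Turn the noise field into banded shading, scaled to the eps ball."""
    field = worley_noise(h, w, rng=rng)
    pattern = np.sin(field * freq * np.pi)                     # banded shading pattern
    return eps * np.sign(pattern)[..., None]                   # broadcast over channels

# Usage: apply the same (universal) perturbation to any input image.
image = np.random.rand(224, 224, 3)                            # stand-in for a real input
adv = np.clip(image + worley_uap(224, 224, rng=0), 0.0, 1.0)
```

Because the perturbation depends only on the noise parameters and not on any particular image or model gradient, the same pattern can be added to every input, which is what makes this family of attacks data-free and gradient-free.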

Related Research

03/02/2021
A Survey On Universal Adversarial Attack
Deep neural networks (DNNs) have demonstrated remarkable performance for...

12/26/2021
Perlin Noise Improve Adversarial Robustness
Adversarial examples are some special input that can perturb the output ...

03/09/2023
Decision-BADGE: Decision-based Adversarial Batch Attack with Directional Gradient Estimation
The vulnerability of deep neural networks to adversarial examples has le...

09/13/2017
A Learning and Masking Approach to Secure Learning
Deep Neural Networks (DNNs) have been shown to be vulnerable against adv...

02/01/2023
Universal Soldier: Using Universal Adversarial Perturbations for Detecting Backdoor Attacks
Deep learning models achieve excellent performance in numerous machine l...

09/19/2019
Propagated Perturbation of Adversarial Attack for well-known CNNs: Empirical Study and its Explanation
Deep Neural Network based classifiers are known to be vulnerable to pert...

06/30/2022
Detecting and Recovering Adversarial Examples from Extracting Non-robust and Highly Predictive Adversarial Perturbations
Deep neural networks (DNNs) have been shown to be vulnerable against adv...
