Ensemble Noise Simulation to Handle Uncertainty about Gradient-based Adversarial Attacks

01/26/2020
by Rehana Mahfuz, et al.

Gradient-based adversarial attacks on neural networks can be crafted in a variety of ways: by varying how the attack algorithm uses the gradient, by varying the network architecture used to craft the attack, or both. Most recent work has focused on defending classifiers in the case where there is no uncertainty about the attacker's behavior, i.e., the attacker is expected to generate a specific attack using a specific network architecture. However, when the attacker is not guaranteed to behave in a certain way, the literature lacks methods for devising a strategic defense. We fill this gap by simulating the attacker's noisy perturbations using a variety of attack algorithms based on the gradients of various classifiers. We perform our analysis using a pre-processing Denoising Autoencoder (DAE) defense that is trained with the simulated noise. We demonstrate significant improvements in post-attack accuracy, using our proposed ensemble-trained defense, compared to a situation where no effort is made to handle uncertainty.
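To make the approach concrete, the sketch below illustrates the ensemble-noise idea in PyTorch: adversarial perturbations are simulated with two gradient-based attacks (one-step FGSM and an iterative variant) crafted against several substitute classifiers, and a DAE is trained to reconstruct the clean input from the pooled adversarial versions. This is a minimal illustration under assumptions of our own; the specific attacks, hyperparameters, and all names (fgsm, bim, DAE, train_dae, classifiers) are hypothetical and not taken from the paper.

    # Hypothetical sketch of ensemble noise simulation for a DAE defense.
    # Attack choices and hyperparameters are illustrative assumptions,
    # not the authors' implementation.
    import torch
    import torch.nn as nn

    def fgsm(model, x, y, eps):
        """One-step FGSM: perturb x along the sign of the loss gradient."""
        x = x.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward()
        # Clamping to [0, 1] assumes inputs are normalized images.
        return (x + eps * x.grad.sign()).clamp(0, 1).detach()

    def bim(model, x, y, eps, steps=5):
        """Iterative FGSM: small steps, projected back into the eps-ball."""
        x0, x_adv = x, x
        for _ in range(steps):
            x_adv = fgsm(model, x_adv, y, eps / steps)
            x_adv = torch.max(torch.min(x_adv, x0 + eps), x0 - eps).clamp(0, 1)
        return x_adv

    class DAE(nn.Module):
        """Small fully connected denoising autoencoder for flattened images."""
        def __init__(self, dim=784, hidden=256):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
            self.dec = nn.Sequential(nn.Linear(hidden, dim), nn.Sigmoid())
        def forward(self, x):
            return self.dec(self.enc(x))

    def train_dae(dae, classifiers, loader, eps=0.1, epochs=5, lr=1e-3):
        """Train the DAE to map ensemble-simulated adversarial inputs
        back to their clean versions (MSE reconstruction loss)."""
        opt = torch.optim.Adam(dae.parameters(), lr=lr)
        attacks = (fgsm, bim)  # vary the attack algorithm ...
        for _ in range(epochs):
            for x, y in loader:
                x = x.view(x.size(0), -1)  # flatten, e.g. 28x28 -> 784
                for clf in classifiers:    # ... and the crafting architecture
                    for attack in attacks:
                        x_adv = attack(clf, x, y, eps)
                        loss = nn.functional.mse_loss(dae(x_adv), x)
                        opt.zero_grad()
                        loss.backward()
                        opt.step()
        return dae

Because the DAE is a pre-processing defense, deployment would simply pass each input through the trained DAE before the protected classifier, with no change to the classifier itself.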

Related research

04/03/2021
Mitigating Gradient-based Adversarial Attacks via Denoising and Compression
Gradient-based adversarial attacks on deep neural networks pose a seriou...

02/18/2020
Deflecting Adversarial Attacks
There has been an ongoing cycle where stronger defenses against adversar...

05/31/2021
Gradient-based Data Subversion Attack Against Binary Classifiers
Machine learning based data-driven technologies have shown impressive pe...

05/06/2021
Dynamic Defense Approach for Adversarial Robustness in Deep Neural Networks via Stochastic Ensemble Smoothed Model
Deep neural networks have been shown to suffer from critical vulnerabili...

04/07/2021
The art of defense: letting networks fool the attacker
Some deep neural networks are invariant to some input transformations, s...

09/19/2019
Propagated Perturbation of Adversarial Attack for well-known CNNs: Empirical Study and its Explanation
Deep Neural Network based classifiers are known to be vulnerable to pert...

08/29/2020
Improving Resistance to Adversarial Deformations by Regularizing Gradients
Improving the resistance of deep neural networks against adversarial att...
