An Empirical Investigation of Randomized Defenses against Adversarial Attacks

09/12/2019
by Yannik Potdevin et al.

In recent years, Deep Neural Networks (DNNs) have had a dramatic impact on a variety of problems long considered very difficult, e.g., image classification and automatic language translation, to name just a few. The accuracy of modern DNNs on classification tasks is remarkable indeed. At the same time, attackers have devised powerful methods to construct specially crafted malicious inputs (often referred to as adversarial examples) that can trick DNNs into misclassifying them. Worse, despite the many defense mechanisms proposed to protect DNNs against adversarial attacks, attackers are often able to circumvent these defenses, rendering them useless. This state of affairs is extremely worrying, especially as machine learning systems are adopted at scale. In this paper, we propose a scientific evaluation methodology aimed at assessing the quality, efficacy, robustness, and efficiency of randomized defenses that protect DNNs against adversarial examples. Using this methodology, we evaluate a variety of defense mechanisms. In addition, we propose a defense mechanism we call Randomly Perturbed Ensemble Neural Networks (RPENNs). We provide a thorough and comprehensive evaluation of the considered defense mechanisms against a white-box attacker model and six different adversarial attack methods, using the ILSVRC2012 validation data set.
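To make the ensemble idea concrete, the sketch below illustrates one plausible reading of a randomly perturbed ensemble in PyTorch: each ensemble member is a copy of the base classifier whose weights receive independent Gaussian noise, and the final label is a majority vote over the members. The noise scale `sigma`, the ensemble size, and the voting scheme are illustrative assumptions for this sketch, not the authors' exact construction.

```python
# Minimal sketch of a randomly perturbed ensemble classifier (assumed
# PyTorch interface; hyperparameters are illustrative, not from the paper).
import copy
import torch

def rpenn_predict(model, x, ensemble_size=10, sigma=0.01):
    """Classify `x` by majority vote over randomly perturbed copies of `model`."""
    votes = []
    with torch.no_grad():
        for _ in range(ensemble_size):
            member = copy.deepcopy(model)
            # Perturb every weight of this copy with independent Gaussian noise.
            for p in member.parameters():
                p.add_(sigma * torch.randn_like(p))
            votes.append(member(x).argmax(dim=1))
    # Majority vote across ensemble members, per input example.
    return torch.stack(votes).mode(dim=0).values

# Usage: labels = rpenn_predict(net, batch)  # `net` is any trained classifier
```

The intuition behind such randomization is that an adversarial perturbation tuned against one fixed set of weights need not transfer to every noisy copy, so aggregating over the ensemble can wash out the attack's effect.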
