An Empirical Review of Adversarial Defenses

12/10/2020
by Ayush Goel, et al.

From the face recognition systems installed in phones to self-driving cars, AI is being integrated into our everyday lives at an incredible pace. Any major failure in these systems' predictions could be devastating, leaking sensitive information or even costing lives (as in the case of self-driving cars). However, the deep neural networks that underpin such systems are highly susceptible to a specific class of attack, called adversarial attacks. With minimal computation, an attacker can generate adversarial examples (images or data points that belong to one class but are crafted to be consistently misclassified as another) and undermine the reliability of such algorithms. In this paper, we compile and test numerous approaches to defend against such adversarial attacks. Of the techniques explored, we found two to be effective, namely Dropout and Denoising Autoencoders, and show their success in preventing such attacks from fooling the model. We demonstrate that these techniques are also resistant to higher noise levels as well as to different kinds of adversarial attacks (although not tested against all). We also develop a framework for choosing a suitable defense technique against attacks, based on the nature of the application and the resource constraints of the deep neural network.
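The abstract does not include code, but the kind of attack it describes can be illustrated with a minimal sketch of FGSM-style adversarial example generation (the fast gradient sign method, one common attack that such defenses target). The toy logistic-regression model, its weights, and the epsilon value below are illustrative assumptions, not taken from the paper:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    # Probability that input x belongs to class 1.
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm_perturb(w, b, x, y, eps):
    # For logistic loss, the gradient of the loss w.r.t. the input x
    # is (p - y) * w; FGSM adds eps times the sign of that gradient.
    p = predict(w, b, x)
    grad = [(p - y) * wi for wi in w]
    return [xi + eps * (1 if g > 0 else -1 if g < 0 else 0)
            for xi, g in zip(x, grad)]

# Toy model that classifies x correctly before the attack.
w, b = [2.0, -1.0], 0.0
x, y = [0.6, 0.1], 1                     # true class is 1
x_adv = fgsm_perturb(w, b, x, y, eps=0.4)

print(predict(w, b, x))      # confidently above 0.5 (correct)
print(predict(w, b, x_adv))  # pushed below 0.5 (misclassified)
```

A small, bounded perturbation is enough to flip the prediction; defenses such as inference-time dropout or a denoising autoencoder placed before the classifier aim to blunt exactly this effect.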


research
10/05/2021

Adversarial defenses via a mixture of generators

In spite of the enormous success of neural networks, adversarial example...
research
09/13/2018

Defensive Dropout for Hardening Deep Neural Networks under Adversarial Attacks

Deep neural networks (DNNs) are known vulnerable to adversarial attacks....
research
05/28/2020

Adversarial Attacks and Defense on Textual Data: A Review

Deep learning models have been used widely for various purposes in recent...
research
02/19/2020

AdvMS: A Multi-source Multi-cost Defense Against Adversarial Attacks

Designing effective defense against adversarial attacks is a crucial top...
research
05/28/2020

Adversarial Attacks and Defense on Texts: A Survey

Deep learning models have been used widely for various purposes in recent...
research
07/29/2020

Adversarial Robustness for Machine Learning Cyber Defenses Using Log Data

There has been considerable and growing interest in applying machine lea...
research
04/15/2019

Are Self-Driving Cars Secure? Evasion Attacks against Deep Neural Networks for Steering Angle Prediction

Deep Neural Networks (DNNs) have tremendous potential in advancing the v...
