A Data Augmentation-based Defense Method Against Adversarial Attacks in Neural Networks

07/30/2020
by Yi Zeng, et al.

Deep Neural Networks (DNNs) in Computer Vision (CV) are well known to be vulnerable to Adversarial Examples (AEs), namely imperceptible perturbations added maliciously to inputs to cause wrong classification results. This vulnerability poses a potential risk for real-life systems equipped with DNNs as core components. Numerous efforts have been put into research on how to protect DNN models from being compromised by AEs. However, no previous work can efficiently reduce the effects of novel adversarial attacks while remaining compatible with real-life constraints. In this paper, we focus on developing a lightweight defense method that can efficiently invalidate full white-box adversarial attacks while satisfying real-life constraints. Starting from basic affine transformations, we integrate three transformations with randomized coefficients that are fine-tuned with respect to the amount of change applied to the defended sample. Compared to four state-of-the-art defense methods published at top-tier AI conferences in the past two years, our method demonstrates outstanding robustness and efficiency. It is worth highlighting that our method can withstand an advanced adaptive attack, namely BPDA with 50 rounds, and still helps the target model maintain an accuracy of around 80%, while reducing the attack success rate to almost zero.
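The core of the defense described above is input preprocessing: each sample is passed through randomized affine transformations before it reaches the classifier, so the gradients an attacker computes no longer match what the model actually sees. Below is a minimal PyTorch sketch of this idea; the choice of rotation, translation, and scaling as the three transformations, and the coefficient ranges shown, are illustrative assumptions, not the paper's tuned values.

```python
# A minimal sketch of a randomized affine-transformation defense.
# ASSUMPTIONS: the three transformations (rotation, translation, scaling)
# and the coefficient ranges below are illustrative; the paper fine-tunes
# its coefficients with respect to the amount of change per sample.
import torch
from torchvision import transforms

def make_defense(max_rotation=10.0, max_translate=0.05,
                 scale_range=(0.95, 1.05)):
    """Build a preprocessing transform whose coefficients are re-sampled
    on every call, which is what disrupts gradient-based attacks."""
    return transforms.RandomAffine(
        degrees=max_rotation,            # rotation sampled in [-10, +10] degrees
        translate=(max_translate,) * 2,  # shift up to 5% of width/height
        scale=scale_range,               # rescale by a factor in [0.95, 1.05]
    )

def defended_forward(model: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Apply a fresh random transformation to the input batch, then classify."""
    defense = make_defense()
    return model(defense(x))
```

Because the coefficients are re-drawn at every inference, a gradient-based attacker must average over many random draws to obtain a usable gradient estimate, which is why adaptive attacks such as BPDA are the relevant benchmark for this kind of defense.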

