Mitigating Advanced Adversarial Attacks with More Advanced Gradient Obfuscation Techniques

05/27/2020
by   Han Qiu, et al.

Deep Neural Networks (DNNs) are well known to be vulnerable to Adversarial Examples (AEs). A great deal of effort has been spent fueling the arms race between attackers and defenders. Recently, advanced gradient-based attack techniques were proposed (e.g., BPDA and EOT) that have defeated a considerable number of existing defense methods. To date, there is still no satisfactory solution that can effectively and efficiently defend against these attacks. In this paper, we make a steady step towards mitigating these advanced gradient-based attacks, with two major contributions. First, we perform an in-depth analysis of the root causes of these attacks and propose four properties that can break their fundamental assumptions. Second, we identify a set of operations that satisfy these properties. By integrating these operations, we design two preprocessing functions that can invalidate these powerful attacks. Extensive evaluations indicate that our solutions can effectively mitigate all existing standard and advanced attack techniques, and beat 11 state-of-the-art defense solutions published in top-tier conferences over the past two years. The defender can employ our solutions to constrain the attack success rate below 7%, even when the attacker has spent dozens of GPU hours.
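The abstract gives no implementation details, but the BPDA attack it refers to is well documented: when a defense inserts a non-differentiable preprocessing step g before the model, BPDA applies the real g in the forward pass and substitutes a differentiable approximation (often the identity) for its gradient in the backward pass, so gradient-based attacks keep working. The following is a minimal, illustrative PyTorch sketch of that straight-through trick; the quantization preprocessing and the BPDAWrapper/preprocess_with_bpda names are assumptions made for illustration, not code from the paper.

    import torch

    class BPDAWrapper(torch.autograd.Function):
        """Straight-through estimator around a non-differentiable preprocessing step."""

        @staticmethod
        def forward(ctx, x):
            # Real (non-differentiable) preprocessing: 8-level quantization,
            # used here only as a stand-in defense transformation.
            return torch.round(x * 7.0) / 7.0

        @staticmethod
        def backward(ctx, grad_output):
            # BPDA assumption: treat the preprocessing as the identity,
            # so the incoming gradient is passed through unchanged.
            return grad_output

    def preprocess_with_bpda(x):
        return BPDAWrapper.apply(x)

    if __name__ == "__main__":
        x = torch.rand(1, 3, 32, 32, requires_grad=True)
        loss = preprocess_with_bpda(x).sum()
        loss.backward()
        # The gradient reaches the input even though round() is piecewise constant.
        print(x.grad.abs().sum())

An attacker would place such a wrapper around the defense's preprocessing inside a PGD-style loop; this is why the paper argues that a defense must break BPDA's differentiable-approximation assumption rather than merely hide gradients.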

Related research

12/03/2020 · FenceBox: A Platform for Defeating Adversarial Examples with Data Augmentation Techniques
It is extensively studied that Deep Neural Networks (DNNs) are vulnerabl...

09/27/2021 · MUTEN: Boosting Gradient-Based Adversarial Attacks via Mutant-Based Ensembles
Deep Neural Networks (DNNs) are vulnerable to adversarial examples, whic...

07/30/2020 · A Data Augmentation-based Defense Method Against Adversarial Attacks in Neural Networks
Deep Neural Networks (DNNs) in Computer Vision (CV) are well-known to be...

11/21/2022 · Boosting the Transferability of Adversarial Attacks with Global Momentum Initialization
Deep neural networks are vulnerable to adversarial examples, which attac...

04/20/2021 · MixDefense: A Defense-in-Depth Framework for Adversarial Example Detection Based on Statistical and Semantic Analysis
Machine learning with deep neural networks (DNNs) has become one of the ...

07/01/2019 · Accurate, reliable and fast robustness evaluation
Throughout the past five years, the susceptibility of neural networks to...

04/03/2021 · Gradient-based Adversarial Deep Modulation Classification with Data-driven Subsampling
Automatic modulation classification can be a core component for intellig...