FenceBox: A Platform for Defeating Adversarial Examples with Data Augmentation Techniques

12/03/2020
by   Han Qiu, et al.

It has been extensively studied that Deep Neural Networks (DNNs) are vulnerable to Adversarial Examples (AEs). As increasingly advanced adversarial attack methods have been developed, a variety of corresponding defense solutions have been designed to enhance the robustness of DNN models. It has become popular to leverage data augmentation techniques to preprocess input samples before inference in order to remove adversarial perturbations. By obfuscating the gradients of DNN models, these approaches can defeat a considerable number of conventional attacks. Unfortunately, advanced gradient-based attack techniques (e.g., BPDA and EOT) were introduced to invalidate these preprocessing effects. In this paper, we present FenceBox, a comprehensive framework to defeat various kinds of adversarial attacks. FenceBox is equipped with 15 data augmentation methods from three different categories. Our comprehensive evaluation shows that these methods can effectively mitigate various adversarial attacks. FenceBox also provides APIs for users to easily deploy the defense over their models in different modes: they can select either an arbitrary preprocessing method, or a combination of functions for a stronger robustness guarantee, even under advanced adversarial attacks. We open-source FenceBox and expect it to be used as a standard toolkit to facilitate the research of adversarial attacks and defenses.
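To illustrate the general idea of preprocessing-based defenses described above, here is a minimal sketch of one representative augmentation, random padding, applied to an input before inference. This is an assumption-laden illustration, not FenceBox's actual API: the function name `random_pad` and the NumPy image representation are hypothetical, and real defenses in this line of work combine several such randomized transformations.

```python
import numpy as np

def random_pad(image: np.ndarray, out_size: int = 36, rng=None) -> np.ndarray:
    """Pad an HxWxC image to out_size x out_size at a random offset.

    Randomizing the spatial offset at every inference call perturbs the
    input layout, which disturbs the precise pixel-gradient alignment
    that many adversarial perturbations rely on.
    """
    rng = rng or np.random.default_rng()
    h, w, c = image.shape
    top = int(rng.integers(0, out_size - h + 1))
    left = int(rng.integers(0, out_size - w + 1))
    padded = np.zeros((out_size, out_size, c), dtype=image.dtype)
    padded[top:top + h, left:left + w, :] = image
    return padded

# The model only ever sees the randomly transformed input.
x = np.ones((32, 32, 3), dtype=np.float32)
x_defended = random_pad(x, out_size=36)
print(x_defended.shape)  # (36, 36, 3)
```

Because the transformation is non-differentiable from the attacker's viewpoint (the random offsets are resampled each call), gradient-based attacks cannot straightforwardly backpropagate through it; advanced techniques such as BPDA and EOT, mentioned above, were designed precisely to approximate or average over such randomness.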

