Adversarial Defense via Image Denoising with Chaotic Encryption

03/19/2022
by Shi Hu, et al.

In the literature on adversarial examples, white-box and black-box attacks have received the most attention: the adversary is assumed to have either full (white) or no (black) access to the defender's model. In this work, we focus on the equally practical gray-box setting, in which the attacker has only partial information. We propose a novel defense that assumes everything but a private key will be made available to the attacker. Our framework couples an image denoising procedure with encryption via a discretized Baker map. Extensive testing against adversarial images (e.g., FGSM, PGD) crafted using various gradients shows that our defense achieves significantly better natural and adversarial accuracy on CIFAR-10 and CIFAR-100 than the state-of-the-art gray-box defenses.
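
The abstract names the building blocks but not the construction, so the following is only a minimal sketch of the key-dependent pixel scrambling that a discretized Baker map provides (Fridrich-style formulation). The function name baker_permute, the key format (a tuple of strip widths summing to the image size), and the usage below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def baker_permute(img, key):
    """Scramble a square image with the discretized Baker map.

    img : (N, N) array of pixel values.
    key : tuple of strip widths (n_1, ..., n_k) acting as the private key;
          the widths must sum to N and each must divide N.
    """
    N = img.shape[0]
    assert sum(key) == N and all(N % n == 0 for n in key)
    out = np.empty_like(img)
    Ni = 0                      # left edge of the current vertical strip
    for n in key:
        q = N // n              # horizontal stretch factor for this strip
        for r in range(Ni, Ni + n):
            for s in range(N):
                # Each n x N strip is stretched horizontally and
                # contracted vertically, giving a key-dependent permutation.
                r2 = q * (r - Ni) + s % q
                s2 = (s - s % q) // q + Ni
                out[r2, s2] = img[r, s]
        Ni += n
    return out

# Hypothetical usage on a CIFAR-sized (32 x 32) single-channel image.
x = np.arange(32 * 32).reshape(32, 32)
y = baker_permute(x, key=(8, 8, 16))
assert np.array_equal(np.sort(y.ravel()), np.sort(x.ravel()))  # pure permutation
```

Decryption would apply the inverse permutation; how the scrambling is interleaved with the denoising procedure, how many rounds are used, and how color channels are handled are not specified in the abstract.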

Related research

07/08/2021 · Output Randomization: A Novel Defense for both White-box and Black-box Adversarial Models
Adversarial examples pose a threat to deep neural network models in a va...

03/11/2023 · Investigating Stateful Defenses Against Black-Box Adversarial Examples
Defending machine-learning (ML) models against white-box adversarial att...

01/06/2021 · Adversarial Robustness by Design through Analog Computing and Synthetic Gradients
We propose a new defense mechanism against adversarial attacks inspired ...

10/03/2019 · BUZz: BUffer Zones for defending adversarial examples in image classification
We propose a novel defense against all existing gradient based adversari...

05/16/2020 · Encryption Inspired Adversarial Defense for Visual Classification
Conventional adversarial defenses reduce classification accuracy whether...

02/02/2023 · Beyond Pretrained Features: Noisy Image Modeling Provides Adversarial Defense
Masked Image Modeling (MIM) has been a prevailing framework for self-sup...

09/13/2019 · White-Box Adversarial Defense via Self-Supervised Data Estimation
In this paper, we study the problem of how to defend classifiers against...
