Defending against Adversarial Images using Basis Functions Transformations

03/28/2018
by Uri Shaham, et al.

We study the effectiveness of several defenses against adversarial attacks on deep networks that manipulate images via basis function representations. Specifically, we experiment with low-pass filtering, PCA, JPEG compression, low-resolution wavelet approximation, and soft-thresholding. We evaluate these defenses against three popular types of attacks in black-, gray- and white-box settings. Our results show that JPEG compression tends to outperform the other tested defenses in most of the settings considered, followed by soft-thresholding, which performs well in specific cases and yields a milder decrease in accuracy on benign examples. We also mathematically derive a novel white-box attack in which the adversarial perturbation is composed only of terms corresponding to a pre-determined subset of the basis functions, of which a "low frequency attack" is a special case.
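The abstract names the transformations but gives no code. Below is a minimal sketch of two of them, low-pass filtering in the Fourier basis and soft-thresholding of basis coefficients, assuming grayscale images stored as 2-D NumPy arrays. The function names and the `keep_fraction` and `t` parameters are illustrative choices, not taken from the paper.

```python
import numpy as np

def low_pass_defense(image, keep_fraction=0.25):
    """Suppress high-frequency content by zeroing FFT coefficients
    outside a central low-frequency window."""
    fshift = np.fft.fftshift(np.fft.fft2(image))  # low frequencies at center
    h, w = image.shape
    kh, kw = int(h * keep_fraction / 2), int(w * keep_fraction / 2)
    mask = np.zeros((h, w), dtype=bool)
    mask[h // 2 - kh : h // 2 + kh, w // 2 - kw : w // 2 + kw] = True
    filtered = np.where(mask, fshift, 0)
    # Discard the (numerically tiny) imaginary part of the reconstruction.
    return np.real(np.fft.ifft2(np.fft.ifftshift(filtered)))

def soft_threshold(coeffs, t):
    """Soft-thresholding (shrinkage) of basis coefficients:
    sign(c) * max(|c| - t, 0). In the paper's setting this would be
    applied to wavelet coefficients before reconstructing the image."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)
```

A constant image passes through `low_pass_defense` essentially unchanged (its energy sits entirely at the DC frequency), while small coefficients, where adversarial perturbations often concentrate, are zeroed by `soft_threshold`.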

Related research:

- 07/08/2021 - Output Randomization: A Novel Defense for both White-box and Black-box Adversarial Models
- 09/16/2013 - Estimation of intrinsic volumes from digital grey-scale images
- 04/01/2019 - Adversarial Defense by Restricting the Hidden Space of Deep Neural Networks
- 05/04/2022 - CE-based white-box adversarial attacks will not work using super-fitting
- 10/31/2017 - Countering Adversarial Images using Input Transformations
- 07/26/2021 - Adversarial Attacks with Time-Scale Representations
- 02/15/2021 - CAP-GAN: Towards Adversarial Robustness with Cycle-consistent Attentional Purification
