Countering Adversarial Images using Input Transformations

10/31/2017
by Chuan Guo, et al.

This paper investigates strategies that defend against adversarial-example attacks on image-classification systems by transforming the inputs before feeding them to the system. Specifically, we study applying image transformations such as bit-depth reduction, JPEG compression, total variance minimization, and image quilting before feeding the image to a convolutional network classifier. Our experiments on ImageNet show that total variance minimization and image quilting are very effective defenses in practice, in particular when the network is trained on transformed images. The strength of those defenses lies in their non-differentiable nature and their inherent randomness, which makes it difficult for an adversary to circumvent them. Our best defense eliminates 60% of black-box attacks by a variety of major attack methods.
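To illustrate the kind of input transformation the paper studies, the sketch below (not the authors' released code) applies bit-depth reduction followed by JPEG compression to an image before it would be handed to the classifier. The function names, the 3-bit depth, and the quality-75 setting are illustrative assumptions; total variance minimization and image quilting, the strongest defenses reported, are more involved and are omitted here.

```python
# Minimal sketch of two input-transformation defenses: bit-depth
# reduction and JPEG compression. All names and settings here are
# illustrative assumptions, not the paper's reference implementation.
import io

import numpy as np
from PIL import Image


def reduce_bit_depth(image: np.ndarray, bits: int = 3) -> np.ndarray:
    """Quantize an 8-bit RGB image to `bits` bits per channel."""
    levels = 2 ** bits
    # Map [0, 255] -> {0, ..., levels-1}, then rescale back to [0, 255].
    quantized = np.floor(image.astype(np.float32) / 256.0 * levels)
    return (quantized * (255.0 / (levels - 1))).astype(np.uint8)


def jpeg_compress(image: np.ndarray, quality: int = 75) -> np.ndarray:
    """Round-trip an RGB image through lossy JPEG encoding."""
    buffer = io.BytesIO()
    Image.fromarray(image).save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    return np.array(Image.open(buffer).convert("RGB"))


if __name__ == "__main__":
    # A random array stands in for a (possibly adversarial) input image;
    # in the defense, the transformed image is what the classifier sees.
    x = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)
    x_defended = jpeg_compress(reduce_bit_depth(x, bits=3), quality=75)
    print(x_defended.shape, x_defended.dtype)
```

The transformations are applied only at the input; the classifier itself is unchanged, which is what makes this family of defenses easy to combine with training on transformed images as the paper recommends.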

Related research

12/25/2018 - Deflecting 3D Adversarial Point Clouds Through Outlier-Guided Removal
Neural networks are vulnerable to adversarial examples, which poses a th...

02/23/2020 - VisionGuard: Runtime Detection of Adversarial Inputs to Perception Systems
Deep neural network (DNN) models have proven to be vulnerable to adversa...

11/16/2020 - Ensemble of Models Trained by Key-based Transformed Images for Adversarially Robust Defense Against Black-box Attacks
We propose a voting ensemble of models trained by using block-wise trans...

01/02/2020 - Ensembles of Many Diverse Weak Defenses can be Strong: Defending Deep Neural Networks Against Adversarial Attacks
Despite achieving state-of-the-art performance across many domains, mach...

06/07/2022 - Towards Understanding and Mitigating Audio Adversarial Examples for Speaker Recognition
Speaker recognition systems (SRSs) have recently been shown to be vulner...

03/05/2019 - Defense Against Adversarial Images using Web-Scale Nearest-Neighbor Search
A plethora of recent work has shown that convolutional networks are not ...

03/28/2018 - Defending against Adversarial Images using Basis Functions Transformations
We study the effectiveness of various approaches that defend against adv...