Deflecting Adversarial Attacks with Pixel Deflection

01/26/2018
by Aaditya Prakash et al.

CNNs are poised to become integral parts of many critical systems. Despite their robustness to natural variations, image pixel values can be manipulated, via small, carefully crafted, imperceptible perturbations, to cause a model to misclassify images. We present an algorithm to process an image so that classification accuracy is significantly preserved in the presence of such adversarial manipulations. Image classifiers tend to be robust to natural noise, and adversarial attacks tend to be agnostic to object location. These observations motivate our strategy, which leverages model robustness to defend against adversarial perturbations by forcing the image to match natural image statistics. Our algorithm locally corrupts the image by redistributing pixel values via a process we term pixel deflection. A subsequent wavelet-based denoising operation softens this corruption, as well as some of the adversarial changes. We demonstrate experimentally that the combination of these techniques enables effective recovery of the true class against a variety of robust attacks. Our results compare favorably with current state-of-the-art defenses, without requiring retraining or modification of the CNN.
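
The deflect-then-denoise pipeline described in the abstract is simple enough to sketch in code. The following is a minimal illustration, not the authors' implementation: the function names (pixel_deflection, defend), the deflection count, and the window size are assumptions chosen for readability, and scikit-image's denoise_wavelet (BayesShrink soft thresholding) stands in for the wavelet denoising step.

import numpy as np
from skimage.restoration import denoise_wavelet

def pixel_deflection(img, n_deflections=200, window=10, seed=None):
    # Locally corrupt the image: each deflection overwrites a randomly
    # chosen pixel with the value of a random pixel from a small
    # surrounding window, redistributing local pixel values.
    # n_deflections and window are illustrative, not the paper's values.
    rng = np.random.default_rng(seed)
    out = img.copy()
    h, w = img.shape[:2]
    for _ in range(n_deflections):
        y, x = rng.integers(h), rng.integers(w)             # target pixel
        dy, dx = rng.integers(-window, window + 1, size=2)  # random offset
        sy = int(np.clip(y + dy, 0, h - 1))                 # source pixel,
        sx = int(np.clip(x + dx, 0, w - 1))                 # clipped to bounds
        out[y, x] = img[sy, sx]
    return out

def defend(img):
    # Deflect pixels, then soften both the corruption and part of the
    # adversarial perturbation with wavelet-based denoising.
    # channel_axis=-1 assumes an H x W x 3 float image in [0, 1]
    # (recent scikit-image API).
    return denoise_wavelet(pixel_deflection(img), mode='soft',
                           channel_axis=-1)

A model under attack would then classify defend(x) rather than x directly, which is why the method requires no retraining or modification of the CNN itself.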

Related research

08/22/2021
Robustness-via-Synthesis: Robust Training with Generative Adversarial Perturbations
Upon the discovery of adversarial attacks, robust models have become obl...

03/25/2019
Defending against Whitebox Adversarial Attacks via Randomized Discretization
Adversarial perturbations dramatically decrease the accuracy of state-of...

03/26/2022
Reverse Engineering of Imperceptible Adversarial Image Perturbations
It has been well recognized that neural network based image classifiers ...

06/18/2021
Analyzing Adversarial Robustness of Deep Neural Networks in Pixel Space: a Semantic Perspective
The vulnerability of deep neural networks to adversarial examples, which...

01/20/2022
Steerable Pyramid Transform Enables Robust Left Ventricle Quantification
Although multifarious variants of convolutional neural networks (CNNs) h...

10/22/2020
Adversarial Attacks on Binary Image Recognition Systems
We initiate the study of adversarial attacks on models for binary (i.e. ...

08/08/2018
Beyond Pixel Norm-Balls: Parametric Adversaries using an Analytically Differentiable Renderer
Many machine learning image classifiers are vulnerable to adversarial at...
