A study of the effect of JPG compression on adversarial images

Neural network image classifiers are known to be vulnerable to adversarial images, i.e., natural images which have been modified by an adversarial perturbation specifically designed to be imperceptible to humans yet fool the classifier. Not only can adversarial images be generated easily, but these images will often be adversarial for networks trained on disjoint subsets of data or with different architectures. Adversarial images represent a potential security risk as well as a serious machine learning challenge: it is clear that vulnerable neural networks perceive images very differently from humans. Noting that virtually every image classification data set is composed of JPG images, we evaluate the effect of JPG compression on the classification of adversarial images. For Fast-Gradient-Sign perturbations of small magnitude, we find that JPG compression often reverses the drop in classification accuracy to a large extent, but not always. As the magnitude of the perturbations increases, JPG recompression alone is insufficient to reverse the effect.
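
The abstract names the two ingredients of the experiment, Fast-Gradient-Sign perturbations and a JPG recompression step, without spelling out the pipeline. The sketch below shows one way such an evaluation could be set up. It is a minimal sketch under assumed choices: the ResNet-50 classifier, the JPG quality of 75, the epsilon of 2/255, and the file name example.jpg are illustrative placeholders, not details taken from the paper.

```python
# Sketch: FGSM perturbation followed by a JPG encode/decode round trip,
# comparing the classifier's prediction at each stage.
import io

import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import models, transforms

# Assumed classifier; the paper's exact networks may differ.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),            # keeps the image in [0, 1] pixel space
])
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

def predict(image):
    """Classify a single [0, 1] image tensor; returns logits of shape [1, 1000]."""
    return model(normalize(image).unsqueeze(0))

def fgsm_perturb(image, label, epsilon):
    """Fast-Gradient-Sign: image + epsilon * sign(grad of loss w.r.t. image)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(predict(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

def jpg_roundtrip(image, quality=75):
    """Encode the image as JPG and decode it again (simulating recompression)."""
    buffer = io.BytesIO()
    transforms.ToPILImage()(image).save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    return transforms.ToTensor()(Image.open(buffer))

# Example usage on one image (file name and epsilon are placeholders).
x = preprocess(Image.open("example.jpg").convert("RGB"))
with torch.no_grad():
    y = predict(x).argmax(dim=1)      # use the model's own label as the target

x_adv = fgsm_perturb(x, y, epsilon=2 / 255)
x_jpg = jpg_roundtrip(x_adv, quality=75)

with torch.no_grad():
    print("clean prediction:        ", predict(x).argmax(dim=1).item())
    print("adversarial prediction:  ", predict(x_adv).argmax(dim=1).item())
    print("after JPG recompression: ", predict(x_jpg).argmax(dim=1).item())
```

Note that the perturbation is applied in pixel space before the JPG encode, so the recompression step operates on exactly the adversarial image a deployed classifier would receive.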
