The Effects of JPEG and JPEG2000 Compression on Attacks using Adversarial Examples

03/28/2018 · by Ayse Elvan Aydemir, et al.

Adversarial examples are known to degrade the performance of classifiers that otherwise perform well on undisturbed images. These examples are generated by adding non-random noise to test samples so that the classifier misclassifies the given data. Adversarial attacks exploit such intentionally crafted examples and pose a security risk to machine-learning-based systems. To be robust against such attacks, it is desirable to have a pre-processing mechanism that removes the perturbations causing misclassification while preserving the content of the image. JPEG and JPEG2000 image compression techniques suppress high-frequency content in a way that takes the human visual system into account. In this paper, to reduce adversarial noise, JPEG and JPEG2000 compression are applied to adversarial examples and the resulting classification performance is measured.
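The defense described above relies on lossy compression discarding the small high-frequency coefficients that adversarial perturbations tend to occupy. The paper uses actual JPEG and JPEG2000 codecs; as an illustrative stand-in only, the sketch below (a simplified, assumed pipeline, not the authors' implementation) applies the core JPEG mechanism — blockwise 2-D DCT followed by uniform quantization — to a grayscale image, so that small high-frequency coefficients round to zero while the dominant low-frequency content survives. The function names and the quantization step `q` are hypothetical choices for the example.

```python
import numpy as np

def dct_matrix(n: int = 8) -> np.ndarray:
    """Orthonormal DCT-II basis matrix, as used in JPEG's 8x8 transform."""
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] /= np.sqrt(2)
    return M * np.sqrt(2 / n)

def jpeg_like_denoise(img: np.ndarray, q: float = 20.0, block: int = 8) -> np.ndarray:
    """Blockwise DCT + uniform quantization: a simplified JPEG-style filter.

    `img` is a 2-D float array whose sides are multiples of `block`.
    Coefficients smaller than ~q/2 round to zero, which is where an
    adversarial perturbation's high-frequency energy tends to sit.
    """
    D = dct_matrix(block)
    h, w = img.shape
    out = np.empty((h, w), dtype=float)
    for i in range(0, h, block):
        for j in range(0, w, block):
            b = img[i:i + block, j:j + block].astype(float)
            coef = D @ b @ D.T                  # forward 2-D DCT
            coef = np.round(coef / q) * q       # uniform quantization
            out[i:i + block, j:j + block] = D.T @ coef @ D  # inverse 2-D DCT
    return out

# Usage: a flat "clean" image plus small noise standing in for a perturbation.
rng = np.random.default_rng(0)
clean = np.full((16, 16), 128.0)
noisy = clean + rng.normal(0.0, 3.0, clean.shape)
recovered = jpeg_like_denoise(noisy)
```

Real JPEG additionally uses chroma subsampling, perceptually weighted quantization tables, and entropy coding, and JPEG2000 replaces the DCT with a wavelet transform; none of that changes the basic intuition shown here.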


