Error Diffusion Halftoning Against Adversarial Examples

01/23/2021
by Shao-Yuan Lo et al.

Adversarial examples contain carefully crafted perturbations that can fool deep neural networks (DNNs) into making wrong predictions. Enhancing the adversarial robustness of DNNs has gained considerable interest in recent years. Although image transformation-based defenses were widely explored early on, most of them have since been defeated by adaptive attacks. In this paper, we propose a new image transformation defense based on error diffusion halftoning and combine it with adversarial training to defend against adversarial examples. Error diffusion halftoning projects an image into a 1-bit space and diffuses the quantization error to neighboring pixels. This process can remove adversarial perturbations from a given image while maintaining image quality acceptable for recognition. Experimental results demonstrate that the proposed method improves adversarial robustness even under advanced adaptive attacks, whereas most other image transformation-based defenses do not. We show that a proper image transformation can still be an effective defense approach.
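
For intuition, below is a minimal sketch of one classic error diffusion scheme, Floyd-Steinberg halftoning, applied to a single grayscale channel. The abstract does not specify which diffusion kernel the paper uses, so the 7/16, 3/16, 5/16, 1/16 weights, the 0.5 threshold, and the function name here are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def floyd_steinberg_halftone(img: np.ndarray) -> np.ndarray:
    """Binarize a grayscale image in [0, 1] via error diffusion:
    quantize each pixel to the 1-bit space {0, 1}, then push the
    quantization error onto not-yet-processed neighbors.

    Note: the Floyd-Steinberg kernel below is an assumed choice;
    the paper's halftoning variant may differ.
    """
    out = img.astype(np.float64).copy()
    h, w = out.shape
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            new = 1.0 if old >= 0.5 else 0.0  # project to 1-bit space
            out[y, x] = new
            err = old - new                   # quantization error
            # Diffuse the error with Floyd-Steinberg weights.
            if x + 1 < w:
                out[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x - 1 >= 0:
                    out[y + 1, x - 1] += err * 3 / 16
                out[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    out[y + 1, x + 1] += err * 1 / 16
    return out
```

In a defense pipeline such as the one the abstract describes, the halftoned image would be fed to an adversarially trained classifier; applying the transform independently to each RGB channel is one straightforward way to handle color inputs, though this extension is an assumption here.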

