Online Alternate Generator against Adversarial Attacks

09/17/2020
by Haofeng Li, et al.

The field of computer vision has witnessed phenomenal progress in recent years, partially due to the development of deep convolutional neural networks. However, deep learning models are notoriously sensitive to adversarial examples, which are synthesized by adding quasi-perceptible noise to real images. Some existing defense methods require re-training the attacked target networks and augmenting the training set with known adversarial attacks, which is inefficient and may not generalize to unknown attack types. To overcome these issues, we propose a portable defense method, the online alternate generator, which does not need to access or modify the parameters of the target networks. Instead of removing or destroying adversarial noise, the proposed method synthesizes another image from scratch online for each input image. To prevent attackers from exploiting pretrained parameters, we alternately update the generator and the synthesized image at the inference stage. Experimental results demonstrate that the proposed defensive scheme outperforms a series of state-of-the-art defense models against gray-box adversarial attacks.
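To make the inference-time procedure concrete, the sketch below shows one way the alternate-update loop described in the abstract could be arranged: a freshly initialized generator and a random latent code are alternately optimized to reconstruct the (possibly attacked) input, and the re-synthesized image is then passed to the frozen target network. This is a minimal PyTorch sketch; the generator architecture, reconstruction loss, step counts, and learning rates are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of the "online alternate generator" idea (illustrative, not the paper's code).
import torch
import torch.nn as nn

class SmallGenerator(nn.Module):
    """Tiny convolutional decoder mapping a latent tensor to an image (placeholder architecture)."""
    def __init__(self, latent_ch=64, out_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(latent_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, out_ch, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

def defend(x, target_model, outer_steps=20, g_steps=1, z_steps=1, lr_g=1e-3, lr_z=1e-2):
    """Re-synthesize x from scratch by alternating updates of the generator parameters
    and the latent code, then classify the synthesized image with the frozen target model."""
    gen = SmallGenerator().to(x.device)                    # fresh weights for every input
    z = torch.randn(x.size(0), 64, x.size(2), x.size(3),   # random latent, same spatial size
                    device=x.device, requires_grad=True)
    opt_g = torch.optim.Adam(gen.parameters(), lr=lr_g)
    opt_z = torch.optim.Adam([z], lr=lr_z)
    loss_fn = nn.MSELoss()

    for _ in range(outer_steps):
        # (1) update the generator weights with the latent code held fixed
        for _ in range(g_steps):
            opt_g.zero_grad()
            loss_fn(gen(z), x).backward()
            opt_g.step()
        # (2) update the latent code with the generator weights held fixed
        for _ in range(z_steps):
            opt_z.zero_grad()
            loss_fn(gen(z), x).backward()
            opt_z.step()

    with torch.no_grad():
        x_syn = gen(z)                 # synthesized stand-in for the input image
        return target_model(x_syn)     # target network is only queried, never re-trained
```

Because the generator is re-initialized and updated only at inference time, there is no fixed set of pretrained defense parameters for an attacker to backpropagate through in advance, which is the rationale the abstract gives for the alternate-update scheme.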
