Generating Unrestricted Adversarial Examples via Three Parameters

03/13/2021
by Hanieh Naderi, et al.

Deep neural networks have been shown to be vulnerable to adversarial examples deliberately constructed to cause victim models to misclassify. As most adversarial examples restrict their perturbations to an L_p-norm ball, existing defense methods have focused on these types of perturbations, and less attention has been paid to unrestricted adversarial examples, which can create more realistic attacks that deceive models without affecting human predictions. To address this problem, the proposed adversarial attack generates an unrestricted adversarial example with a limited number of parameters. The attack selects three points on the input image and, based on their locations, transforms the image into an adversarial example. By limiting the range of movement and location of these three points and using a discriminatory network, the proposed unrestricted adversarial example preserves the image appearance. Experimental results show that the proposed adversarial examples obtain an average success rate of 93.5% and reduce model accuracy by an average of 73% on FMNIST, SVHN, CIFAR10, CIFAR100, and ImageNet. It should be noted that, for attacks, lower accuracy in the victim model denotes a more successful attack. Adversarial training with the attack also improves model robustness against randomly transformed images.
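The abstract describes the attack only at a high level: three points are chosen on the input image, and their constrained movement defines a transformation of the whole image. Below is a minimal sketch of that idea, assuming the three point correspondences parameterize an affine warp (an affine map is fully determined by exactly three point pairs, which matches the "three parameters" framing). The function names, the `max_shift` bound, the random search loop, and the `predict` stand-in are illustrative assumptions, not the paper's actual method, which additionally uses a discriminatory network to preserve appearance.

import numpy as np
import cv2

def three_point_transform(image, src_pts, max_shift=3.0, rng=None):
    """Warp `image` with an affine map defined by moving three anchor points.

    image     : HxW or HxWxC uint8 array
    src_pts   : (3, 2) float32 array of anchor points (x, y)
    max_shift : cap on how far each point may move (keeps the warp subtle)
    """
    rng = rng or np.random.default_rng()
    # Displace each anchor point within [-max_shift, max_shift] per axis.
    dst_pts = src_pts + rng.uniform(-max_shift, max_shift, size=(3, 2))
    # Three point correspondences uniquely determine an affine matrix.
    M = cv2.getAffineTransform(src_pts.astype(np.float32),
                               dst_pts.astype(np.float32))
    h, w = image.shape[:2]
    return cv2.warpAffine(image, M, (w, h), borderMode=cv2.BORDER_REPLICATE)

def attack(image, label, predict, src_pts, tries=100):
    """Random search over small three-point warps until `predict`
    (a stand-in for any victim classifier) changes its label."""
    for _ in range(tries):
        candidate = three_point_transform(image, src_pts)
        if predict(candidate) != label:
            return candidate  # unrestricted adversarial example found
    return None

The key design point this sketch illustrates is that the perturbation lives in a tiny transformation space (six affine degrees of freedom, bounded by `max_shift`) rather than in per-pixel L_p space, which is why the resulting examples can evade L_p-focused defenses while remaining natural to humans.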


