Adversarial examples by perturbing high-level features in intermediate decoder layers

10/14/2021
by Vojtěch Čermák, et al.

We propose a novel method for creating adversarial examples. Instead of perturbing pixels, we use an encoder-decoder representation of the input image and perturb intermediate layers in the decoder. Because this changes the high-level features provided by the generative model, the perturbation carries semantic meaning, such as a longer beak or green tints. We formulate the task as an optimization problem: minimize the Wasserstein distance between the adversarial and initial images subject to a misclassification constraint. We solve it with a projected gradient method that uses a simple inexact projection; since the projection keeps every iterate feasible, the method always produces an adversarial image. We perform numerical experiments on the MNIST and ImageNet datasets in both targeted and untargeted settings. We demonstrate that our adversarial images are much less vulnerable to steganographic defence techniques than pixel-based attacks. Moreover, we show that our method modifies key features such as edges and that defence techniques based on adversarial training are vulnerable to our attacks.
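In other words, the attack searches for decoder features z close to the features z0 of the clean image such that the decoded image is misclassified. The following is a minimal PyTorch sketch of that idea, not the authors' implementation: `encoder`, `dec_head`, `dec_tail` (the decoder split at the perturbed intermediate layer), and `classifier` are hypothetical pretrained modules, an L2 image distance stands in for the Wasserstein distance of the paper, and the inexact projection is approximated by a bisection along the segment between the current and initial features.

```python
import torch
import torch.nn.functional as F


def inexact_project(z, z0, dec_tail, classifier, target, n_bisect=10):
    """Bisect along the segment [z0, z] for the point closest to z0 whose
    decoded image is still classified as `target`. If even z is not yet
    adversarial, z is returned unchanged (batch size 1 assumed)."""
    with torch.no_grad():
        if classifier(dec_tail(z)).argmax(1).item() != target:
            return z
        lo, hi = 0.0, 1.0  # fraction of the way from z0 towards z
        for _ in range(n_bisect):
            mid = 0.5 * (lo + hi)
            pred = classifier(dec_tail(z0 + mid * (z - z0))).argmax(1).item()
            if pred == target:
                hi = mid  # still adversarial: move closer to z0
            else:
                lo = mid  # lost the target class: back off
        return z0 + hi * (z - z0)


def attack(x0, target, encoder, dec_head, dec_tail, classifier,
           steps=300, lr=0.05):
    """Targeted variant: make the decoded image be classified as `target`
    while keeping it close to the clean image x0."""
    with torch.no_grad():
        z0 = dec_head(encoder(x0))  # high-level features of the clean image
    z = z0.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    tgt = torch.tensor([target], device=x0.device)

    for _ in range(steps):
        x_adv = dec_tail(z)  # image generated from the perturbed features
        # Distance to the clean image, traded off against reaching the target
        # class; the projection below enforces the misclassification constraint.
        loss = F.mse_loss(x_adv, x0) + F.cross_entropy(classifier(x_adv), tgt)
        opt.zero_grad()
        loss.backward()
        opt.step()
        z.data = inexact_project(z.data, z0, dec_tail, classifier, target)

    with torch.no_grad():
        return dec_tail(z)  # the final adversarial image
```

The bisection mirrors the role of the inexact projection described above: whenever the current iterate is adversarial, it is pulled as close to the clean features as possible without losing the target label, so the loop can be stopped at any point and still yield an adversarial image.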


