Double Targeted Universal Adversarial Perturbations

10/07/2020
by Philipp Benz et al.
Despite their impressive performance, deep neural networks (DNNs) are widely known to be vulnerable to adversarial attacks, which makes it challenging to deploy them in security-sensitive applications such as autonomous driving. Image-dependent perturbations fool a network for one specific image, while universal adversarial perturbations can fool a network for samples from all classes without selection. We introduce double targeted universal adversarial perturbations (DT-UAPs) to bridge the gap between instance-discriminative image-dependent perturbations and generic universal perturbations. This universal perturbation shifts one targeted source class to a chosen sink class while having a limited adversarial effect on the other, non-targeted source classes, so as to avoid raising suspicion. Since it targets a source class and a sink class simultaneously, we term it the double targeted attack (DTA). This gives an attacker the freedom to perform precise attacks on a DNN model while raising little suspicion. We show the effectiveness of the proposed DTA algorithm on a wide range of datasets and also demonstrate its potential as a physical attack.
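The paper itself does not spell out its objective here, but the abstract's description suggests a loss with two terms: push source-class samples toward the sink class, while keeping all other samples at their original labels. The following is a minimal, hypothetical sketch of such an objective in NumPy; the names `dta_loss`, `source_class`, and `sink_class` are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def softmax(z):
    # Numerically stable row-wise softmax.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def dta_loss(logits, labels, source_class, sink_class):
    """Sketch of a double-targeted objective (assumed form):
    samples of `source_class` should be classified as `sink_class`,
    while all other samples should keep their original labels."""
    p = softmax(logits)
    n = len(labels)
    # Retarget only the source class; leave every other label unchanged.
    targets = np.where(labels == source_class, sink_class, labels)
    # Standard cross-entropy against the retargeted labels.
    return -np.log(p[np.arange(n), targets] + 1e-12).mean()
```

Minimizing this loss over a single additive perturbation (applied to all inputs before the forward pass) would yield a DT-UAP-style attack: low loss only when source-class samples land in the sink class and every other class is left intact.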

Related research

03/02/2021
A Survey On Universal Adversarial Attack
Deep neural networks (DNNs) have demonstrated remarkable performance for...

10/07/2021
One Thing to Fool them All: Generating Interpretable, Universal, and Physically-Realizable Adversarial Features
It is well understood that modern deep networks are vulnerable to advers...

11/19/2020
Multi-Task Adversarial Attack
Deep neural networks have achieved impressive performance in various are...

10/07/2020
CD-UAP: Class Discriminative Universal Adversarial Perturbation
A single universal adversarial perturbation (UAP) can be added to all na...

05/27/2019
Label Universal Targeted Attack
We introduce Label Universal Targeted Attack (LUTA) that makes a deep mo...

08/11/2021
Turning Your Strength against You: Detecting and Mitigating Robust and Universal Adversarial Patch Attack
Adversarial patch attack against image classification deep neural networ...

07/11/2019
Why Blocking Targeted Adversarial Perturbations Impairs the Ability to Learn
Despite their accuracy, neural network-based classifiers are still prone...