Label Universal Targeted Attack

05/27/2019
by Naveed Akhtar, et al.

We introduce Label Universal Targeted Attack (LUTA), which makes a deep model predict an attacker-chosen label for any sample of a given source class with high probability. Our attack stochastically maximizes the log-probability of the target label over the source class with first-order gradient optimization while accounting for the gradient moments. It also suppresses leakage of the attack to non-source classes, so that the perturbation does not raise suspicion. The resulting perturbations achieve high fooling ratios on large-scale ImageNet and VGGFace models and transfer well to the physical world. Exploiting the full control LUTA provides over the perturbation scope, we also demonstrate it as a tool for deep model autopsy. The attack reveals interesting perturbation patterns and observations about deep models.
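To make the optimization concrete, below is a minimal PyTorch sketch of the idea the abstract describes: one perturbation is learned over batches of source-class samples by maximizing the target label's log-probability with Adam-style gradient moments, while a penalty term discourages the target label on non-source samples (the "leakage suppression"). Everything here, including the function name luta_sketch, the leak_weight penalty, the l-infinity bound eps, and all hyperparameters, is an illustrative assumption, not the authors' released implementation.

```python
# Hypothetical sketch (PyTorch) of a label-universal targeted perturbation
# in the spirit of LUTA. Names and hyperparameters are assumptions.
import torch
import torch.nn.functional as F

def _loop(loader):
    # Cycle through a DataLoader indefinitely.
    while True:
        for batch in loader:
            yield batch

def luta_sketch(model, source_loader, nonsource_loader, target_label,
                input_shape=(3, 224, 224), eps=16 / 255, steps=1000,
                lr=0.01, beta1=0.9, beta2=0.999, leak_weight=0.1,
                device="cpu"):
    """Learn one perturbation that pushes any source-class sample toward
    `target_label` while suppressing leakage to non-source classes."""
    model.eval().to(device)
    delta = torch.zeros(1, *input_shape, device=device)
    m = torch.zeros_like(delta)  # first gradient moment (Adam-style)
    v = torch.zeros_like(delta)  # second gradient moment

    src, nsrc = _loop(source_loader), _loop(nonsource_loader)
    for t in range(1, steps + 1):
        x_s = next(src)[0].to(device)   # source-class batch
        x_n = next(nsrc)[0].to(device)  # non-source batch

        delta.requires_grad_(True)
        logp_s = F.log_softmax(model((x_s + delta).clamp(0, 1)), dim=1)
        logp_n = F.log_softmax(model((x_n + delta).clamp(0, 1)), dim=1)

        # Stochastically maximize the target log-probability on the source
        # class; penalize target-label leakage on non-source samples.
        fool = logp_s[:, target_label].mean()
        leak = logp_n[:, target_label].mean()
        loss = -fool + leak_weight * leak

        grad = torch.autograd.grad(loss, delta)[0]
        with torch.no_grad():
            # First-order update with bias-corrected gradient moments.
            m = beta1 * m + (1 - beta1) * grad
            v = beta2 * v + (1 - beta2) * grad ** 2
            step = m / (1 - beta1 ** t) / ((v / (1 - beta2 ** t)).sqrt() + 1e-8)
            delta = (delta - lr * step).clamp(-eps, eps)  # bound the scope
    return delta.detach()
```

For real inputs one would also match the model's preprocessing (normalization, input size) and tune eps and leak_weight to trade off fooling ratio against leakage to non-source classes.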


Related research

10/07/2020
Double Targeted Universal Adversarial Perturbations
Despite their impressive performance, deep neural networks (DNNs) are wi...

10/07/2020
CD-UAP: Class Discriminative Universal Adversarial Perturbation
A single universal adversarial perturbation (UAP) can be added to all na...

02/19/2022
Label-Smoothed Backdoor Attack
By injecting a small number of poisoned samples into the training set, b...

06/20/2021
Attack to Fool and Explain Deep Networks
Deep visual models are susceptible to adversarial perturbations to input...

01/05/2023
Silent Killer: Optimizing Backdoor Trigger Yields a Stealthy and Powerful Data Poisoning Attack
We propose a stealthy and powerful backdoor attack on neural networks ba...

04/29/2020
Perturbing Across the Feature Hierarchy to Improve Standard and Strict Blackbox Attack Transferability
We consider the blackbox transfer-based targeted adversarial attack thre...

03/17/2021
Can Targeted Adversarial Examples Transfer When the Source and Target Models Have No Label Space Overlap?
We design blackbox transfer-based targeted adversarial attacks for an en...
