Generalizable Data-free Objective for Crafting Universal Adversarial Perturbations

01/24/2018
by Konda Reddy Mopuri, et al.

Machine learning models are susceptible to adversarial perturbations: small changes to the input that can cause large changes in the output. It has also been demonstrated that there exist input-agnostic perturbations, called universal adversarial perturbations, which can change the inference of a target model on most data samples. However, existing methods for crafting universal perturbations (i) are task specific, (ii) require samples from the training data distribution, and (iii) perform complex optimizations. Moreover, because of this data dependence, the fooling ability of the crafted perturbations is proportional to the amount of available training data. In this paper, we present a novel, generalizable, data-free objective for crafting universal adversarial perturbations. Independent of the underlying task, our objective achieves fooling by corrupting the features extracted at multiple layers of the target network. The proposed objective therefore generalizes to crafting image-agnostic perturbations across multiple vision tasks, such as object recognition, semantic segmentation, and depth estimation. In the practical black-box attack setting, we show that our objective outperforms data-dependent objectives at fooling the learned models. Further, by exploiting simple priors related to the data distribution, our objective remarkably boosts the fooling ability of the crafted perturbations. The significant fooling rates achieved by our objective emphasize that current deep learning models are at increased risk, since the objective generalizes across multiple tasks without requiring training data to craft the perturbations.
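The data-free objective described above corrupts features at multiple layers by optimizing the perturbation alone, with no input images. The following is a minimal sketch of that idea, assuming PyTorch and a torchvision VGG-16 as the target; the hooked layers, optimizer, learning rate, iteration count, and the L-infinity budget `xi` are illustrative assumptions, not the paper's exact recipe. The loss maximizes the product of activation norms across layers by minimizing the negative sum of their logarithms, while the perturbation is clipped to remain imperceptible.

```python
# Minimal sketch of a data-free universal-perturbation objective:
# optimize a perturbation delta (no input images) so that it inflates
# feature activations at multiple layers of a target network. Assumes
# PyTorch and a torchvision VGG-16; all hyper-parameters here are
# illustrative, not the paper's exact settings.
import torch
import torchvision.models as models

model = models.vgg16(weights="IMAGENET1K_V1").eval()
for p in model.parameters():
    p.requires_grad_(False)

# Record the output of every convolutional layer via forward hooks.
activations = []
def save_activation(_module, _inputs, output):
    activations.append(output)

for layer in model.features:
    if isinstance(layer, torch.nn.Conv2d):
        layer.register_forward_hook(save_activation)

xi = 10.0 / 255.0                                  # L-infinity budget
delta = (torch.rand(1, 3, 224, 224) * 2 - 1) * xi  # random start in [-xi, xi]
delta.requires_grad_(True)
optimizer = torch.optim.Adam([delta], lr=0.1)

for _ in range(200):                               # illustrative iteration count
    activations.clear()
    model(delta)                                   # forward the perturbation alone
    # Maximize the product of activation norms across layers, i.e.
    # minimize the negative sum of their logs, so the features at every
    # hooked layer are corrupted simultaneously.
    loss = -sum(torch.log(act.norm()) for act in activations)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        delta.clamp_(-xi, xi)                      # keep delta imperceptible
```

At test time, the single crafted perturbation would simply be added to any input image. Because the objective never touches training data, the same procedure would apply to segmentation or depth networks by hooking their feature layers instead, which is what makes it task-agnostic.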


03/03/2020

Data-Free Adversarial Perturbations for Practical Black-Box Attack

Neural networks are vulnerable to adversarial examples, which are malici...

07/18/2017

Fast Feature Fool: A data independent approach to universal adversarial perturbations

State-of-the-art object recognition Convolutional Neural Networks (CNNs)...

08/03/2018

Ask, Acquire, and Attack: Data-free UAP Generation using Class Impressions

Deep learning models are susceptible to input specific noise, called adv...

05/16/2020

Universal Adversarial Perturbations: A Survey

Over the past decade, Deep Learning has emerged as a useful and efficien...

03/02/2022

Detecting Adversarial Perturbations in Multi-Task Perception

While deep neural networks (DNNs) achieve impressive performance on envi...

08/04/2020

Can Adversarial Weight Perturbations Inject Neural Backdoors?

Adversarial machine learning has exposed several security hazards of neu...

09/25/2021

MINIMAL: Mining Models for Data Free Universal Adversarial Triggers

It is well known that natural language models are vulnerable to adversar...