Fast Feature Fool: A data independent approach to universal adversarial perturbations

07/18/2017
by Konda Reddy Mopuri, et al.

State-of-the-art object recognition Convolutional Neural Networks (CNNs) are shown to be fooled by image-agnostic perturbations, called universal adversarial perturbations. These perturbations are also observed to generalize across multiple networks trained on the same target data. However, existing algorithms require samples from the data on which the CNNs were trained and compute the perturbations via complex optimization, and their fooling performance is directly proportional to the amount of available training data. This makes them unsuitable for practical attacks, since it is unreasonable to assume an attacker has access to the training data. In this paper, for the first time, we propose a novel data-independent approach to generate image-agnostic perturbations for a range of CNNs trained for object recognition. We further show that these perturbations are transferable across multiple network architectures trained on either the same or different data. In the absence of data, our method generates universal adversarial perturbations efficiently by fooling the features learned at multiple layers, thereby causing CNNs to misclassify. Experiments demonstrate impressive fooling rates and surprising transferability for the proposed universal perturbations generated without any training data.
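The abstract sketches the core idea: rather than optimizing over training images, the perturbation itself is optimized so that it over-activates the features learned at several layers of the target CNN. Below is a minimal sketch of that idea in PyTorch, assuming a torchvision VGG-16 backbone; the hooked layers, optimizer, step count, and the L-infinity bound xi are illustrative assumptions, not the authors' exact settings.

```python
import torch
import torchvision.models as models

# Frozen pretrained classifier; only the perturbation is optimized.
model = models.vgg16(pretrained=True).eval()
for p in model.parameters():
    p.requires_grad_(False)

# Collect mean activation magnitudes from the ReLU layers via forward hooks.
activations = []
def hook(_module, _inp, out):
    activations.append(out.abs().mean())

for layer in model.features:
    if isinstance(layer, torch.nn.ReLU):
        layer.register_forward_hook(hook)

xi = 10 / 255.0  # assumed L-infinity bound keeping the perturbation quasi-imperceptible
delta = (torch.rand(1, 3, 224, 224) * 2 - 1) * xi  # random init within the bound
delta.requires_grad_(True)
opt = torch.optim.Adam([delta], lr=0.1)

for step in range(200):
    activations.clear()
    opt.zero_grad()
    model(delta)  # feed the perturbation alone: no training data is used
    # Maximize activations at all hooked layers (negative log-product as the loss).
    loss = -sum(torch.log(a + 1e-8) for a in activations)
    loss.backward()
    opt.step()
    with torch.no_grad():
        delta.clamp_(-xi, xi)  # project back onto the L-infinity ball
```

The resulting delta is added to any test image at attack time; because the objective never touches the data distribution, the same perturbation is intended to fool the network on arbitrary inputs.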


Related research:

01/24/2018 · Generalizable Data-free Objective for Crafting Universal Adversarial Perturbations
Machine learning models are susceptible to adversarial perturbations: sm...

10/28/2020 · Transferable Universal Adversarial Perturbations Using Generative Models
Deep neural networks tend to be vulnerable to adversarial perturbations,...

11/30/2022 · Towards Interpreting Vulnerability of Multi-Instance Learning via Customized and Universal Adversarial Perturbations
Multi-instance learning (MIL) is a great paradigm for dealing with compl...

06/26/2019 · Universal Litmus Patterns: Revealing Backdoor Attacks in CNNs
The unprecedented success of deep neural networks in various application...

10/18/2022 · Transferable Unlearnable Examples
With more people publishing their personal data online, unauthorized dat...

06/27/2023 · On the Universal Adversarial Perturbations for Efficient Data-free Adversarial Detection
Detecting adversarial samples that are carefully crafted to fool the mod...

10/04/2020 · A Study for Universal Adversarial Attacks on Texture Recognition
Given the outstanding progress that convolutional neural networks (CNNs)...
