Meta Adversarial Perturbations

11/19/2021
by Chia-Hung Yuan, et al.

A plethora of attack methods have been proposed to generate adversarial examples, among which iterative methods have been shown to find strong attacks. However, computing an adversarial perturbation for a new data point requires solving a time-consuming optimization problem from scratch, and generating a stronger attack typically requires more update iterations per data point. In this paper, we show the existence of a meta adversarial perturbation (MAP): a better initialization that causes natural images to be misclassified with high probability after only a one-step gradient ascent update, and we propose an algorithm for computing such perturbations. We conduct extensive experiments, and the empirical results demonstrate that state-of-the-art deep neural networks are vulnerable to meta perturbations. We further show that these perturbations are not only image-agnostic but also model-agnostic, as a single perturbation generalizes well across unseen data points and different neural network architectures.
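The idea described above can be sketched in code. The following is a minimal first-order toy illustration, not the paper's actual algorithm: it meta-trains a single perturbation `delta` on a small linear classifier so that a one-step gradient-ascent attack launched from `x + delta` raises the loss more than the same one-step attack launched from the clean input. All names, the model, and the step sizes (`eps`, `alpha`, `beta`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_feat, n_cls = 10, 3
W = rng.normal(size=(n_feat, n_cls))      # fixed toy "victim" linear classifier
X_train = rng.normal(size=(200, n_feat))
X_test = rng.normal(size=(100, n_feat))
y_train = (X_train @ W).argmax(axis=1)    # labels the model gets right on clean data
y_test = (X_test @ W).argmax(axis=1)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def loss_and_input_grad(x, y):
    """Mean cross-entropy loss and its gradient w.r.t. the input x."""
    p = softmax(x @ W)
    loss = -np.log(p[np.arange(len(y)), y] + 1e-12).mean()
    dlogits = (p - np.eye(n_cls)[y]) / len(y)
    return loss, dlogits @ W.T

eps, alpha, beta = 0.5, 0.1, 0.05         # illustrative budget and step sizes
delta = np.zeros(n_feat)                  # the meta perturbation (image-agnostic)

# Meta-training: nudge delta toward an initialization from which a single
# gradient-ascent step yields a high loss (first-order approximation).
for _ in range(50):
    _, g = loss_and_input_grad(X_train + delta, y_train)
    x_adv = X_train + delta + alpha * np.sign(g)   # one-step inner attack
    _, g_adv = loss_and_input_grad(x_adv, y_train)
    delta = np.clip(delta + beta * np.sign(g_adv.mean(axis=0)), -eps, eps)

def one_step_attack(x, y, init):
    """One gradient-ascent step starting from x + init."""
    _, g = loss_and_input_grad(x + init, y)
    return x + init + alpha * np.sign(g)

# On held-out data, a one-step attack initialized at delta should hurt
# the model more than the same one-step attack from a zero initialization.
loss_map, _ = loss_and_input_grad(one_step_attack(X_test, y_test, delta), y_test)
loss_zero, _ = loss_and_input_grad(one_step_attack(X_test, y_test, 0.0), y_test)
print(f"one-step loss from MAP init:  {loss_map:.3f}")
print(f"one-step loss from zero init: {loss_zero:.3f}")
```

Because `delta` is shared across all training points, the sketch also shows the image-agnostic aspect: the same vector is evaluated on unseen test inputs.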


Related research

Universal Adversarial Perturbation for Text Classification (10/10/2019)
Given a state-of-the-art deep neural network text classifier, we show th...

Universal adversarial perturbations (10/26/2016)
Given a state-of-the-art deep neural network classifier, we show the exi...

Generative Adversarial Perturbations (12/06/2017)
In this paper, we propose novel generative models for creating adversari...

Lipschitz-Margin Training: Scalable Certification of Perturbation Invariance for Deep Neural Networks (02/12/2018)
High sensitivity of neural networks against malicious perturbations on i...

Trust Region Based Adversarial Attack on Neural Networks (12/16/2018)
Deep Neural Networks are quite vulnerable to adversarial perturbations. ...

Towards Leveraging the Information of Gradients in Optimization-based Adversarial Attack (12/06/2018)
In recent years, deep neural networks demonstrated state-of-the-art perf...

Consistent Attack: Universal Adversarial Perturbation on Embodied Vision Navigation (06/12/2022)
Embodied agents in vision navigation coupled with deep neural networks h...
