A principled approach for generating adversarial images under non-smooth dissimilarity metrics

08/05/2019
by Aram-Alexandre Pooladian, et al.

Deep neural networks are vulnerable to adversarial perturbations: small changes in the input easily lead to misclassification. In this work, we propose an attack methodology catered not only to cases where the perturbations are measured by ℓ_p norms, but to any adversarial dissimilarity metric with a closed proximal form. This includes, but is not limited to, ℓ_1, ℓ_2, and ℓ_∞ perturbations, as well as the ℓ_0 counting "norm", i.e. true sparseness. Our approach to generating perturbations is a natural extension of our recent work, the LogBarrier attack, which previously required the metric to be differentiable. We demonstrate our new algorithm, ProxLogBarrier, on the MNIST, CIFAR10, and ImageNet-1k datasets. We attack undefended and defended models, and show that our algorithm transfers to various datasets with little parameter tuning. In particular, in the ℓ_0 case, our algorithm finds significantly smaller perturbations compared to multiple existing methods.
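The abstract describes a proximal-gradient-style update: a gradient step on the attack objective followed by the proximal operator of the chosen (possibly non-smooth) dissimilarity metric. Below is a minimal sketch of that idea, not the authors' ProxLogBarrier implementation; the toy linear loss and the helper names (prox_l1, prox_l0, attack_step) are illustrative assumptions. The ℓ_1 prox is soft-thresholding and the ℓ_0 prox is hard-thresholding.

```python
import numpy as np

def prox_l1(z, t):
    """Proximal operator of t*||z||_1 (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def prox_l0(z, t):
    """Proximal operator of t*||z||_0 (hard-thresholding): keep entries with z_i^2 > 2t."""
    out = z.copy()
    out[z**2 <= 2.0 * t] = 0.0
    return out

def attack_step(delta, grad_loss, step, prox, reg):
    """One proximal-gradient step: gradient descent on the attack loss,
    followed by the prox of the non-smooth dissimilarity metric."""
    return prox(delta - step * grad_loss, step * reg)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy linear surrogate: attack loss(delta) = -w.(x + delta), so its gradient is -w.
    w = rng.normal(size=100)
    delta = np.zeros(100)
    for _ in range(50):
        grad = -w  # gradient of the toy attack loss w.r.t. the perturbation
        delta = attack_step(delta, grad, step=0.01, prox=prox_l1, reg=0.05)
    print("nonzero perturbation entries:", np.count_nonzero(delta))
```

Swapping prox_l1 for prox_l0 in the loop would target sparse (ℓ_0) perturbations instead; this flexibility of the prox step is the point the abstract makes about handling any metric with a closed proximal form.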

