Robust Universal Adversarial Perturbations

06/22/2022
by Changming Xu, et al.

Universal Adversarial Perturbations (UAPs) are imperceptible, image-agnostic vectors that cause deep neural networks (DNNs) to misclassify inputs from a data distribution with high probability. Existing methods do not create UAPs robust to transformations, thereby limiting their applicability as real-world attacks. In this work, we introduce a new concept and formulation of robust universal adversarial perturbations. Based on our formulation, we build a novel, iterative algorithm that leverages probabilistic robustness bounds to generate UAPs robust against transformations formed by composing arbitrary sub-differentiable transformation functions. We perform an extensive evaluation on the popular CIFAR-10 and ILSVRC 2012 datasets, measuring robustness under human-interpretable semantic transformations, such as rotation and contrast changes, that are common in the real world. Our results show that our generated UAPs are significantly more robust than those from baselines.
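The core idea of the iterative procedure can be sketched as follows. This is a toy illustration, not the authors' implementation: a linear softmax model stands in for the DNN, and a random affine change (contrast scaling plus brightness shift) stands in for general compositions of sub-differentiable transformations. A single perturbation `delta` is optimized by gradient ascent on the expected loss over sampled transformations, then projected onto an L-infinity ball. All names and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained DNN: a fixed linear softmax classifier.
n_classes, dim = 3, 8
W = rng.normal(size=(n_classes, dim))
X = rng.normal(size=(32, dim))           # surrogate "dataset"
y = W.dot(X.T).argmax(axis=0)            # labels the clean model predicts

def softmax(z):
    z = z - z.max(axis=0, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=0, keepdims=True)

def random_transform(rng):
    """Sample a sub-differentiable transformation t(x) = a*x + b
    (contrast + brightness as a simple proxy for rotation, etc.)."""
    a = rng.uniform(0.8, 1.2)
    b = rng.uniform(-0.1, 0.1)
    return a, b

def robust_uap(X, y, W, eps=0.5, steps=200, lr=0.05, n_transforms=4, rng=rng):
    """Optimize one universal delta against randomly sampled transforms."""
    delta = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = np.zeros_like(delta)
        for _ in range(n_transforms):             # Monte Carlo over transforms
            a, b = random_transform(rng)
            logits = W @ (a * (X + delta) + b).T  # model on transformed input
            p = softmax(logits)                   # (n_classes, n_samples)
            p[y, np.arange(len(y))] -= 1.0        # d(cross-entropy)/d(logits)
            grad += a * (W.T @ p).mean(axis=1)    # chain rule through t(x)
        delta += lr * np.sign(grad)               # ascent step (FGSM-style)
        delta = np.clip(delta, -eps, eps)         # project onto L_inf ball
    return delta

delta = robust_uap(X, y, W)
# Fooling rate of the single perturbation over the whole surrogate dataset.
fooled = (W @ (X + delta).T).argmax(axis=0) != y
print(f"fooling rate: {fooled.mean():.2f}")
```

The key structural point the sketch captures is that the gradient is averaged over sampled transformations before each update, so the resulting perturbation must work in expectation across the transformation distribution rather than only on untransformed inputs.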


