A Geometric Perspective on the Transferability of Adversarial Directions

11/08/2018
by Zachary Charles et al.

State-of-the-art machine learning models frequently misclassify inputs that have been perturbed in an adversarial manner. Adversarial perturbations generated for a given input and a specific classifier often seem to be effective on other inputs and even different classifiers. In other words, adversarial perturbations seem to transfer between different inputs, models, and even different neural network architectures. In this work, we show that in the context of linear classifiers and two-layer ReLU networks, there provably exist directions that give rise to adversarial perturbations for many classifiers and data points simultaneously. We show that these "transferable adversarial directions" are guaranteed to exist for linear separators of a given set, and will exist with high probability for linear classifiers trained on independent sets drawn from the same distribution. We extend our results to large classes of two-layer ReLU networks. We further show that adversarial directions for ReLU networks transfer to linear classifiers while the reverse need not hold, suggesting that adversarial perturbations for more complex models are more likely to transfer to other classifiers. We validate our findings empirically, even for deeper ReLU networks.
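The intuition behind transferable adversarial directions is easiest to see for linear classifiers: a linear separator's own weight vector is an adversarial direction for every input, and two separators trained on data from the same distribution tend to have well-aligned weight vectors, so one classifier's direction transfers to the other. The sketch below illustrates this with two hypothetical weight vectors `w1` and `w2` (toy values, not from the paper); it is an illustrative minimal example, not the paper's construction or proof.

```python
import numpy as np

# Two hypothetical linear separators whose weight vectors point in
# similar directions, as tends to happen when both are trained on
# independent samples from the same distribution.
w1 = np.array([1.0, 0.5])
w2 = np.array([0.9, 0.6])

def predict(w, x):
    """Sign of the linear classifier w on input x."""
    return np.sign(w @ x)

# A point correctly classified as positive by both separators.
x = np.array([2.0, 1.0])
assert predict(w1, x) == 1 and predict(w2, x) == 1

# Perturb along the single direction -w1 (the adversarial direction
# for the first classifier), stepping just past its decision boundary.
eps = 1.1 * (w1 @ x) / (w1 @ w1)
x_adv = x - eps * w1

# Because w2 is well aligned with w1, the same direction also crosses
# the second classifier's boundary: the perturbation transfers.
print(predict(w1, x_adv), predict(w2, x_adv))  # both flip to -1
```

The step size `eps` is chosen analytically here for clarity; in practice an attacker would use a fixed small budget, and the paper's results concern when such a shared direction is guaranteed to exist.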


