The Curse of Concentration in Robust Learning: Evasion and Poisoning Attacks from Concentration of Measure

09/09/2018
by Saeed Mahloujifar, et al.

Many modern machine learning classifiers have been shown to be vulnerable to adversarial perturbations of the instances that can "evade" the classifier and get misclassified. Despite a massive amount of work on making classifiers robust, the task remains challenging. In this work, through a theoretical study, we investigate the adversarial risk and robustness of classifiers and draw a connection to the well-known phenomenon of "concentration of measure" in metric measure spaces. We show that if the metric probability space of the test instances is concentrated, any classifier with some initial constant error is inherently vulnerable to adversarial perturbations. One class of concentrated metric probability spaces is the so-called Levy families, which include many natural distributions. In this special case, our attacks only need to perturb the test instance by at most O(√n) to make it misclassified, where n is the data dimension. Using our general result about Levy instance spaces, we first recover, as special cases, some previously proved results about the existence of adversarial examples. Moreover, many more Levy families are known, for which we immediately obtain new attacks that find adversarial examples (e.g., product distributions under the Hamming distance). Finally, we show that concentration of measure for product spaces implies the existence of so-called "poisoning" attacks, in which the adversary tampers with the training data with the goal of increasing the error of the classifier. We show that for any deterministic learning algorithm that uses m training examples, there is an adversary who substitutes O(√m) of the examples with other (still correctly labeled) ones and can thereby almost fully degrade the confidence parameter of any PAC learning algorithm, or, if the adversary also knows the final test example, increase the risk to almost 1.
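To make the concentration phenomenon concrete, here is a minimal numerical sketch (not the paper's construction) for the simplest concentrated space the abstract mentions: the uniform distribution on the Boolean hypercube {0,1}^n under the Hamming distance. The error region A and the budget r = 2√n below are illustrative choices; the computation shows that a set of measure about 2% blows up to measure about 98% once an adversary may flip O(√n) coordinates, which is exactly why a classifier with a small constant initial error becomes almost always evadable.

```python
import numpy as np
from scipy.stats import binom

n = 10_000  # data dimension

# Illustrative "error region": A = {x in {0,1}^n : sum(x) <= t}.
# Under the uniform distribution, sum(x) ~ Binomial(n, 1/2), so with
# t = n/2 - sqrt(n) we get mu(A) ~ Phi(-2) ~ 2.3%.
t = n // 2 - int(np.sqrt(n))
mu_A = binom.cdf(t, n, 0.5)

# Hamming blow-up A_r = {x : dist(x, A) <= r}. Because A is monotone
# decreasing, dist(x, A) = max(sum(x) - t, 0): flipping r ones to zeros
# lowers the coordinate sum by exactly r. Hence x is in A_r iff
# sum(x) <= t + r, and mu(A_r) is again a binomial CDF.
r = 2 * int(np.sqrt(n))  # an O(sqrt(n)) perturbation budget
mu_A_r = binom.cdf(t + r, n, 0.5)

print(f"initial error      mu(A)   = {mu_A:.4f}")    # ~ 0.02
print(f"adversarial risk   mu(A_r) = {mu_A_r:.4f}")  # ~ 0.98
```

The same blow-up argument, applied to the product space of the m training examples (again under the Hamming distance), underlies the poisoning result: substituting O(√m) examples is enough to move most training sets into a "bad" event that initially occurs with constant probability.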


Related research

10/02/2018 · Can Adversarially Robust Learning Leverage Computational Hardness?
Making learners robust to adversarial perturbation at test time (i.e., e...

06/13/2019 · Lower Bounds for Adversarially Robust PAC Learning
In this work, we initiate a formal study of probably approximately corre...

03/04/2020 · Metrics and methods for robustness evaluation of neural networks with generative models
Recent studies have shown that modern deep neural network classifiers ar...

11/10/2017 · Learning under p-Tampering Attacks
Mahloujifar and Mahmoody (TCC'17) studied attacks against learning algor...

05/29/2019 · Empirically Measuring Concentration: Fundamental Limits on Intrinsic Robustness
Many recent works have shown that adversarial examples that fool classif...

07/10/2020 · Beyond Perturbations: Learning Guarantees with Arbitrary Adversarial Test Examples
We present a transductive learning algorithm that takes as input trainin...

05/25/2018 · Adversarial examples from computational constraints
Why are classifiers in high dimension vulnerable to "adversarial" pertur...
