Empirically Measuring Concentration: Fundamental Limits on Intrinsic Robustness

05/29/2019
by   Saeed Mahloujifar, et al.

Many recent works have shown that adversarial examples capable of fooling classifiers can be found by minimally perturbing a normal input. Theoretical results, starting with Gilmer et al. (2018), show that if the inputs are drawn from a concentrated metric probability space, then adversarial examples with small perturbations are inevitable. A concentrated space has the property that any subset with Ω(1) measure (e.g., 1/100) under the imposed distribution is within small distance of almost all (e.g., 99/100) of the points in the space. It is not clear, however, whether these theoretical results apply to actual distributions such as images. This paper presents a method for empirically measuring and bounding the concentration of a concrete dataset, and the method provably converges to the actual concentration. We use it to empirically estimate the intrinsic robustness of several image classification benchmarks to ℓ_∞ and ℓ_2 perturbations.
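To make the concentration property concrete, the following is a minimal sketch (not the paper's algorithm) of estimating, from samples, how much an Ω(1)-measure subset expands under ℓ_∞ perturbations. The toy uniform distribution, the slab-shaped subset A, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a data distribution: n samples drawn uniformly from [0, 1]^d.
n, d, eps = 2000, 16, 0.1
X = rng.random((n, d))

# Pick a subset A with Omega(1) measure (here: an axis-aligned slab, ~10% of mass).
in_A = X[:, 0] < 0.1
A = X[in_A]

# Empirical eps-expansion of A under the l_inf metric: a sample is in the
# expansion if its nearest point of A is within eps in every coordinate.
pairwise_linf = np.abs(X[:, None, :] - A[None, :, :]).max(axis=2)
in_expansion = pairwise_linf.min(axis=1) <= eps

print(f"measure(A)     ~ {in_A.mean():.3f}")
print(f"measure(A^eps) ~ {in_expansion.mean():.3f}")
```

In a strongly concentrated space, `measure(A^eps)` would approach 1 even for small `eps`; with few samples in moderate dimension the empirical expansion can stay close to `measure(A)`, which is one reason careful estimators with convergence guarantees are needed.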


Related research:

- Understanding the Intrinsic Robustness of Image Distributions using Conditional Generative Models (03/01/2020)
- Improved Estimation of Concentration Under ℓ_p-Norm Distance Metrics Using Half Spaces (03/24/2021)
- Structure-Preserving Transformation: Generating Diverse and Transferable Adversarial Examples (09/08/2018)
- The Curse of Concentration in Robust Learning: Evasion and Poisoning Attacks from Concentration of Measure (09/09/2018)
- Logit Pairing Methods Can Fool Gradient-Based Attacks (10/29/2018)
- Towards a Theoretical Understanding of the Robustness of Variational Autoencoders (07/14/2020)
- Minimal Learning Machine: Theoretical Results and Clustering-Based Reference Point Selection (09/22/2019)
