Robustness of classifiers to uniform ℓ_p and Gaussian noise

02/22/2018
by Jean-Yves Franceschi, et al.

We study the robustness of classifiers to several models of random noise. In particular, we consider noise drawn uniformly from the ℓ_p ball for p ∈ [1, ∞] and Gaussian noise with an arbitrary covariance matrix. We characterize this robustness to random noise in terms of the distance to the decision boundary of the classifier. The analysis applies to linear classifiers as well as to classifiers with locally approximately flat decision boundaries, a condition satisfied by state-of-the-art deep neural networks. The predicted robustness is verified experimentally.
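To make the linear case concrete, here is a minimal Monte Carlo sketch (not the authors' code) for a linear classifier f(x) = ⟨w, x⟩ + b: under isotropic Gaussian noise of standard deviation σ, the decision flips with probability Φ(−r/σ), where r is the Euclidean distance from x to the decision boundary. All dimensions, parameter values, and the choice of noise radius are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
d = 100                                  # input dimension (illustrative)
w = rng.standard_normal(d)               # weights of a linear classifier
b = 0.5                                  # bias (illustrative)
x = rng.standard_normal(d)               # a fixed input point

def decision(z):
    """Sign of the linear classifier f(z) = <w, z> + b."""
    return np.sign(z @ w + b)

# Euclidean distance from x to the hyperplane {z : <w, z> + b = 0}.
r = abs(x @ w + b) / np.linalg.norm(w)

# Gaussian noise: for a linear classifier, the flip probability under
# v ~ N(0, sigma^2 I) is exactly Phi(-r / sigma).
sigma = r                                # noise scale ~ boundary distance
n = 200_000
v = sigma * rng.standard_normal((n, d))
flip_rate = np.mean(decision(x + v) != decision(x))
print(f"empirical Gaussian flip rate: {flip_rate:.4f}")
print(f"predicted Phi(-r/sigma):      {norm.cdf(-r / sigma):.4f}")

# Uniform noise in an l2 ball of radius R: uniform direction on the
# sphere, radius scaled by U^(1/d). (Uniform sampling from a general
# l_p ball uses the generalized-Gaussian construction; omitted here.)
R = 3 * r
u = rng.standard_normal((n, d))
u /= np.linalg.norm(u, axis=1, keepdims=True)
v_ball = (R * rng.random(n) ** (1 / d))[:, None] * u
print(f"l2-ball flip rate:            "
      f"{np.mean(decision(x + v_ball) != decision(x)):.4f}")
```

With σ = r, the empirical Gaussian flip rate should land near Φ(−1) ≈ 0.159. For classifiers with locally approximately flat decision boundaries, the paper predicts the same qualitative dependence on r; the sketch above covers only the exactly linear case.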



Related research

08/31/2016
Robustness of classifiers: from adversarial to random noise
Several recent works have shown that state-of-the-art classifiers are vu...

04/05/2020
Dynamic Decision Boundary for One-class Classifiers applied to non-uniformly Sampled Data
A typical issue in Pattern Recognition is the non-uniformly sampled data...

07/14/2020
Explicit Regularisation in Gaussian Noise Injections
We study the regularisation induced in neural networks by Gaussian noise...

12/18/2022
Confidence-aware Training of Smoothed Classifiers for Certified Robustness
Any classifier can be "smoothed out" under Gaussian noise to build a new...

02/14/2020
Random Smoothing Might be Unable to Certify ℓ_∞ Robustness for High-Dimensional Images
We show a hardness result for random smoothing to achieve certified adve...

05/15/2020
Recovering Data Permutation from Noisy Observations: The Linear Regime
This paper considers a noisy data structure recovery problem. The goal i...
