Provable Adversarial Robustness for Fractional Lp Threat Models

03/16/2022
by Alexander Levine, et al.

In recent years, researchers have extensively studied adversarial robustness in a variety of threat models, including L_0, L_1, L_2, and L_infinity-norm bounded adversarial attacks. However, attacks bounded by fractional L_p "norms" (quasi-norms defined by the L_p distance with 0 < p < 1) have yet to be thoroughly considered. We proactively propose a defense with several desirable properties: it provides provable (certified) robustness, scales to ImageNet, and yields deterministic (rather than high-probability) certified guarantees when applied to quantized data (e.g., images). Our technique for fractional L_p robustness constructs expressive, deep classifiers that are globally Lipschitz with respect to the L_p^p metric d(x, y) = sum_i |x_i - y_i|^p, for any 0 < p < 1. Our method is in fact more general: we can construct classifiers that are globally Lipschitz with respect to any metric defined as a sum of concave functions of the componentwise absolute differences. Our approach builds on recent work by Levine and Feizi (2021), which provides a provable defense against L_1 attacks, and we demonstrate that our proposed guarantees are highly non-vacuous compared to the trivial baseline of applying Levine and Feizi (2021) directly and invoking norm inequalities. Code is available at https://github.com/alevine0/fractionalLpRobustness.
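
To make the Lipschitz condition concrete, below is a minimal numerical sketch, not the authors' implementation (see the linked repository for the actual method). It rests on a standard fact: for 0 < p < 1, the map t -> t^p on nonnegative inputs is concave with value 0 at 0, hence subadditive, so |a^p - b^p| <= |a - b|^p. Composing any function that is 1-Lipschitz with respect to L_1 with this elementwise transform therefore yields a function that is 1-Lipschitz with respect to the L_p^p metric. The names (lp_p_metric, concave_transform) and the toy linear "classifier" are hypothetical, chosen only for illustration.

```python
import numpy as np

def lp_p_metric(x, y, p):
    """Fractional L_p^p distance: sum_i |x_i - y_i|^p, for 0 < p < 1.
    Unlike the L_p quasi-norm itself, this sum satisfies the triangle
    inequality, so it is a true metric."""
    return np.sum(np.abs(x - y) ** p)

def concave_transform(x, p):
    """Elementwise t -> t^p on nonnegative inputs (e.g., pixel values).
    For 0 < p < 1 this map is concave with value 0 at 0, hence
    subadditive, giving |a**p - b**p| <= |a - b|**p componentwise."""
    return x ** p

rng = np.random.default_rng(0)
p = 0.5

# Toy stand-in for a classifier logit that is 1-Lipschitz w.r.t. L_1:
# a linear functional whose weights all lie in [-1, 1].
w = rng.uniform(-1.0, 1.0, size=64)
f = lambda z: w @ z

# Empirical check of the composed Lipschitz bound:
# |f(phi(x)) - f(phi(y))| <= sum_i |x_i - y_i|^p on sampled pairs.
for _ in range(1000):
    x = rng.uniform(0.0, 1.0, size=64)
    y = rng.uniform(0.0, 1.0, size=64)
    lhs = abs(f(concave_transform(x, p)) - f(concave_transform(y, p)))
    assert lhs <= lp_p_metric(x, y, p) + 1e-9
print("1-Lipschitz bound w.r.t. the L_p^p metric held on all sampled pairs.")
```

In a certification setting, a global Lipschitz bound of this kind converts a logit gap at a clean input into a guaranteed robust radius measured in the L_p^p metric, which is what enables the certified guarantees described above.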

Related research

02/08/2019 - Certified Adversarial Robustness via Randomized Smoothing
Recent work has shown that any classifier which classifies well under Ga...

10/23/2019 - Wasserstein Smoothing: Certified Robustness against Wasserstein Adversarial Attacks
In the last couple of years, several adversarial attack methods based on...

09/17/2020 - Large Norms of CNN Layers Do Not Hurt Adversarial Robustness
Since the Lipschitz properties of convolutional neural network (CNN) are...

04/20/2022 - GUARD: Graph Universal Adversarial Defense
Recently, graph convolutional networks (GCNs) have shown to be vulnerabl...

05/09/2023 - Investigating the Corruption Robustness of Image Classifiers with Random Lp-norm Corruptions
Robustness is a fundamental property of machine learning classifiers to ...

01/22/2021 - On the robustness of certain norms
We study a family of norms defined for functions on an interval. These n...

12/17/2021 - Provable Adversarial Robustness in the Quantum Model
Modern machine learning systems have been applied successfully to a vari...
