Limitations of the Lipschitz constant as a defense against adversarial examples

07/25/2018
by Todd Huster, et al.

Several recent papers have discussed utilizing Lipschitz constants to limit the susceptibility of neural networks to adversarial examples. We analyze recently proposed methods for computing the Lipschitz constant. We show that the Lipschitz constant may indeed enable adversarially robust neural networks. However, the methods currently employed for computing it suffer from theoretical and practical limitations. We argue that addressing this shortcoming is a promising direction for future research into certified adversarial defenses.
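One widely used way to bound a network's Lipschitz constant, and the style of method the abstract refers to, is the layer-wise product of spectral norms, which in turn yields a simple robustness certificate. The sketch below is illustrative only: it is not code from the paper, and `layerwise_lipschitz_bound` and `certified_radius` are hypothetical helper names. It computes the product bound for a small ReLU multilayer perceptron with NumPy and uses it to certify an l2 radius around an input.

```python
# A minimal, illustrative sketch (not code from the paper): the layer-wise bound
# on the l2 Lipschitz constant of a ReLU network, i.e. the product of the
# spectral norms of its weight matrices, and the robustness certificate it yields.
import numpy as np

def layerwise_lipschitz_bound(weights):
    """Product of the largest singular values of the weight matrices; this
    upper-bounds the global l2 Lipschitz constant of a ReLU MLP, since the
    ReLU nonlinearity is itself 1-Lipschitz."""
    return float(np.prod([np.linalg.norm(W, ord=2) for W in weights]))

def certified_radius(weights, x, label):
    """l2 radius around x within which the prediction provably cannot change:
    margin / (sqrt(2) * L), because the difference of any two logits is at
    most sqrt(2) * L Lipschitz. A non-positive value certifies nothing."""
    a = x
    for W in weights[:-1]:          # forward pass through the hidden ReLU layers
        a = np.maximum(W @ a, 0.0)
    logits = weights[-1] @ a        # final linear layer
    margin = logits[label] - np.max(np.delete(logits, label))
    return margin / (np.sqrt(2.0) * layerwise_lipschitz_bound(weights))

# Toy usage: a random 2-layer ReLU network on a 4-dimensional input.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((8, 4)), rng.standard_normal((3, 8))]
x = rng.standard_normal(4)
print("layer-wise Lipschitz upper bound:", layerwise_lipschitz_bound(weights))
print("certified l2 radius for class 0: ", certified_radius(weights, x, label=0))
```

A negative radius simply means the input is misclassified or the margin is too small to certify anything. The key issue the abstract points to is that the product bound can be far larger than the network's true Lipschitz constant, so the certified radii it produces can be needlessly small.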
