Scalable Lipschitz Residual Networks with Convex Potential Flows

10/25/2021
by Laurent Meunier et al.

The Lipschitz constant of neural networks has been established as a key property for enforcing the robustness of neural networks to adversarial examples. However, recent attempts to build 1-Lipschitz neural networks have all shown limitations: robustness has to be traded for accuracy and scalability, or vice versa. In this work, we first show that using convex potentials in a residual network gradient flow provides a built-in 1-Lipschitz transformation. From this insight, we leverage the work on Input Convex Neural Networks to parametrize efficient layers with this property. A comprehensive set of experiments on CIFAR-10 demonstrates the scalability of our architecture and the benefit of our approach for ℓ_2 provable defenses. Indeed, we train very deep and wide neural networks (up to 1000 layers) and reach state-of-the-art results in terms of standard and certified accuracy, along with empirical robustness, in comparison with other 1-Lipschitz architectures.
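The core construction described in the abstract, a residual step that descends a convex potential, can be sketched as follows. This is a minimal numpy illustration, not the authors' implementation: the function name and the empirical check are ours, and we assume the layer form z = x - (2/||W||²₂) Wᵀ ReLU(Wx + b), which is a gradient step on the convex potential ψ(x) = ½||ReLU(Wx + b)||² and is 1-Lipschitz in x.

```python
import numpy as np

def convex_potential_layer(x, W, b):
    """One residual step z = x - (2 / ||W||_2^2) * W^T ReLU(W x + b).

    This is a gradient-descent step on the convex potential
    psi(x) = 0.5 * ||ReLU(W x + b)||^2; the step size 2/||W||_2^2
    makes the map 1-Lipschitz (hypothetical sketch, not the paper's code).
    """
    lip = np.linalg.norm(W, 2) ** 2  # squared spectral norm of W
    return x - (2.0 / lip) * W.T @ np.maximum(W @ x + b, 0.0)

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 32))
b = rng.normal(size=64)

# Empirical sanity check: the map should never expand distances.
ratios = []
for _ in range(100):
    x, y = rng.normal(size=32), rng.normal(size=32)
    fx = convex_potential_layer(x, W, b)
    fy = convex_potential_layer(y, W, b)
    ratios.append(np.linalg.norm(fx - fy) / np.linalg.norm(x - y))
print(max(ratios))  # stays <= 1 up to floating-point error
```

Stacking such layers keeps the whole network 1-Lipschitz by composition, which is what makes the architecture suitable for ℓ_2 certified robustness.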


Related research:

- 07/25/2018: Limitations of the Lipschitz constant as a defense against adversarial examples
  Several recent papers have discussed utilizing Lipschitz constants to li...

- 06/01/2022: The robust way to stack and bag: the local Lipschitz way
  Recent research has established that the local Lipschitz constant of a n...

- 04/28/2017: Parseval Networks: Improving Robustness to Adversarial Examples
  We introduce Parseval networks, a form of deep neural networks in which ...

- 01/29/2023: Scaling in Depth: Unlocking Robustness Certification on ImageNet
  Notwithstanding the promise of Lipschitz-based approaches to determinist...

- 03/06/2023: A Unified Algebraic Perspective on Lipschitz Neural Networks
  Important research efforts have focused on the design and training of ne...

- 07/06/2021: Provable Lipschitz Certification for Generative Models
  We present a scalable technique for upper bounding the Lipschitz constan...

- 02/16/2021: A Law of Robustness for Weight-bounded Neural Networks
  Robustness of deep neural networks against adversarial perturbations is ...
