Training Certifiably Robust Neural Networks with Efficient Local Lipschitz Bounds

11/02/2021
by   Yujia Huang, et al.

Certified robustness is a desirable property for deep neural networks in safety-critical applications, and popular training algorithms can certify robustness of a neural network by computing a global bound on its Lipschitz constant. However, such a bound is often loose: it tends to over-regularize the neural network and degrade its natural accuracy. A tighter Lipschitz bound may provide a better tradeoff between natural and certified accuracy, but is generally hard to compute exactly due to non-convexity of the network. In this work, we propose an efficient and trainable local Lipschitz upper bound by considering the interactions between activation functions (e.g. ReLU) and weight matrices. Specifically, when computing the induced norm of a weight matrix, we eliminate the corresponding rows and columns where the activation function is guaranteed to be a constant in the neighborhood of each given data point, which provides a provably tighter bound than the global Lipschitz constant of the neural network. Our method can be used as a plug-in module to tighten the Lipschitz bound in many certifiable training algorithms. Furthermore, we propose to clip activation functions (e.g., ReLU and MaxMin) with a learnable upper threshold and a sparsity loss to assist the network to achieve an even tighter local Lipschitz bound. Experimentally, we show that our method consistently outperforms state-of-the-art methods in both clean and certified accuracy on MNIST, CIFAR-10 and TinyImageNet datasets with various network architectures.
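The row/column-elimination idea described above can be illustrated with a short sketch. The following is a minimal illustration, not the authors' implementation: it assumes that pre-activation upper bounds for each hidden layer (e.g., from an interval bound propagation pass over a small neighborhood of the data point) are already available, removes the rows and columns of each weight matrix whose adjacent ReLU is provably constant in that neighborhood, and multiplies the spectral norms of the reduced matrices. The `ClippedReLU` module and all names (`local_lipschitz_bound`, `uppers`, `theta`) are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the authors' code) of a local Lipschitz
# upper bound for a feedforward ReLU network: eliminate rows/columns of each
# weight matrix where the adjacent activation is provably constant in a
# neighborhood of the input, then multiply spectral norms of the reduced blocks.
import torch
import torch.nn as nn


class ClippedReLU(nn.Module):
    """ReLU clipped at a learnable upper threshold theta (assumed form).

    Clipping adds a second constant region (above theta), so more units can
    become eligible for elimination from the local bound during training.
    """

    def __init__(self, init_theta: float = 1.0):
        super().__init__()
        self.theta = nn.Parameter(torch.tensor(init_theta))

    def forward(self, x):
        # Equivalent to min(max(x, 0), theta) for theta >= 0.
        return torch.clamp(x, min=0.0) - torch.clamp(x - self.theta, min=0.0)


def local_lipschitz_bound(weights, uppers):
    """Product of spectral norms of reduced weight matrices.

    weights: list of weight matrices [W_1, ..., W_L], each of shape (out, in).
    uppers:  list of pre-activation upper bounds; uppers[i] corresponds to the
             ReLU following weights[i] (length L - 1), assumed to come from an
             interval bound propagation pass over the input neighborhood.
    """
    bound = torch.tensor(1.0)
    for i, W in enumerate(weights):
        W_red = W
        if i > 0:
            # Columns multiply the output of the previous ReLU; drop those
            # whose ReLU is provably stuck at 0 (upper bound <= 0).
            W_red = W_red[:, uppers[i - 1] > 0]
        if i < len(weights) - 1:
            # Rows feed the next ReLU; drop those that are provably inactive
            # for every input in the neighborhood.
            W_red = W_red[uppers[i] > 0, :]
        if W_red.numel() == 0:
            return torch.tensor(0.0)  # the whole block is locally constant
        # Spectral norm (largest singular value) of the reduced block.
        bound = bound * torch.linalg.matrix_norm(W_red, ord=2)
    return bound
```

In practice the spectral norms would typically be estimated by power iteration rather than an exact decomposition so the bound can be backpropagated through efficiently, and, as the abstract notes, a sparsity loss encourages more units to fall into the constant regions; the exact losses and bound computation in the paper differ from this sketch.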


Related research

05/12/2021 · LipBaB: Computing exact Lipschitz constant of ReLU networks
The Lipschitz constant of neural networks plays an important role in sev...

09/30/2020 · A law of robustness for two-layers neural networks
We initiate the study of the inherent tradeoffs between the size of a ne...

07/02/2020 · Efficient Proximal Mapping of the 1-path-norm of Shallow Networks
We demonstrate two new important properties of the 1-path-norm of shallo...

05/06/2020 · Training robust neural networks using Lipschitz bounds
Due to their susceptibility to adversarial perturbations, neural network...

04/13/2018 · Representing smooth functions as compositions of near-identity functions with implications for deep network optimization
We show that any smooth bi-Lipschitz h can be represented exactly as a c...

04/29/2021 · Analytical bounds on the local Lipschitz constants of ReLU networks
In this paper, we determine analytical upper bounds on the local Lipschi...

12/27/2021 · Sparsest Univariate Learning Models Under Lipschitz Constraint
Beside the minimization of the prediction error, two of the most desirab...
