Globally-Robust Neural Networks

by Klas Leino, et al.

The threat of adversarial examples has motivated work on training certifiably robust neural networks to facilitate efficient verification of local robustness at inference time. We formalize a notion of global robustness, which captures the operational properties of on-line local robustness certification while yielding a natural learning objective for robust training. We show that widely-used architectures can be easily adapted to this objective by incorporating efficient global Lipschitz bounds into the network, yielding certifiably-robust models by construction that achieve state-of-the-art verifiable and clean accuracy. Notably, this approach requires significantly less time and memory than recent certifiable training methods, and leads to negligible costs when certifying points on-line; for example, our evaluation shows that it is possible to train a large Tiny-ImageNet model in a matter of hours. We posit that this is possible using inexpensive global bounds — despite prior suggestions that tighter local bounds are needed for good performance — because these models are trained to achieve tighter global bounds. Namely, we prove that the maximum achievable verifiable accuracy for a given dataset is not improved by using a local bound.
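To make the certification idea concrete, here is a minimal numpy sketch — not the paper's implementation — of how a global Lipschitz bound yields on-line robustness certificates. It assumes a plain feedforward ReLU network, bounds the network's Lipschitz constant by the product of the layers' spectral norms, and certifies a point when its logit margin exceeds the worst-case logit change over an ε-ball (using the simple but loose `2·K·ε` gap bound; the function names are illustrative):

```python
import numpy as np

def spectral_norm(w):
    # Largest singular value of a weight matrix.
    return np.linalg.norm(w, 2)

def global_lipschitz(weights):
    # Product of layer spectral norms; ReLU is 1-Lipschitz,
    # so this upper-bounds the network's global Lipschitz constant.
    k = 1.0
    for w in weights:
        k *= spectral_norm(w)
    return k

def forward(x, weights):
    for w in weights[:-1]:
        x = np.maximum(0.0, x @ w)  # ReLU hidden layers
    return x @ weights[-1]          # linear output layer

def certify(x, weights, eps):
    """Return (is_certified, predicted_class) for an L2 ball of radius eps."""
    logits = forward(x, weights)
    k = global_lipschitz(weights)
    order = np.argsort(logits)
    top, runner_up = order[-1], order[-2]
    # Each logit can move by at most k*eps inside the ball, so the
    # gap between two logits changes by at most 2*k*eps. (Tighter
    # per-pair bounds are possible by bounding f_top - f_runner_up directly.)
    margin = logits[top] - logits[runner_up]
    return margin > 2.0 * k * eps, int(top)
```

For example, a two-layer network with identity first layer and output weights `diag(2, 1)` has global bound `K = 2`; the input `[1, 0]` yields logits `[2, 0]` (margin 2), which certifies radii up to `ε < 0.5` but not beyond.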








Code Repositories

Library for training globally-robust neural networks.