Globally-Robust Neural Networks

02/16/2021
by Klas Leino, et al.

The threat of adversarial examples has motivated work on training certifiably robust neural networks, to facilitate efficient verification of local robustness at inference time. We formalize a notion of global robustness, which captures the operational properties of on-line local robustness certification while yielding a natural learning objective for robust training. We show that widely-used architectures can be easily adapted to this objective by incorporating efficient global Lipschitz bounds into the network, yielding certifiably-robust models by construction that achieve state-of-the-art verifiable and clean accuracy. Notably, this approach requires significantly less time and memory than recent certifiable training methods, and leads to negligible costs when certifying points on-line; for example, our evaluation shows that it is possible to train a large Tiny-ImageNet model in a matter of hours. We posit that this is possible using inexpensive global bounds – despite prior suggestions that tighter local bounds are needed for good performance – because these models are trained to achieve tighter global bounds. Namely, we prove that the maximum achievable verifiable accuracy for a given dataset is not improved by using a local bound.
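To make the construction concrete, here is a minimal PyTorch sketch of one way to build a certifiably-robust model from a global Lipschitz bound: estimate each layer's spectral norm by power iteration, take the product as the feature extractor's global L2 Lipschitz constant K, and append a "bottom" logit that upper-bounds how well any competing class could do within an epsilon-ball. This is not the authors' gloro implementation; the names (spectral_norm_bound, GloroSketch) and the wiring are illustrative assumptions, and only the general recipe comes from the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def spectral_norm_bound(weight, n_iter=100):
    # Power iteration: estimates the largest singular value of `weight`,
    # i.e. the layer's L2 Lipschitz constant (the estimate converges from
    # below; more iterations tighten it).
    with torch.no_grad():
        u = torch.randn(weight.shape[0], device=weight.device)
        for _ in range(n_iter):
            v = F.normalize(weight.t() @ u, dim=0)
            u = F.normalize(weight @ v, dim=0)
        return torch.dot(u, weight @ v).item()

class GloroSketch(nn.Module):
    # Wraps a feature extractor assumed K-Lipschitz in L2 (e.g. linear
    # layers with 1-Lipschitz activations, K = product of the layers'
    # spectral norms) and a linear head. Appends a "bottom" logit so the
    # augmented prediction picks the original class only when that
    # prediction is certifiably eps-robust.
    def __init__(self, features, K, head, eps):
        super().__init__()
        self.features, self.head = features, head
        self.K, self.eps = K, eps

    def forward(self, x):
        y = self.head(self.features(x))      # logits, shape (B, m)
        W = self.head.weight                 # (m, d)
        j = y.argmax(dim=1)                  # predicted class per example
        # The margin y_j - y_i is (K * ||w_j - w_i||_2)-Lipschitz, so within
        # an eps-ball, logit i can gain at most eps * K * ||w_j - w_i|| on j.
        lip = self.K * (W[j].unsqueeze(1) - W.unsqueeze(0)).norm(dim=-1)  # (B, m)
        worst = y + self.eps * lip
        worst = worst.masked_fill(F.one_hot(j, y.shape[1]).bool(), float('-inf'))
        y_bot = worst.max(dim=1, keepdim=True).values
        # Robust iff the original argmax still beats the bottom logit.
        return torch.cat([y, y_bot], dim=1)

# Illustrative wiring: a small MLP on flattened 28x28 inputs.
feats = nn.Sequential(nn.Linear(784, 256), nn.ReLU(),
                      nn.Linear(256, 128), nn.ReLU())
K = spectral_norm_bound(feats[0].weight) * spectral_norm_bound(feats[2].weight)
model = GloroSketch(feats, K, nn.Linear(128, 10), eps=0.3)
logits = model(torch.randn(8, 784))        # (8, 11); column 10 is the bottom logit
certified = logits.argmax(dim=1) < 10      # True where the prediction is certified
```

Training with an ordinary cross-entropy loss on these augmented logits gives one natural learning objective in the spirit of the abstract: the model is penalized whenever the bottom logit wins, which pushes it toward larger margins relative to its own global bound, and on-line certification then costs nothing beyond the forward pass above.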


Related Research

11/02/2021
Training Certifiably Robust Neural Networks with Efficient Local Lipschitz Bounds
Certified robustness is a desirable property for deep neural networks in...

05/23/2018
Towards Robust Training of Neural Networks by Regularizing Adversarial Gradients
In recent years, neural networks have demonstrated outstanding effective...

10/11/2019
Verification of Neural Networks: Specifying Global Robustness using Generative Models
The success of neural networks across most machine learning tasks and th...

03/14/2020
VarMixup: Exploiting the Latent Space for Robust Training and Inference
The vulnerability of Deep Neural Networks (DNNs) to adversarial attacks ...

11/06/2018
MixTrain: Scalable Training of Formally Robust Neural Networks
There is an arms race to defend neural networks against adversarial exam...

10/26/2021
Improving Local Effectiveness for Global Robust Training
Despite their popularity, deep neural networks are easily fooled. To allev...

Code Repositories

gloro

Library for training globally-robust neural networks.

