
Globally-Robust Neural Networks

02/16/2021
by Klas Leino, et al.

The threat of adversarial examples has motivated work on training certifiably robust neural networks to facilitate efficient verification of local robustness at inference time. We formalize a notion of global robustness, which captures the operational properties of on-line local robustness certification while yielding a natural learning objective for robust training. We show that widely-used architectures can be easily adapted to this objective by incorporating efficient global Lipschitz bounds into the network, yielding certifiably-robust models by construction that achieve state-of-the-art verifiable and clean accuracy. Notably, this approach requires significantly less time and memory than recent certifiable training methods, and leads to negligible costs when certifying points on-line; for example, our evaluation shows that it is possible to train a large Tiny-ImageNet model in a matter of hours. We posit that this is possible using inexpensive global bounds, despite prior suggestions that tighter local bounds are needed for good performance, because these models are trained to achieve tighter global bounds. Namely, we prove that the maximum achievable verifiable accuracy for a given dataset is not improved by using a local bound.
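To make the construction concrete: in the paper, a standard network becomes a "GloRo Net" by appending an extra bottom logit that wins the arg max whenever the epsilon-ball certificate fails. Below is a minimal NumPy sketch of that idea, not code from the paper or the gloro library; it assumes a plain dense/ReLU network, where the product of per-layer spectral norms upper-bounds the global Lipschitz constant of the penultimate representation.

    import numpy as np

    def spectral_norm(W, iters=50):
        # Power iteration: estimate the largest singular value of W.
        u = np.random.randn(W.shape[1])
        for _ in range(iters):
            v = W @ u
            v /= np.linalg.norm(v)
            u = W.T @ v
            u /= np.linalg.norm(u)
        return float(v @ (W @ u))

    def gloro_logits(x, weights, eps):
        # Forward pass through the hidden layers (ReLU is 1-Lipschitz,
        # so it does not increase the Lipschitz bound).
        z = x
        for W in weights[:-1]:
            z = np.maximum(W @ z, 0.0)
        V = weights[-1]              # final layer: one row per class
        logits = V @ z

        # Global Lipschitz bound on the penultimate representation.
        K = np.prod([spectral_norm(W) for W in weights[:-1]])

        # Lip(f_i - f_j) <= K * ||V_i - V_j||_2, so predicted class j is
        # certified at radius eps iff no runner-up logit can catch up.
        j = int(np.argmax(logits))
        bottom = max(
            logits[i] + eps * K * np.linalg.norm(V[i] - V[j])
            for i in range(len(logits)) if i != j)
        return np.append(logits, bottom)  # extended logits with bottom class

A point is certified epsilon-locally robust exactly when the appended bottom logit does not win the arg max of the extended logits, so on-line certification costs a single comparison; training minimizes an ordinary cross-entropy-style loss over the extended logits, which is what pushes the model toward tighter global bounds.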


Related Research

08/15/2022

A Tool for Neural Network Global Robustness Certification and Training

With the increase of interest in leveraging machine learning technology...
05/23/2018

Towards Robust Training of Neural Networks by Regularizing Adversarial Gradients

In recent years, neural networks have demonstrated outstanding effective...
10/11/2019

Verification of Neural Networks: Specifying Global Robustness using Generative Models

The success of neural networks across most machine learning tasks and th...
03/14/2020

VarMixup: Exploiting the Latent Space for Robust Training and Inference

The vulnerability of Deep Neural Networks (DNNs) to adversarial attacks ...
11/06/2018

MixTrain: Scalable Training of Formally Robust Neural Networks

There is an arms race to defend neural networks against adversarial exam...
10/12/2022

Double Bubble, Toil and Trouble: Enhancing Certified Robustness through Transitivity

In response to subtle adversarial examples flipping classifications of n...
01/29/2023

Scaling in Depth: Unlocking Robustness Certification on ImageNet

Notwithstanding the promise of Lipschitz-based approaches to determinist...

Code Repositories

gloro

Library for training globally-robust neural networks.


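As a usage illustration, a minimal Keras training sketch against this library follows. Treat the specifics as assumptions recalled from the project README rather than a guaranteed current API: the GloroNet constructor (functional-API inputs/outputs plus an epsilon radius), the 'crossentropy' loss alias, and the 'clean_acc'/'vra' metric names may differ across versions, and x_train/y_train are placeholder arrays.

    # Sketch only: the GloroNet signature and the loss/metric aliases below
    # are assumptions from the project README and may not match the release.
    from tensorflow.keras.layers import Dense, Flatten, Input
    from gloro import GloroNet

    x = Input((28, 28, 1))               # MNIST-sized input
    z = Flatten()(x)
    z = Dense(256, activation='relu')(z)
    y = Dense(10)(z)                     # one logit per class

    # Wrap the graph as a GloRo Net; it gains an extra "bottom" logit that
    # wins whenever the prediction cannot be certified at radius epsilon.
    g = GloroNet(x, y, epsilon=0.3)

    g.compile(
        optimizer='adam',
        loss='crossentropy',             # loss over the extended logits
        metrics=['clean_acc', 'vra'])    # clean and verified-robust accuracy
    g.fit(x_train, y_train, epochs=10, batch_size=128)  # placeholder data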