Fast Training of Provably Robust Neural Networks by SingleProp

02/01/2021
by Akhilan Boopathy, et al.

Recent works have developed several methods of defending neural networks against adversarial attacks with certified guarantees. However, these techniques can be computationally costly because certification is performed during training. We develop a new regularizer that is more efficient than existing certified defenses, requiring only a single additional forward propagation through the network, while training networks to similar certified accuracy. Through experiments on MNIST and CIFAR-10, we demonstrate improvements in training speed and comparable certified accuracy relative to state-of-the-art certified defenses.
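To make the "one additional forward propagation" idea concrete, here is a minimal, heavily simplified sketch of a robustness regularizer that costs exactly one extra forward pass per batch: it penalizes how much the logits move under a single perturbed forward pass. The network, weights, perturbation scheme, and the name `robustness_regularizer` are all illustrative assumptions for exposition, not the paper's actual SingleProp formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer ReLU network with random (untrained) weights, for illustration only.
W1 = rng.standard_normal((4, 8))
W2 = rng.standard_normal((8, 3))

def forward(x):
    """One forward pass: ReLU MLP producing logits."""
    return np.maximum(x @ W1, 0.0) @ W2

def robustness_regularizer(x, eps=0.1):
    """Hypothetical regularizer costing one extra forward pass:
    penalize the change in logits under a single signed perturbation
    of magnitude eps (an assumed stand-in for a certified-style bound)."""
    delta = eps * np.sign(rng.standard_normal(x.shape))
    return np.abs(forward(x + delta) - forward(x)).sum()

x = rng.standard_normal((2, 4))
reg = robustness_regularizer(x)
# Training would then minimize: total_loss = task_loss + lam * reg,
# where lam trades off clean accuracy against robustness.
```

The point of the sketch is the cost profile: the regularizer adds one `forward` call per batch, in contrast to certification-in-the-loop defenses that repeatedly bound the network during training.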


Related research

05/18/2021 · Fighting Gradients with Gradients: Dynamic Defenses against Adversarial Attacks
Adversarial attacks optimize against models to defeat defenses. Existing...

11/06/2018 · MixTrain: Scalable Training of Formally Robust Neural Networks
There is an arms race to defend neural networks against adversarial exam...

12/12/2021 · Interpolated Joint Space Adversarial Training for Robust and Generalizable Defenses
Adversarial training (AT) is considered to be one of the most reliable d...

01/29/2018 · Certified Defenses against Adversarial Examples
While neural networks have achieved high accuracy on standard image clas...

05/17/2023 · Raising the Bar for Certified Adversarial Robustness with Diffusion Models
Certified defenses against adversarial attacks offer formal guarantees o...

10/24/2018 · Toward Robust Neural Networks via Sparsification
It is by now well-known that small adversarial perturbations can induce ...

11/27/2019 · Survey of Attacks and Defenses on Edge-Deployed Neural Networks
Deep Neural Network (DNN) workloads are quickly moving from datacenters ...
