Improving the Tightness of Convex Relaxation Bounds for Training Certifiably Robust Classifiers

02/22/2020 · Chen Zhu, et al.

Convex relaxations are effective for training and certifying neural networks against norm-bounded adversarial attacks, but they leave a large gap between certifiable and empirical robustness. In principle, convex relaxation can provide tight bounds if the solution to the relaxed problem is feasible for the original non-convex problem. We propose two regularizers that can be used to train neural networks that yield tighter convex relaxation bounds for robustness. In all of our experiments, the proposed regularizers result in higher certified accuracy than non-regularized baselines.
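To make the setting concrete, below is a minimal sketch of one common convex relaxation, interval bound propagation, through a two-layer ReLU network, together with a hypothetical "tightness" regularizer that penalizes the width of the output interval. This is an illustration under our own assumptions (the function names and the simple width penalty are ours), not the paper's proposed regularizers.

```python
import numpy as np

def interval_bounds(W1, b1, W2, b2, x, eps):
    """Propagate the L-infinity ball [x - eps, x + eps] through a
    two-layer ReLU network using interval bound propagation, a simple
    convex relaxation. Returns elementwise lower/upper output bounds."""
    l, u = x - eps, x + eps
    # Affine layer: split weights by sign to get sound elementwise bounds.
    W1p, W1n = np.maximum(W1, 0.0), np.minimum(W1, 0.0)
    l1 = W1p @ l + W1n @ u + b1
    u1 = W1p @ u + W1n @ l + b1
    # ReLU is monotone, so bounds pass through it directly.
    l1, u1 = np.maximum(l1, 0.0), np.maximum(u1, 0.0)
    W2p, W2n = np.maximum(W2, 0.0), np.minimum(W2, 0.0)
    l2 = W2p @ l1 + W2n @ u1 + b2
    u2 = W2p @ u1 + W2n @ l1 + b2
    return l2, u2

def tightness_regularizer(l, u):
    """Hypothetical regularizer: the mean width of the output interval.
    Zero width means the relaxation is exact at this input."""
    return float(np.mean(u - l))
```

With `eps = 0` the interval collapses to a point and the bounds coincide with the exact forward pass, so the penalty is zero; as `eps` grows, the relaxation loosens and the penalty grows, which is the sense in which a regularizer of this kind pushes training toward tighter certifiable bounds.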


Related research

- 02/23/2019 · A Convex Relaxation Barrier to Tight Robustness Verification of Neural Networks: Verification of neural networks enables us to gauge their robustness aga...
- 06/14/2019 · Towards Stable and Efficient Training of Verifiably Robust Neural Networks: Training neural networks with verifiable robustness guarantees is challe...
- 09/30/2019 · Universal Approximation with Certified Networks: Training neural networks to be certifiably robust is a powerful defense ...
- 10/16/2020 · Strengthened SDP Verification of Neural Network Robustness via Non-Convex Cuts: There have been major advances on the design of neural networks, but sti...
- 04/01/2020 · Tightened Convex Relaxations for Neural Network Robustness Certification: In this paper, we consider the problem of certifying the robustness of n...
- 06/06/2021 · A Primer on Multi-Neuron Relaxation-based Adversarial Robustness Certification: The existence of adversarial examples poses a real danger when deep neur...
