Certified Training: Small Boxes are All You Need

10/10/2022
by Mark Niklas Müller, et al.

We propose SABR, a novel certified training method that outperforms existing methods across perturbation magnitudes on MNIST, CIFAR-10, and TinyImageNet, in terms of both standard and certifiable accuracy. The key insight behind SABR is that propagating interval bounds for a small but carefully selected subset of the adversarial input region is sufficient to approximate the worst-case loss over the whole region while significantly reducing approximation errors. SABR not only establishes a new state of the art on all commonly used benchmarks but, more importantly, points to a new class of certified training methods promising to overcome the robustness-accuracy trade-off.
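To make the key insight concrete, the following is a minimal PyTorch sketch of the "small box" idea, not the authors' implementation: an adversarial search picks a centre inside a shrunken eps-ball, and interval bound propagation (IBP) is then run over a much smaller box of radius tau * eps around that centre to form a worst-case training loss. All names here (ibp_bounds, select_center, sabr_style_loss, tau) are illustrative assumptions.

```python
# Sketch of small-box certified training: propagate interval bounds over a
# small, adversarially selected box instead of the full eps-ball.
import torch
import torch.nn as nn
import torch.nn.functional as F

def ibp_bounds(model, lb, ub):
    """Propagate interval (box) bounds through a sequential Linear/ReLU network."""
    for layer in model:
        if isinstance(layer, nn.Linear):
            mid, rad = (lb + ub) / 2, (ub - lb) / 2
            mid = mid @ layer.weight.t() + layer.bias
            rad = rad @ layer.weight.abs().t()
            lb, ub = mid - rad, mid + rad
        elif isinstance(layer, nn.ReLU):
            lb, ub = lb.clamp(min=0), ub.clamp(min=0)
        else:
            raise NotImplementedError(type(layer))
    return lb, ub

def select_center(model, x, y, eps, tau, steps=8, lr=0.5):
    """PGD-style search for the small-box centre inside the shrunken ball of
    radius (1 - tau) * eps, so the tau*eps box stays inside the eps-ball."""
    delta = torch.zeros_like(x, requires_grad=True)
    shrink = (1 - tau) * eps
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + lr * eps * grad.sign()).clamp(-shrink, shrink)
        delta = delta.detach().requires_grad_(True)
    return (x + delta).clamp(0, 1).detach()

def sabr_style_loss(model, x, y, eps, tau=0.1):
    """Worst-case cross-entropy over the small box via IBP: lower bound for
    the true class logit, upper bound for all other logits."""
    c = select_center(model, x, y, eps, tau)
    lb, ub = ibp_bounds(model, (c - tau * eps).clamp(0, 1), (c + tau * eps).clamp(0, 1))
    one_hot = F.one_hot(y, lb.shape[-1]).bool()
    worst_logits = torch.where(one_hot, lb, ub)
    return F.cross_entropy(worst_logits, y)

# Usage on a toy fully connected network with flattened inputs.
model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
x, y = torch.rand(32, 784), torch.randint(0, 10, (32,))
loss = sabr_style_loss(model, x, y, eps=0.1, tau=0.1)
loss.backward()
```

Because the propagated box is much smaller than the full eps-ball, the interval bounds are far tighter, which is the mechanism the abstract credits for reducing approximation errors during training.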


Related research

- 05/08/2023 - TAPS: Connecting Certified and Adversarial Training
  Training certifiably robust neural networks remains a notoriously hard p...
- 12/18/2018 - PROVEN: Certifying Robustness of Neural Networks with a Probabilistic Approach
  With deep neural networks providing state-of-the-art machine learning mo...
- 10/06/2022 - Towards Out-of-Distribution Adversarial Robustness
  Adversarial robustness continues to be a major challenge for deep learni...
- 07/24/2023 - Adaptive Certified Training: Towards Better Accuracy-Robustness Tradeoffs
  As deep learning models continue to advance and are increasingly utilize...
- 06/10/2021 - An Ensemble Approach Towards Adversarial Robustness
  It is a known phenomenon that adversarial robustness comes at a cost to ...
- 05/23/2023 - Expressive Losses for Verified Robustness via Convex Combinations
  In order to train networks for verified adversarial robustness, previous...
- 06/17/2023 - Understanding Certified Training with Interval Bound Propagation
  As robustness verification methods are becoming more precise, training c...
