
Certified Training: Small Boxes are All You Need

10/10/2022
by Mark Niklas Müller, et al.

We propose SABR, a novel certified training method that outperforms existing methods across perturbation magnitudes on MNIST, CIFAR-10, and TinyImageNet in terms of both standard and certifiable accuracy. The key insight behind SABR is that propagating interval bounds for a small but carefully selected subset of the adversarial input region is sufficient to approximate the worst-case loss over the whole region while significantly reducing approximation errors. SABR not only establishes a new state of the art on all commonly used benchmarks but, more importantly, points to a new class of certified training methods that promise to overcome the robustness-accuracy trade-off.
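The abstract only sketches the core mechanism: instead of propagating interval bounds through the network for the full L-infinity ball of radius eps, bounds are propagated for a much smaller box placed around a carefully selected point inside that ball. The Python sketch below illustrates this idea with plain interval bound propagation through a Linear/ReLU network; the function names, the shrinkage factor tau, the use of an externally selected box center x_sel (e.g. from a PGD attack), and the worst-case cross-entropy loss are illustrative assumptions, not the paper's exact formulation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def ibp(model, lo, hi):
        # Propagate an axis-aligned box [lo, hi] through Linear and ReLU
        # layers with standard interval arithmetic.
        for layer in model:
            if isinstance(layer, nn.Linear):
                center = (hi + lo) / 2
                radius = (hi - lo) / 2
                center = layer(center)                  # W c + b
                radius = radius @ layer.weight.abs().T  # |W| r
                lo, hi = center - radius, center + radius
            elif isinstance(layer, nn.ReLU):
                lo, hi = lo.clamp(min=0), hi.clamp(min=0)
        return lo, hi

    def small_box(x, x_sel, eps, tau):
        # Build a box of radius tau * eps around the selected point x_sel,
        # shifted so it stays inside the original eps-ball around x.
        r = tau * eps
        center = torch.min(torch.max(x_sel, x - eps + r), x + eps - r)
        return center - r, center + r

    def small_box_loss(model, x, x_sel, y, eps, tau):
        # Worst-case cross-entropy over the small propagated box: the lower
        # bound is used for the true class, upper bounds for all others.
        lo, hi = small_box(x, x_sel, eps, tau)
        lb, ub = ibp(model, lo, hi)
        mask = F.one_hot(y, ub.size(-1)).bool()
        worst_logits = torch.where(mask, lb, ub)
        return F.cross_entropy(worst_logits, y)

    # Usage sketch; in practice x_sel would come from an adversarial attack
    # restricted to the eps-ball, and inputs would also be clamped to [0, 1].
    model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
    x = torch.rand(8, 784)
    y = torch.randint(0, 10, (8,))
    loss = small_box_loss(model, x, x_sel=x.clone(), y=y, eps=0.1, tau=0.1)
    loss.backward()

Intuitively, shrinking the propagated box by the factor tau keeps the approximation errors of interval arithmetic small, while centering it on a hard point inside the perturbation region keeps the resulting loss a useful proxy for the worst case over the whole region.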


Related research:

12/18/2018 · PROVEN: Certifying Robustness of Neural Networks with a Probabilistic Approach
With deep neural networks providing state-of-the-art machine learning mo...

10/06/2022 · Towards Out-of-Distribution Adversarial Robustness
Adversarial robustness continues to be a major challenge for deep learni...

06/10/2021 · An Ensemble Approach Towards Adversarial Robustness
It is a known phenomenon that adversarial robustness comes at a cost to ...

03/24/2021 · Adversarial Feature Stacking for Accurate and Robust Predictions
Deep Neural Networks (DNNs) have achieved remarkable performance on a va...

01/24/2019 · Theoretically Principled Trade-off between Robustness and Accuracy
We identify a trade-off between robustness and accuracy that serves as a...

07/21/2022 · Switching One-Versus-the-Rest Loss to Increase the Margin of Logits for Adversarial Robustness
Defending deep neural networks against adversarial examples is a key cha...

04/24/2018 · SimpleQuestions Nearly Solved: A New Upperbound and Baseline Approach
The SimpleQuestions dataset is one of the most commonly used benchmarks ...