Raising the Bar for Certified Adversarial Robustness with Diffusion Models

05/17/2023
by Thomas Altstidl, et al.

Certified defenses against adversarial attacks offer formal guarantees on the robustness of a model, making them more reliable than empirical methods such as adversarial training, whose effectiveness is often later reduced by unseen attacks. Still, the limited certified robustness that is currently achievable has been a bottleneck for their practical adoption. Gowal et al. and Wang et al. have shown that generating additional training data using state-of-the-art diffusion models can considerably improve the robustness of adversarial training. In this work, we demonstrate that a similar approach can substantially improve deterministic certified defenses. In addition, we provide a list of recommendations to scale the robustness of certified training approaches. One of our main insights is that the generalization gap, i.e., the difference between the training and test accuracy of the original model, is a good predictor of the magnitude of the robustness improvement when using additional generated data. Our approach achieves state-of-the-art deterministic robustness certificates on CIFAR-10 for the ℓ_2 (ϵ = 36/255) and ℓ_∞ (ϵ = 8/255) threat models, outperforming the previous best results by +3.95% and +1.39%, respectively. Furthermore, we report similar improvements for CIFAR-100.
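To make the two ideas in the abstract concrete, the sketch below (not the authors' code) shows how one could estimate the generalization gap of a baseline model and blend diffusion-generated, pseudo-labeled images into the CIFAR-10 training set. The random placeholder tensors, the 1,000-sample count, the batch size, and the uniform concatenation are illustrative assumptions; in the paper's setting the extra images would come from a state-of-the-art diffusion model and a classifier that assigns pseudo-labels.

```python
# Minimal sketch, assuming PyTorch/torchvision; not the paper's pipeline.
import torch
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import datasets, transforms

def generalization_gap(model, train_loader, test_loader, device="cpu"):
    """Clean training accuracy minus clean test accuracy, in percentage points."""
    def accuracy(loader):
        correct, total = 0, 0
        model.eval()
        with torch.no_grad():
            for x, y in loader:
                pred = model(x.to(device)).argmax(dim=1)
                correct += (pred == y.to(device)).sum().item()
                total += y.numel()
        return 100.0 * correct / total
    return accuracy(train_loader) - accuracy(test_loader)

# Original CIFAR-10 training set.
cifar_train = datasets.CIFAR10("data", train=True, download=True,
                               transform=transforms.ToTensor())

# Placeholder for diffusion-generated, pseudo-labeled images; in practice these
# would be sampled from a pretrained diffusion model, not torch.rand.
gen_images = torch.rand(1000, 3, 32, 32)
gen_labels = torch.randint(0, 10, (1000,))
gen_set = [(img, int(lbl)) for img, lbl in zip(gen_images, gen_labels)]

# Simplest possible blend: concatenate real and generated data and sample
# uniformly; a certified training loop would then consume mixed_loader.
mixed_loader = DataLoader(ConcatDataset([cifar_train, gen_set]),
                          batch_size=256, shuffle=True)
```

A model with a large generalization gap on the original data would, per the abstract's insight, be expected to benefit most from enlarging the training set this way.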

