Certified Defenses: Why Tighter Relaxations May Hurt Training?

02/12/2021
by Nikola Jovanović et al.

Certified defenses based on convex relaxations are an established technique for training provably robust models. The key component is the choice of relaxation, varying from simple intervals to tight polyhedra. Paradoxically, however, it was empirically observed that training with tighter relaxations can worsen certified robustness. While several methods were designed to partially mitigate this issue, the underlying causes are poorly understood. In this work we investigate the above phenomenon and show that tightness may not be the determining factor for reduced certified robustness. Concretely, we identify two key features of relaxations that impact training dynamics: continuity and sensitivity. We then experimentally demonstrate that these two factors explain the drop in certified robustness when using popular relaxations. Further, we show, for the first time, that it is possible to successfully train with tighter relaxations (i.e., triangle), a result supported by our two properties. Overall, we believe the insights of this work can help drive the systematic discovery of new effective certified defenses.
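To make the spectrum of relaxations concrete, the sketch below shows the loosest end of it: interval bound propagation, which pushes an axis-aligned box through each layer. This is a minimal illustration of the general idea, not the paper's training procedure; the network weights and the helper names (`interval_linear`, `interval_relu`) are hypothetical.

```python
import numpy as np

def interval_linear(lo, hi, W, b):
    """Propagate an input box [lo, hi] through an affine layer Wx + b.

    Splitting W into positive and negative parts picks the correct
    endpoint of the input interval for each output bound.
    """
    W_pos = np.maximum(W, 0.0)
    W_neg = np.minimum(W, 0.0)
    new_lo = W_pos @ lo + W_neg @ hi + b
    new_hi = W_pos @ hi + W_neg @ lo + b
    return new_lo, new_hi

def interval_relu(lo, hi):
    """ReLU is monotone, so it maps interval endpoints directly."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Toy 2-layer network and an L-infinity ball of radius 0.1 around x
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 2)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)

x = np.array([0.5, -0.2])
lo, hi = x - 0.1, x + 0.1
lo, hi = interval_relu(*interval_linear(lo, hi, W1, b1))
lo, hi = interval_linear(lo, hi, W2, b2)
# Soundness: lo <= f(x') <= hi for every x' in the input box
```

Tighter relaxations such as the triangle replace the box at each ReLU with a convex region that hugs the function more closely, which is exactly the tightness axis the paper studies.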


