It's Simplex! Disaggregating Measures to Improve Certified Robustness

09/20/2023
by Andrew C. Cullen, et al.

Certified robustness circumvents the fragility of defences against adversarial attacks by endowing model predictions with guarantees of class invariance under attacks up to a calculated size. While these certifications are valuable, the techniques used to assess their performance do not properly account for their strengths and weaknesses, as their analysis has eschewed consideration of performance over individual samples in favour of aggregated measures. By considering the potential output space of certified models, this work presents two distinct approaches for improving the analysis of certification mechanisms, yielding both dataset-independent and dataset-dependent measures of certification performance. Embracing such a perspective uncovers new certification approaches that can more than double the achievable certification radius relative to the current state of the art. Empirical evaluation confirms that our new approach certifies 9% more samples at noise scale σ = 1, with larger relative improvements observed as the difficulty of the predictive task increases.

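For context, the noise scale σ in the abstract refers to the Gaussian noise used in randomized smoothing, the family of certification mechanisms this line of work analyses. The sketch below shows the widely used baseline certificate of Cohen et al. (2019), not the disaggregated approach proposed in this paper; the function name and the example values are illustrative assumptions.

```python
# A minimal sketch of the standard randomized-smoothing certificate
# (Cohen et al., 2019) -- background only, NOT this paper's method.
from scipy.stats import norm

def certified_radius(p_a_lower: float, sigma: float) -> float:
    """Certified L2 radius r = sigma * Phi^{-1}(p_A) for a smoothed
    classifier, given a lower confidence bound p_A on the probability
    of the top class under additive N(0, sigma^2 I) noise."""
    if p_a_lower <= 0.5:
        return 0.0  # abstain: no certificate can be issued
    return sigma * norm.ppf(p_a_lower)

# Illustrative example: at noise scale sigma = 1.0 (as in the
# abstract's evaluation), a top-class probability bound of 0.9
# certifies a radius of roughly 1.28.
print(certified_radius(0.9, sigma=1.0))
```

Under this baseline the certificate abstains (radius 0) whenever the lower bound on the top-class probability does not exceed 1/2, the standard abstention rule for smoothed classifiers.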