Double Bubble, Toil and Trouble: Enhancing Certified Robustness through Transitivity

10/12/2022
by Andrew C. Cullen, et al.

In response to subtle adversarial examples flipping classifications of neural network models, recent research has promoted certified robustness as a solution. There, invariance of predictions to all norm-bounded attacks is achieved through randomised smoothing of network inputs. Today's state-of-the-art certifications make optimal use of the class output scores at the input instance under test: no better radius of certification (under the L_2 norm) is possible given only these scores. However, it is an open question whether such lower bounds can be improved using local information around the instance under test. In this work, we demonstrate how today's "optimal" certificates can be improved by exploiting both the transitivity of certifications and the geometry of the input space, giving rise to what we term Geometrically-Informed Certified Robustness. By considering the smallest distance to points on the boundary of a set of certifications, this approach improves certifications for more than 80% of Tiny-ImageNet instances, yielding a 5% average increase in the associated certified radius. When incorporating training-time processes that enhance the certified radius, our technique shows even more promising results, with a uniform 4 percentage point increase in the achieved certified radius.
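To make the two ingredients concrete, the sketch below (a minimal illustration, not the authors' implementation) first computes the standard Gaussian randomised-smoothing L_2 certificate of Cohen et al. (2019), radius = sigma * Phi^{-1}(p_A), and then applies the simplest form of the transitivity argument: if a neighbouring point x_i with the same predicted class certifies a radius r_i, then r_i - ||x0 - x_i|| is also a valid certified radius at x0, since the corresponding ball is contained in x_i's certified ball. All function and variable names here are illustrative assumptions, not the paper's API.

import numpy as np
from scipy.stats import norm


def smoothing_certificate(p_a, sigma):
    # Standard L2 certificate from Gaussian randomised smoothing
    # (Cohen et al., 2019): if the smoothed classifier's top-class
    # probability is p_a > 0.5, the prediction cannot change within
    # radius sigma * Phi^{-1}(p_a) of the input.
    if p_a <= 0.5:
        return 0.0  # no certificate when the top class is not dominant
    return sigma * norm.ppf(p_a)


def transitive_certificate(x0, r0, neighbours):
    # Simplest transitivity step (a single-ball lower bound, not the
    # paper's full geometric construction over the boundary of the
    # certified union): if a point x_i sharing x0's predicted class
    # certifies radius r_i, then B(x0, r_i - ||x0 - x_i||) lies inside
    # B(x_i, r_i), so r_i - ||x0 - x_i|| is itself a valid certified
    # radius at x0.
    best = r0
    for x_i, r_i in neighbours:  # neighbours: list of (point, radius) pairs
        best = max(best, r_i - np.linalg.norm(x0 - x_i))
    return best


# Illustrative usage with made-up numbers: a neighbour certified with a
# larger radius can extend the certificate at x0 beyond its own bound.
x0 = np.zeros(3)
r0 = smoothing_certificate(p_a=0.85, sigma=0.5)    # ~0.518
x1 = np.full(3, 0.2)                               # ||x0 - x1|| ~ 0.346
r1 = smoothing_certificate(p_a=0.99, sigma=0.5)    # ~1.163
print(transitive_certificate(x0, r0, [(x1, r1)]))  # ~0.817 > r0

The paper's full method goes further, measuring the smallest distance from the instance to the boundary of the union of certified balls; that can only match or improve on the single-ball containment bound sketched above.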
