Exploiting Certified Defences to Attack Randomised Smoothing

02/09/2023
by   Andrew C. Cullen, et al.

In guaranteeing that no adversarial examples exist within a bounded region, certification mechanisms play an important role in neural network robustness. Concerningly, this work demonstrates that the certification mechanisms themselves introduce a new, heretofore undiscovered attack surface that attackers can exploit to construct smaller adversarial perturbations. While these attacks lie outside the certification region and therefore in no way invalidate the certifications, minimising a perturbation's norm significantly increases the difficulty of detecting the attack. In comparison to baseline attacks, our new framework yields smaller perturbations more than twice as frequently as any other approach, resulting in up to a 34% reduction in the median perturbation norm. It also requires 90% less computational time than approaches like PGD. That these reductions are possible suggests that this new attack vector would allow attackers to more frequently construct hard-to-detect adversarial attacks by exploiting the very systems designed to defend deployed models.
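The attack surface described above stems from the certificate itself: a randomised-smoothing certificate publishes a radius inside which no adversarial example can exist, which tells an attacker exactly where to begin searching. The minimal sketch below computes the standard Gaussian-smoothing certified radius (in the style of Cohen et al.'s formulation, radius = σ·Φ⁻¹(p_A)); it is illustrative only and does not reproduce the paper's actual attack framework.

```python
# Illustrative sketch of a randomised-smoothing L2 certificate.
# Assumptions: Gaussian smoothing noise with std. dev. sigma, and a
# lower bound p_a on the probability of the majority class under noise.
from statistics import NormalDist

def certified_radius(p_a: float, sigma: float) -> float:
    """L2 radius within which the smoothed classifier's prediction
    is guaranteed not to change, given p_a > 0.5."""
    if p_a <= 0.5:
        return 0.0  # no certificate can be issued
    # Phi^{-1}(p_a) via the standard normal inverse CDF
    return sigma * NormalDist().inv_cdf(p_a)

# An attacker who reads this certificate knows any successful
# perturbation must have norm strictly greater than r, and can
# concentrate its search just outside that boundary.
r = certified_radius(p_a=0.9, sigma=0.5)
```

The key observation exploited by the paper is that this radius, intended as a defensive guarantee, doubles as a lower bound that prunes the attacker's search space.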


