Adversarial Vulnerability of Randomized Ensembles

06/14/2022
by Hassan Dbouk, et al.

Despite the tremendous success of deep neural networks across various tasks, their vulnerability to imperceptible adversarial perturbations has hindered their deployment in the real world. Recently, works on randomized ensembles have empirically demonstrated significant improvements in adversarial robustness over standard adversarially trained (AT) models with minimal computational overhead, making them a promising solution for safety-critical, resource-constrained applications. However, this impressive performance raises the question: Are these robustness gains provided by randomized ensembles real? In this work, we address this question both theoretically and empirically. We first establish theoretically that commonly employed robustness evaluation methods such as adaptive PGD provide a false sense of security in this setting. Subsequently, we propose a theoretically sound and efficient adversarial attack algorithm (ARC) capable of compromising randomized ensembles even in cases where adaptive PGD fails to do so. We conduct comprehensive experiments across a variety of network architectures, training schemes, datasets, and norms to support our claims, and empirically establish that randomized ensembles are in fact more vulnerable to ℓ_p-bounded adversarial perturbations than even standard AT models. Our code can be found at https://github.com/hsndbk4/ARC.
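For context, the adaptive PGD baseline referenced in the abstract typically attacks a randomized ensemble by ascending the expected loss under the ensemble's sampling distribution. The sketch below is an illustrative PyTorch implementation of that baseline, not the paper's ARC attack; the function name, hyperparameters, and the expected-loss objective are assumptions for illustration, and the authors' actual code is available at the repository linked above.

```python
import torch
import torch.nn.functional as F

def adaptive_pgd(models, probs, x, y, eps=8/255, alpha=2/255, steps=10):
    """Illustrative l_inf PGD on the expected loss of a randomized ensemble.

    models: list of classifiers forming the randomized ensemble (in eval mode)
    probs:  sampling probability of each member (floats summing to 1)
    Attacks E_{i ~ probs}[CE(f_i(x'), y)], a common "adaptive" objective
    for randomized ensembles; not the ARC algorithm from the paper.
    """
    x = x.clone().detach()
    # random start inside the eps-ball, clipped to the valid pixel range
    x_adv = torch.clamp(x + torch.empty_like(x).uniform_(-eps, eps), 0.0, 1.0)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # expected cross-entropy loss under the ensemble's sampling distribution
        loss = sum(p * F.cross_entropy(m(x_adv), y) for p, m in zip(probs, models))
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()            # ascent step
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)           # project to eps-ball
        x_adv = torch.clamp(x_adv, 0.0, 1.0)
    return x_adv.detach()
```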


Related research

12/20/2019 - Certified Robustness for Top-k Predictions against Adversarial Perturbations via Randomized Smoothing
It is well-known that classifiers are vulnerable to adversarial perturba...

02/02/2023 - On the Robustness of Randomized Ensembles to Adversarial Perturbations
Randomized ensemble classifiers (RECs), where one classifier is randomly...

10/28/2021 - Generalized Depthwise-Separable Convolutions for Adversarially Robust and Efficient Neural Networks
Despite their tremendous successes, convolutional neural networks (CNNs)...

04/19/2022 - Jacobian Ensembles Improve Robustness Trade-offs to Adversarial Attacks
Deep neural networks have become an integral part of our software infras...

11/17/2019 - Smoothed Inference for Adversarially-Trained Models
Deep neural networks are known to be vulnerable to inputs with malicious...

11/30/2022 - Towards Interpreting Vulnerability of Multi-Instance Learning via Customized and Universal Adversarial Perturbations
Multi-instance learning (MIL) is a great paradigm for dealing with compl...

07/05/2022 - PRoA: A Probabilistic Robustness Assessment against Functional Perturbations
In safety-critical deep learning applications robustness measurement is ...
