Protecting Neural Networks with Hierarchical Random Switching: Towards Better Robustness-Accuracy Trade-off for Stochastic Defenses

08/20/2019
by Xiao Wang, et al.

Despite achieving remarkable success in various domains, deep neural networks have recently been shown to be vulnerable to adversarial perturbations, raising concerns about model generalizability and enabling new threats such as prediction-evasive misclassification and stealthy reprogramming. Among different defense proposals, stochastic network defenses, such as random neuron activation pruning or random perturbation of layer inputs, have shown promise for attack mitigation. However, one critical drawback of current stochastic defenses is that the robustness enhancement comes at the cost of noticeable performance degradation on legitimate data, e.g., a large drop in test accuracy. This paper is motivated by the pursuit of a better trade-off between adversarial robustness and test accuracy for stochastic network defenses. We propose the Defense Efficiency Score (DES), a comprehensive metric that measures the gain in unsuccessful attack attempts relative to the drop in test accuracy incurred by any defense. To achieve a better DES, we propose hierarchical random switching (HRS), which protects neural networks through a novel randomization scheme. An HRS-protected model contains several blocks of randomly switching channels to prevent adversaries from exploiting fixed model structures and parameters for their malicious purposes. Extensive experiments show that HRS is superior in defending against state-of-the-art white-box and adaptive adversarial misclassification attacks. We also demonstrate the effectiveness of HRS in defending against adversarial reprogramming, making it the first defense against adversarial programs. Moreover, in most settings the average DES of HRS is at least 5X higher than that of current stochastic network defenses, validating its significantly improved robustness-accuracy trade-off.
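The core HRS idea described above, a stack of blocks where each block randomly switches among several interchangeable channels at inference time, can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: the channel functions here are toy arithmetic stand-ins rather than trained sub-networks, and the `defense_efficiency_score` ratio is only one plausible reading of the verbal DES definition (gain in unsuccessful attacks per unit of accuracy drop); the exact formula is given in the paper.

```python
import random

def make_hrs_model(blocks, rng=random):
    """Sketch of hierarchical random switching (HRS).

    blocks: list of blocks, each block a list of callable "channels".
    On every forward pass, one channel per block is sampled at random,
    so an adversary cannot exploit a single fixed computation path.
    """
    def forward(x):
        for channels in blocks:
            channel = rng.choice(channels)  # the random switch
            x = channel(x)
        return x
    return forward

def defense_efficiency_score(defense_rate_gain, accuracy_drop):
    """Hypothetical reading of DES: gain in unsuccessful attack
    attempts per percentage point of test-accuracy drop."""
    return defense_rate_gain / accuracy_drop

# Toy example: two blocks, each with two interchangeable channels.
# Channels within a block are identical here, so the output is
# deterministic even though the switching path is random.
model = make_hrs_model([
    [lambda x: x + 1, lambda x: x + 1],  # block 1 channels
    [lambda x: 2 * x, lambda x: 2 * x],  # block 2 channels
])
print(model(3))  # -> 8: (3 + 1) * 2, whichever channels are chosen
```

In a real HRS model each channel would be an independently trained sub-network with comparable accuracy, so switching changes the internal computation path without changing the predicted label on legitimate inputs.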


Related research

02/18/2020
Block Switching: A Stochastic Approach for Deep Learning Security
Recent study of adversarial attacks has revealed the vulnerability of mo...

07/08/2021
Output Randomization: A Novel Defense for both White-box and Black-box Adversarial Models
Adversarial examples pose a threat to deep neural network models in a va...

05/10/2019
Interpreting and Evaluating Neural Network Robustness
Recently, adversarial deception becomes one of the most considerable thr...

02/19/2020
AdvMS: A Multi-source Multi-cost Defense Against Adversarial Attacks
Designing effective defense against adversarial attacks is a crucial top...

03/11/2022
An integrated Auto Encoder-Block Switching defense approach to prevent adversarial attacks
According to recent studies, the vulnerability of state-of-the-art Neura...

06/19/2022
On the Limitations of Stochastic Pre-processing Defenses
Defending against adversarial examples remains an open problem. A common...

05/23/2023
Adversarial Defenses via Vector Quantization
Building upon Randomized Discretization, we develop two novel adversaria...
