Block Switching: A Stochastic Approach for Deep Learning Security

02/18/2020
by Xiao Wang, et al.

Recent studies of adversarial attacks have revealed the vulnerability of modern deep learning models: subtly crafted perturbations of the input can make a highly accurate trained network produce arbitrary incorrect predictions, while remaining imperceptible to the human visual system. In this paper, we introduce Block Switching (BS), a defense strategy against adversarial attacks based on stochasticity. BS replaces a block of model layers with multiple parallel channels, and the active channel is randomly assigned at run time, making it unpredictable to the adversary. We show empirically that BS leads to a more dispersed input gradient distribution and superior defense effectiveness compared with other stochastic defenses such as stochastic activation pruning (SAP). Compared to other defenses, BS is also characterized by the following features: (i) BS causes a smaller drop in test accuracy; (ii) BS is attack-independent; and (iii) BS is compatible with other defenses and can be used jointly with them.
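To make the switching mechanism concrete, below is a minimal PyTorch sketch of such a block. The framework choice, the channel architecture, and the names BlockSwitch and make_channel are illustrative assumptions, not the authors' exact configuration: the point is only that one of several shape-compatible parallel channels is drawn uniformly at random on each forward pass.

import random
import torch
import torch.nn as nn

class BlockSwitch(nn.Module):
    # A block of layers replaced by several parallel channels; one channel
    # is selected uniformly at random on every forward pass, so the gradient
    # path seen by an adversary changes from query to query.
    def __init__(self, channels):
        super().__init__()
        self.channels = nn.ModuleList(channels)

    def forward(self, x):
        idx = random.randrange(len(self.channels))  # active channel, chosen at run time
        return self.channels[idx](x)

def make_channel():
    # One candidate lower block for a CIFAR-scale CNN (illustrative only).
    return nn.Sequential(
        nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
    )

model = nn.Sequential(
    BlockSwitch([make_channel() for _ in range(4)]),  # stochastic lower block
    nn.Flatten(),
    nn.Linear(64 * 16 * 16, 10),  # shared upper layers (10-class output)
)

logits = model(torch.randn(8, 3, 32, 32))  # the active channel differs per call

Because all channels map the same input shape to the same output shape, they are interchangeable at run time; an attacker estimating the input gradient effectively averages over channels, which is consistent with the more dispersed gradient distribution reported above.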


Related research

08/20/2019 · Protecting Neural Networks with Hierarchical Random Switching: Towards Better Robustness-Accuracy Trade-off for Stochastic Defenses
Despite achieving remarkable success in various domains, recent studies ...

05/17/2019 · A critique of the DeepSec Platform for Security Analysis of Deep Learning Models
At IEEE S&P 2019, the paper "DeepSec: A Uniform Platform for Security An...

03/11/2022 · An integrated Auto Encoder-Block Switching defense approach to prevent adversarial attacks
According to recent studies, the vulnerability of state-of-the-art Neura...

09/16/2021 · Adversarial Attacks against Deep Learning Based Power Control in Wireless Communications
We consider adversarial machine learning based attacks on power allocati...

10/08/2021 · Game Theory for Adversarial Attacks and Defenses
Adversarial attacks can generate adversarial inputs by applying small bu...

02/24/2022 · Towards Effective and Robust Neural Trojan Defenses via Input Filtering
Trojan attacks on deep neural networks are both dangerous and surreptiti...

10/09/2018 · The Adversarial Attack and Detection under the Fisher Information Metric
Many deep learning models are vulnerable to the adversarial attack, i.e....
