
Cross-Layer Strategic Ensemble Defense Against Adversarial Examples

10/01/2019
by   Wenqi Wei, et al.
Georgia Institute of Technology

Deep neural networks (DNNs) have demonstrated success across multiple domains. However, DNN models are inherently vulnerable to adversarial examples, which are generated by adding adversarial perturbations to benign inputs to fool the DNN model into misclassifying them. In this paper, we present a cross-layer strategic ensemble framework and a suite of robust defense algorithms, which are attack-independent and capable of auto-repairing and auto-verifying the target model under attack. Our strategic ensemble approach makes three original contributions. First, we employ input-transformation diversity to design the input-layer strategic transformation ensemble algorithms. Second, we utilize model-disagreement diversity to develop the output-layer strategic model ensemble algorithms. Finally, we create an input-output cross-layer strategic ensemble defense that strengthens defensibility by combining diverse input-transformation-based model ensembles with diverse output verification model ensembles. Evaluated over 10 attacks on the ImageNet dataset, we show that our strategic ensemble defense algorithms achieve high defense success rates and are more robust, with high attack prevention success rates and low benign false negative rates, compared to existing representative defense methods.
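The cross-layer idea in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the transformations (quantization, clipping, smoothing), the toy classifiers, and the agreement threshold are all hypothetical stand-ins for the diverse input transformations and model-disagreement-diverse DNNs the framework would actually use. The input layer produces transformed variants of the input; the output layer collects the predictions of each (transform, model) pair and verifies them by majority vote, flagging low-agreement inputs as suspected adversarial examples.

```python
from collections import Counter

# Illustrative input transformations (placeholders for the diverse
# transformations an input-layer ensemble might apply).
def quantize(x, step=0.1):
    return [round(v / step) * step for v in x]

def clip(x, lo=0.0, hi=1.0):
    return [min(max(v, lo), hi) for v in x]

def smooth(x):
    # Simple neighbor averaging as a denoising stand-in.
    return [(x[max(i - 1, 0)] + x[i] + x[min(i + 1, len(x) - 1)]) / 3
            for i in range(len(x))]

def ensemble_defense(models, transforms, x, agreement_threshold=0.5):
    """Cross-layer strategic ensemble (sketch).

    Input layer: each transform yields a variant of x.
    Output layer: each (transform, model) pair votes on a label.
    If the winning label's vote fraction falls below the threshold,
    the input is flagged as a suspected adversarial example.
    """
    votes = [m(t(x)) for t in transforms for m in models]
    label, count = Counter(votes).most_common(1)[0]
    agreement = count / len(votes)
    return label, agreement >= agreement_threshold

# Toy classifiers thresholding the input mean (stand-ins for DNNs
# chosen for model-disagreement diversity).
def model_a(x):
    return int(sum(x) / len(x) > 0.5)

def model_b(x):
    return int(sum(x) / len(x) > 0.45)

label, trusted = ensemble_defense(
    [model_a, model_b], [quantize, clip, smooth], [0.9, 0.8, 0.7, 0.95])
print(label, trusted)  # all six votes agree here, so the input is trusted
```

An adversarial perturbation that flips one (transform, model) pair's prediction but not the others would lower the agreement score, which is what lets disagreement act as a verification signal.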

