Robust Deep Learning Ensemble against Deception

09/14/2020
by Wenqi Wei, et al.

Deep neural network (DNN) models are known to be vulnerable to maliciously crafted adversarial examples and to out-of-distribution inputs drawn sufficiently far from the training data. How to protect a machine learning model against both types of deceptive inputs remains an open challenge. This paper presents XEnsemble, a diversity-ensemble verification methodology for enhancing the adversarial robustness of DNN models against deception caused by either adversarial examples or out-of-distribution inputs. XEnsemble by design has three unique capabilities. First, XEnsemble builds diverse input-denoising verifiers by leveraging different data-cleaning techniques. Second, XEnsemble develops a disagreement-diversity ensemble learning methodology for guarding the output of the prediction model against deception. Third, XEnsemble provides a suite of algorithms that combine input verification and output verification to protect DNN prediction models from both adversarial examples and out-of-distribution inputs. Evaluated using eleven popular adversarial attacks and two representative out-of-distribution datasets, XEnsemble achieves a high defense success rate against adversarial examples and a high detection success rate against out-of-distribution inputs, outperforming representative existing defense methods in robustness and defensibility.
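To make the input-denoising-plus-disagreement idea concrete, here is a minimal illustrative sketch in NumPy. The denoisers (bit-depth reduction and median smoothing, in the spirit of feature-squeezing defenses), the function name `xensemble_verify`, and the disagreement threshold are all assumptions for illustration; they are not taken from the paper, which builds its verifiers and ensemble rules differently.

```python
import numpy as np

def reduce_bit_depth(x, bits=4):
    # Quantize feature values to 2^bits levels (a simple data-cleaning denoiser).
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def median_smooth(x, k=3):
    # Simple 1-D median filter as a stand-in for spatial smoothing.
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    return np.array([np.median(xp[i:i + k]) for i in range(len(x))])

def xensemble_verify(model, x, denoisers, threshold=1):
    """Hypothetical input-verification step: run the model on each
    denoised variant of x and reject the input when the predictions
    disagree with the majority more than `threshold` times."""
    preds = [model(d(x)) for d in denoisers]
    majority = max(set(preds), key=preds.count)
    disagreements = sum(p != majority for p in preds)
    if disagreements > threshold:
        return None  # reject: likely adversarial or out-of-distribution
    return majority
```

A benign input yields the same label under every denoiser and passes through; an input whose prediction flips under mild cleaning is flagged, which is the intuition behind using denoiser diversity for verification.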

