Robust Deep Learning Ensemble against Deception

09/14/2020
by Wenqi Wei, et al.

Deep neural network (DNN) models are known to be vulnerable both to maliciously crafted adversarial examples and to out-of-distribution inputs drawn sufficiently far from the training data. Protecting a machine learning model against both types of deceptive input remains an open challenge. This paper presents XEnsemble, a diversity-ensemble verification methodology for enhancing the adversarial robustness of DNN models against deception by either adversarial examples or out-of-distribution inputs. XEnsemble by design has three unique capabilities. First, it builds diverse input denoising verifiers by leveraging different data-cleaning techniques. Second, it develops a disagreement-diversity ensemble learning methodology for guarding the output of the prediction model against deception. Third, it provides a suite of algorithms that combine input verification and output verification to protect DNN prediction models from both adversarial examples and out-of-distribution inputs. Evaluated against eleven popular adversarial attacks and two representative out-of-distribution datasets, XEnsemble achieves a high defense success rate against adversarial examples and a high detection success rate against out-of-distribution inputs, and it outperforms representative existing defense methods in robustness and defensibility.
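To make the denoising-verification idea concrete, here is a minimal sketch of an input-denoising, output-disagreement check of the kind the abstract describes. It is not the authors' XEnsemble implementation: the choice of denoisers (identity, bit-depth reduction, median smoothing), the ensemble_verify function, and the agreement threshold are illustrative assumptions, and model stands for any classifier that maps an image batch to class probabilities.

    # Sketch only: hypothetical denoising-ensemble verification, not the
    # paper's XEnsemble code. `model` is any callable returning a batch of
    # class-probability vectors.
    import numpy as np
    from scipy.ndimage import median_filter

    def bit_depth_reduce(x, bits=4):
        # Quantize pixel values (assumed in [0, 1]) to 2**bits levels.
        levels = 2 ** bits - 1
        return np.round(x * levels) / levels

    def median_smooth(x, size=2):
        # Small median filter over height and width, leaving channels intact.
        return median_filter(x, size=(size, size, 1))

    DENOISERS = [lambda x: x, bit_depth_reduce, median_smooth]  # identity + two cleaners

    def ensemble_verify(model, x, min_agreement=0.6):
        # Predict on every denoised view of x (an H x W x C image in [0, 1]).
        preds = [int(np.argmax(model(d(x)[None])[0])) for d in DENOISERS]
        labels, counts = np.unique(preds, return_counts=True)
        majority = int(labels[np.argmax(counts)])
        agreement = counts.max() / len(preds)
        # Flag the input when the denoised views disagree too much: a proxy
        # for rejecting adversarial or out-of-distribution inputs.
        return majority, agreement < min_agreement

The paper goes further than this input-side check: XEnsemble pairs such denoising verifiers with a disagreement-diversity ensemble of output models and combines the two verdicts when deciding whether to trust a prediction.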


Related Research

Cross-Layer Strategic Ensemble Defense Against Adversarial Examples (10/01/2019)
Deep neural network (DNN) has demonstrated its success in multiple domai...

Denoising and Verification Cross-Layer Ensemble Against Black-box Adversarial Attacks (08/21/2019)
Deep neural networks (DNNs) have demonstrated impressive performance on ...

Sardino: Ultra-Fast Dynamic Ensemble for Secure Visual Sensing at Mobile Edge (04/18/2022)
Adversarial example attack endangers the mobile edge systems such as veh...

Deep Neural Network Fingerprinting by Conferrable Adversarial Examples (12/02/2019)
In Machine Learning as a Service, a provider trains a deep neural networ...

Rethinking Machine Learning Robustness via its Link with the Out-of-Distribution Problem (02/18/2022)
Despite multiple efforts made towards robust machine learning (ML) model...

Adversarial-Playground: A Visualization Suite for Adversarial Sample Generation (06/06/2017)
With growing interest in adversarial machine learning, it is important f...

Efficient and Transferable Adversarial Examples from Bayesian Neural Networks (11/10/2020)
Deep neural networks are vulnerable to evasion attacks, i.e., carefully ...