
Morphence: Moving Target Defense Against Adversarial Examples

by Abderrahmen Amich, et al.
University of Michigan

Robustness of machine learning models to adversarial examples remains an open research topic. Attacks often succeed by repeatedly probing a fixed target model with adversarial examples purposely crafted to fool it. In this paper, we introduce Morphence, an approach that shifts the defense landscape by making a model a moving target against adversarial examples. By regularly moving the decision function of a model, Morphence makes it significantly harder for repeated or correlated attacks to succeed. Morphence deploys a pool of models generated from a base model in a manner that introduces sufficient randomness when it responds to prediction queries. To ensure repeated or correlated attacks fail, the deployed pool of models automatically expires after a query budget is reached, and the pool is seamlessly replaced by a new one generated in advance. We evaluate Morphence on two benchmark image classification datasets (MNIST and CIFAR10) against five reference attacks (two white-box and three black-box). In all cases, Morphence consistently outperforms adversarial training, the thus-far most effective defense, even in the face of strong white-box attacks, while preserving accuracy on clean data.
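The serving loop the abstract describes — a pool of model variants, one chosen at random per query, with the pool retired once a query budget is spent and swapped for a pre-generated replacement — can be sketched as follows. This is an illustrative sketch, not the paper's implementation; the class name, the `make_variant` hook (how variants are derived from the base model), and the default pool size and budget are all assumptions for the example.

```python
import random

class MorphencePoolSketch:
    """Minimal sketch of the moving-target serving scheme (illustrative only).

    Prediction queries are answered by a randomly chosen member of a pool
    of models derived from a base model. When the query budget is reached,
    the pool expires and is replaced by one generated in advance.
    """

    def __init__(self, base_model, make_variant, pool_size=5, query_budget=1000):
        # make_variant is an assumed user-supplied hook that returns a
        # perturbed copy of the base model (how variants are built is
        # detailed in the paper, not here).
        self.base_model = base_model
        self.make_variant = make_variant
        self.pool_size = pool_size
        self.query_budget = query_budget
        self.pool = self._new_pool()
        self.next_pool = self._new_pool()   # prepared ahead of time for a seamless swap
        self.queries_served = 0

    def _new_pool(self):
        return [self.make_variant(self.base_model) for _ in range(self.pool_size)]

    def predict(self, x):
        if self.queries_served >= self.query_budget:
            # Expire the current pool, promote the pre-generated one,
            # and start building the next replacement.
            self.pool, self.next_pool = self.next_pool, self._new_pool()
            self.queries_served = 0
        model = random.choice(self.pool)    # per-query randomness in responses
        self.queries_served += 1
        return model(x)
```

Because an attacker's repeated probes are answered by different pool members, and the entire pool rotates once the budget is exhausted, gradients and query feedback collected against one model do not reliably transfer to the model that answers the next query.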
