Adversarial Examples in Constrained Domains

by Ryan Sheatsley, et al.

Machine learning algorithms have been shown to be vulnerable to adversarial manipulation through systematic modification of inputs (e.g., adversarial examples) in domains such as image recognition. Under the default threat model, the adversary exploits the unconstrained nature of images; each feature (pixel) is fully under the control of the adversary. However, it is not clear how these attacks translate to constrained domains that limit which features can be modified by the adversary, and how (e.g., network intrusion detection). In this paper, we explore whether constrained domains are less vulnerable than unconstrained domains to adversarial example generation algorithms. We create an algorithm for generating adversarial sketches: targeted universal perturbation vectors that encode feature saliency within the envelope of domain constraints. To assess how these algorithms perform, we evaluate them in constrained (e.g., network intrusion detection) and unconstrained (e.g., image recognition) domains. The results demonstrate that our approaches generate misclassification rates in constrained domains comparable to those of unconstrained domains (greater than 95%). This shows that the narrow attack surface exposed by constrained domains is still sufficiently large to craft successful adversarial examples; thus, constraints do not appear to make a domain robust. Indeed, with as few as five randomly selected features, one can still generate adversarial examples.
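The core idea of a constrained attack surface can be illustrated with a toy sketch (this is not the paper's adversarial-sketch algorithm): an FGSM-style perturbation step applied to a simple linear classifier, where a binary mask restricts the attack to a handful of adversary-controlled features. The model, mask, and step-size rule below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features = 20

# Toy linear classifier standing in for the target model: class 1 iff w @ x + b > 0.
w = rng.normal(size=n_features)
b = 0.0

def predict(x):
    return int(w @ x + b > 0)

def masked_perturbation(x, mask):
    """FGSM-style step confined to the features allowed by `mask`.

    For a linear model the input gradient of the score is simply `w`,
    so the step moves each modifiable feature against the current score,
    with a magnitude chosen just large enough to cross the boundary.
    """
    score = w @ x + b
    budget = (mask * np.abs(w)).sum()   # total leverage of the modifiable features
    eps = 1.1 * abs(score) / budget     # overshoot the decision boundary by 10%
    return x - eps * np.sign(score) * np.sign(w) * mask

# A sample the model places in class 1.
x = np.abs(rng.normal(size=n_features)) * np.sign(w)

# Constrained domain: only 5 of the 20 features are under adversarial control.
mask = np.zeros(n_features)
mask[rng.choice(n_features, size=5, replace=False)] = 1.0

x_adv = masked_perturbation(x, mask)
print(predict(x), predict(x_adv))  # the masked step flips the prediction
```

Even with 75% of the features frozen, the remaining budget suffices to flip the prediction, mirroring the paper's observation that five randomly selected features can be enough, albeit in a deliberately simple linear setting.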


