The Space of Adversarial Strategies

09/09/2022
by Ryan Sheatsley, et al.

Adversarial examples, inputs designed to induce worst-case behavior in machine learning models, have been extensively studied over the past decade. Yet our understanding of this phenomenon stems from a rather fragmented pool of knowledge; at present, there are a handful of attacks, each with disparate assumptions in threat models and incomparable definitions of optimality. In this paper, we propose a systematic approach to characterizing worst-case (i.e., optimal) adversaries. We first introduce an extensible decomposition of attacks in adversarial machine learning by atomizing attack components into surfaces and travelers. With our decomposition, we enumerate over components to create 576 attacks (568 of which were previously unexplored). Next, we propose the Pareto Ensemble Attack (PEA): a theoretical attack that upper-bounds attack performance. With our new attacks, we measure performance relative to the PEA on both robust and non-robust models, seven datasets, and three extended ℓp-based threat models incorporating compute costs, formalizing the Space of Adversarial Strategies. From our evaluation, we find attack performance to be highly contextual: the domain, model robustness, and threat model can have a profound influence on attack efficacy. Our investigation suggests that future studies measuring the security of machine learning should (1) be contextualized to the domain's threat models, and (2) go beyond the handful of known attacks used today.
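The decomposition described above lends itself to a simple combinatorial enumeration: each attack is one choice per component, and the PEA acts as a per-input lower envelope over all enumerated attacks. The sketch below illustrates the idea only; the component names, pool sizes, and losses are hypothetical placeholders, not the paper's actual surfaces and travelers (which yield 576 attacks).

```python
import random
from itertools import product

# Hypothetical component pools -- illustrative stand-ins for the paper's
# surface components (e.g., loss, saliency map) and traveler components
# (e.g., optimizer, random start, change of variables).
surfaces = {
    "loss": ["CE", "CW", "DLR"],
    "saliency": ["none", "jacobian", "deepfool-like"],
}
travelers = {
    "optimizer": ["SGD", "Adam", "momentum", "backtracking"],
    "random_start": [False, True],
    "change_of_variables": [False, True],
}

# Enumerate every combination of components: one attack per tuple.
components = {**surfaces, **travelers}
attacks = [dict(zip(components, combo))
           for combo in product(*components.values())]
print(len(attacks))  # 3 * 3 * 4 * 2 * 2 = 144 attacks in this sketch

# PEA as a per-input envelope: for each input, take the best (here,
# maximum) loss achieved by any attack. Losses are random placeholders.
random.seed(0)
n_inputs = 5
per_attack_losses = {i: [random.random() for _ in range(n_inputs)]
                     for i in range(len(attacks))}
pea_loss = [max(per_attack_losses[a][x] for a in per_attack_losses)
            for x in range(n_inputs)]
```

By construction, the envelope dominates every individual attack on every input, which is what makes the PEA a useful upper bound on achievable attack performance.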


