Your Attack Is Too DUMB: Formalizing Attacker Scenarios for Adversarial Transferability

06/27/2023
by Marco Alecci, et al.

Evasion attacks are a threat to machine learning models, where adversaries attempt to mislead classifiers by submitting maliciously perturbed samples. An alarming side-effect of evasion attacks is their tendency to transfer among different models: this property is called transferability. An attacker can therefore craft adversarial samples on a custom model (the surrogate) and later deploy them against a victim organization's model. Although the literature widely discusses how adversaries can transfer their attacks, the experimental settings in prior work are limited and far from reality: for instance, many experiments assume that attacker and defender share the same dataset, balance level (i.e., how the ground truth is distributed), and model architecture. In this work, we propose the DUMB attacker model, a framework for analyzing whether evasion attacks fail to transfer when the training conditions of surrogate and victim models differ. DUMB considers three conditions: Dataset soUrces, Model architecture, and the Balance of the ground truth. We then propose a novel testbed for evaluating state-of-the-art evasion attacks under DUMB; it spans three computer vision tasks with two distinct datasets each, four balance levels, and three model architectures. Our analysis, which generated 13K tests over 14 distinct attacks, yields numerous novel findings on transferable attacks with surrogate models. In particular, mismatches between attacker and victim in dataset source, balance level, or model architecture lead to a non-negligible loss of attack performance.
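The core of the framework is the grid of attacker/victim training-condition mismatches across the three DUMB dimensions. Below is a minimal Python sketch of how such a grid can be enumerated; the dimension values (source_A, 50/50, resnet, and so on) are hypothetical placeholders, since the abstract does not name the actual datasets, balance levels, or architectures used in the testbed.

```python
from collections import Counter
from itertools import product

# Hypothetical placeholders for the testbed's three DUMB dimensions; the
# real dataset names, balance levels, and architectures are defined in the
# paper's full text, not in this abstract.
DATASET_SOURCES = ["source_A", "source_B"]             # two datasets per task
BALANCE_LEVELS = ["50/50", "60/40", "70/30", "80/20"]  # four balance levels
ARCHITECTURES = ["alexnet", "resnet", "vgg"]           # three architectures

def mismatch_profile(surrogate, victim):
    """Return the DUMB dimensions on which surrogate and victim differ.

    Each configuration is a (dataset_source, balance, architecture) tuple.
    """
    names = ("dataset_source", "balance", "architecture")
    return tuple(n for n, s, v in zip(names, surrogate, victim) if s != v)

# All training configurations for a single task.
configs = list(product(DATASET_SOURCES, BALANCE_LEVELS, ARCHITECTURES))

# Bucket every surrogate/victim pairing by mismatch case. The empty bucket
# (identical training conditions) is the idealized setting most prior work
# evaluates; every other bucket is a DUMB scenario.
cases = Counter(mismatch_profile(s, v) for s, v in product(configs, configs))
for case, count in sorted(cases.items(), key=lambda kv: (len(kv[0]), kv[0])):
    label = " + ".join(case) if case else "identical training conditions"
    print(f"{label}: {count} surrogate/victim pairs")
```

Under these placeholder dimension sizes, a single task already yields 24 configurations and 576 surrogate/victim pairings, which gives a sense of how combining multiple tasks with 14 attacks grows to the roughly 13K tests reported in the abstract (the exact evaluation protocol is in the full text).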
