When Does Machine Learning FAIL? Generalized Transferability for Evasion and Poisoning Attacks

03/19/2018
by Octavian Suciu, et al.

Attacks against machine learning systems represent a growing threat, as highlighted by the many attacks proposed in recent years. However, these attacks often make unrealistic assumptions about the adversary's knowledge and capabilities. To evaluate this threat systematically, we propose the FAIL attacker model, which describes the adversary's knowledge and control along four dimensions. The FAIL model allows us to consider a wide range of weaker adversaries that have limited control and incomplete knowledge of the features, learning algorithms, and training instances used by the victim. Within this framework, we evaluate the generalized transferability of a known evasion attack, and we design StingRay, a targeted poisoning attack that is broadly applicable: it is practical against 4 machine learning applications, which use 3 different learning algorithms, and it can bypass 2 existing defenses. Our evaluation provides deeper insights into the transferability of poison and evasion samples across models and suggests promising directions for investigating defenses against this threat.
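To make the four dimensions concrete, the sketch below models a FAIL-style attacker profile. This is an illustrative approximation, not code from the paper: the field names are assumptions derived from the features, learning algorithms, training instances, and control ("leverage") that the abstract mentions, and the `is_white_box` check simply marks the strongest point of that space.

```python
from dataclasses import dataclass

@dataclass
class FAILAttacker:
    feature_knowledge: float   # fraction of the feature space known to the adversary
    algorithm_knowledge: bool  # whether the adversary knows the learning algorithm
    instance_knowledge: float  # fraction of training instances known
    leverage: float            # fraction of features/instances the adversary can modify

    def is_white_box(self) -> bool:
        # A conventional white-box attacker sits at the corner of the space:
        # full knowledge along every dimension and full control.
        return (self.feature_knowledge == 1.0 and self.algorithm_knowledge
                and self.instance_knowledge == 1.0 and self.leverage == 1.0)

# A weaker adversary of the kind the FAIL model is designed to capture,
# versus the idealized white-box adversary most attacks assume.
weak = FAILAttacker(feature_knowledge=0.5, algorithm_knowledge=False,
                    instance_knowledge=0.1, leverage=0.2)
strong = FAILAttacker(feature_knowledge=1.0, algorithm_knowledge=True,
                      instance_knowledge=1.0, leverage=1.0)
print(weak.is_white_box(), strong.is_white_box())  # False True
```

Sweeping such profiles over the four dimensions is what lets an evaluation report how attack success degrades as knowledge and control shrink, rather than reporting only the white-box corner case.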


Related research

- 06/27/2023, "Your Attack Is Too DUMB: Formalizing Attacker Scenarios for Adversarial Transferability": Evasion attacks are a threat to machine learning models, where adversari...
- 09/08/2018, "On the Intriguing Connections of Regularization, Input Gradients and Transferability of Evasion and Poisoning Attacks": Transferability captures the ability of an attack against a machine-lear...
- 03/26/2020, "Obliviousness Makes Poisoning Adversaries Weaker": Poisoning attacks have emerged as a significant security threat to machi...
- 05/02/2021, "Who's Afraid of Adversarial Transferability?": Adversarial transferability, namely the ability of adversarial perturbat...
- 08/31/2023, "Everyone Can Attack: Repurpose Lossy Compression as a Natural Backdoor Attack": The vulnerabilities to backdoor attacks have recently threatened the tru...
- 04/18/2022, "Automatic Hardware Trojan Insertion using Machine Learning": Due to the current horizontal business model that promotes increasing re...
- 12/03/2021, "Attack-Centric Approach for Evaluating Transferability of Adversarial Samples in Machine Learning Models": Transferability of adversarial samples became a serious concern due to t...
