"Real Attackers Don't Compute Gradients": Bridging the Gap Between Adversarial ML Research and Practice

12/29/2022
by Giovanni Apruzzese, et al.

Recent years have seen a proliferation of research on adversarial machine learning. Numerous papers demonstrate powerful algorithmic attacks against a wide variety of machine learning (ML) models, and numerous other papers propose defenses that can withstand most attacks. However, abundant real-world evidence suggests that actual attackers use simple tactics to subvert ML-driven systems, and as a result security practitioners have not prioritized adversarial ML defenses. Motivated by the apparent gap between researchers and practitioners, this position paper aims to bridge the two domains. We first present three real-world case studies from which we can glean practical insights unknown or neglected in research. Next, we analyze all adversarial ML papers recently published in top security conferences, highlighting positive trends and blind spots. Finally, we state positions on precise and cost-driven threat modeling, collaboration between industry and academia, and reproducible research. We believe that our positions, if adopted, will increase the real-world impact of future endeavours in adversarial ML, bringing both researchers and practitioners closer to their shared goal of improving the security of ML systems.


research · 02/04/2020 · Adversarial Machine Learning – Industry Perspectives
Based on interviews with 28 organizations, we found that industry practi...

research · 07/11/2021 · Adversarial for Good? How the Adversarial ML Community's Values Impede Socially Beneficial Uses of Attacks
Attacks from adversarial machine learning (ML) have the potential to be ...

research · 04/30/2023 · SoK: Pragmatic Assessment of Machine Learning for Network Intrusion Detection
Machine Learning (ML) has become a valuable asset to solve many real-wor...

research · 11/19/2019 · Deep Detector Health Management under Adversarial Campaigns
Machine learning models are vulnerable to adversarial inputs that induce...

research · 12/03/2020 · Ethical Testing in the Real World: Evaluating Physical Testing of Adversarial Machine Learning
This paper critically assesses the adequacy and representativeness of ph...

research · 10/24/2022 · Cybersecurity in the Smart Grid: Practitioners' Perspective
The Smart Grid (SG) is a cornerstone of modern society, providing the en...

research · 07/11/2022 · "Why do so?" – A Practical Perspective on Machine Learning Security
Despite the large body of academic work on machine learning security, li...