Legal Risks of Adversarial Machine Learning Research

06/29/2020
by Ram Shankar Siva Kumar, et al.

Adversarial machine learning is booming, with ML researchers increasingly targeting commercial ML systems, such as those used by Facebook, Tesla, Microsoft, IBM, and Google, to demonstrate vulnerabilities. In this paper, we ask, "What are the potential legal risks to adversarial ML researchers when they attack ML systems?" Studying or testing the security of any operational system potentially runs afoul of the Computer Fraud and Abuse Act (CFAA), the primary United States federal statute that creates liability for hacking, and we claim that adversarial ML research is likely no different. Our analysis shows that because courts are split on how to interpret the CFAA, aspects of adversarial ML attacks, such as model inversion, membership inference, model stealing, reprogramming the ML system, and poisoning attacks, may be penalized in some jurisdictions and not in others. We conclude with an analysis predicting how the US Supreme Court may resolve some of the present inconsistencies in the CFAA's application in Van Buren v. United States, an appeal expected to be decided in 2021. We argue that the court is likely to adopt a narrow construction of the CFAA, and that this will actually lead to better adversarial ML security outcomes in the long term.
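To make one of the attack classes named above concrete, here is a minimal, self-contained sketch of membership inference. Everything in it (the toy 1-D data, the logistic model, the 0.9 confidence threshold) is illustrative and not taken from the paper: the idea is simply that an overfit model is more confident on its own training points, so an attacker can threshold the model's confidence to guess whether a given example was in the training set.

```python
import math

# Illustrative 1-D data: (feature, label). The "members" are the
# model's training set; the "non-members" were never seen in training.
members = [(-2.0, 0), (-1.5, 0), (1.5, 1), (2.0, 1)]
non_members = [(-0.2, 0), (0.1, 1), (0.3, 0), (-0.1, 1)]

# Overfit a tiny logistic-regression model on the members only.
w, b = 0.0, 0.0
for _ in range(2000):
    for x, y in members:
        p = 1 / (1 + math.exp(-(w * x + b)))
        w += 0.1 * (y - p) * x
        b += 0.1 * (y - p)

def confidence(x, y):
    """Model's confidence in the true label y for input x."""
    p = 1 / (1 + math.exp(-(w * x + b)))
    return p if y == 1 else 1 - p

def infer_member(x, y, threshold=0.9):
    """Membership inference: guess 'training member' when the model
    is very confident on the example's true label."""
    return confidence(x, y) >= threshold

print([infer_member(x, y) for x, y in members])      # training points: flagged
print([infer_member(x, y) for x, y in non_members])  # unseen points: not flagged
```

Note that the attacker needs only query access to the model's confidence scores, not its weights, which is what makes this attack class relevant to the deployed commercial systems the paper discusses.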


Related research

- Law and Adversarial Machine Learning (10/25/2018): When machine learning systems fail because of adversarial manipulation, ...
- Politics of Adversarial Machine Learning (02/01/2020): In addition to their security properties, adversarial machine-learning a...
- Adversarial for Good? How the Adversarial ML Community's Values Impede Socially Beneficial Uses of Attacks (07/11/2021): Attacks from adversarial machine learning (ML) have the potential to be ...
- Learned Systems Security (12/20/2022): A learned system uses machine learning (ML) internally to improve perfor...
- Non-Determinism and the Lawlessness of ML Code (06/23/2022): Legal literature on machine learning (ML) tends to focus on harms, and a...
- You Don't Need Robust Machine Learning to Manage Adversarial Attack Risks (06/16/2023): The robustness of modern machine learning (ML) models has become an incr...
- Facial Recognition: A Cross-National Survey on Public Acceptance, Privacy, and Discrimination (07/15/2020): With rapid advances in machine learning (ML), more of this technology is...
