
Motivating the Rules of the Game for Adversarial Example Research

by Justin Gilmer et al.
Princeton University

Advances in machine learning have led to broad deployment of systems with impressive performance on important problems. Nonetheless, these systems can be induced to make errors on data that are surprisingly similar to examples the learned system handles correctly. The existence of these errors raises a variety of questions about out-of-sample generalization and whether bad actors might use such examples to abuse deployed systems. As a result of these security concerns, there has been a flurry of recent papers proposing algorithms to defend against such malicious perturbations of correctly handled examples. It is unclear how such misclassifications represent a different kind of security problem than other errors, or even other attacker-produced examples that have no specific relationship to an uncorrupted input. In this paper, we argue that adversarial example defense papers have, to date, mostly considered abstract, toy games that do not relate to any specific security concern. Furthermore, defense papers have not yet precisely described all the abilities and limitations of attackers that would be relevant in practical security. Towards this end, we establish a taxonomy of motivations, constraints, and abilities for more plausible adversaries. Finally, we provide a series of recommendations outlining a path forward for future work to more clearly articulate the threat model and perform more meaningful evaluation.
