A Research Agenda: Dynamic Models to Defend Against Correlated Attacks

03/14/2019
by Ian Goodfellow, et al.

In this article I describe a research agenda for securing machine learning models against adversarial inputs at test time. It does not present new results; instead, it shares my view of where the field needs to go.

Modern machine learning works very well on I.I.D. data: data for which each example is drawn independently and for which the distribution generating each example is identical. When these assumptions are relaxed, modern machine learning can perform very poorly. When machine learning is used in contexts where security is a concern, it is desirable to design models that perform well even when the input is crafted by a malicious adversary. So far, most research in this direction has focused on an adversary who violates the identical-distribution assumption by imposing some kind of restricted worst-case distribution shift.

I argue that machine learning security researchers should also address the problem of relaxing the independence assumption, and that current strategies designed for robustness to distribution shift will not solve it. As a potential solution path, I recommend dynamic models that change each time they are run, and I show an example of a simple attack using correlated data that a simple dynamic defense can mitigate. The example defense is not intended as a real-world security measure, but as a recommendation to explore this research direction and develop more realistic defenses.
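To make the contrast concrete, here is a minimal sketch (my own illustration, not code from the article; the article's actual attack and defense may differ) of what "a model that changes each time it is run" can buy against correlated queries. The synthetic data, the pool-of-perturbed-models defense, and all names here are assumptions made for illustration only.

```python
import numpy as np

# Illustration only: a static linear model answers every query with the same
# parameters, so an attacker can probe it with correlated (e.g. repeated)
# queries and get mutually consistent answers to exploit. A dynamic model
# re-samples its parameters from a pool on every call, so answers gathered
# across correlated queries stop agreeing with each other.

rng = np.random.default_rng(0)

# Trivial synthetic 2D task: label is the sign of x0 + x1.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

def fit_linear(X, y, noise_scale=0.0):
    # Least-squares fit of a linear decision function on +/-1 targets;
    # optional parameter noise builds a pool of distinct-but-similar models.
    w, *_ = np.linalg.lstsq(X, 2.0 * y - 1.0, rcond=None)
    return w + noise_scale * rng.normal(size=w.shape)

static_w = fit_linear(X, y)

# Dynamic defense (assumed construction): a pool of perturbed models, with a
# freshly sampled member answering each query.
pool = [fit_linear(X, y, noise_scale=0.3) for _ in range(16)]

def static_predict(x):
    return int(x @ static_w > 0)

def dynamic_predict(x):
    w = pool[rng.integers(len(pool))]
    return int(x @ w > 0)

# Correlated-query sketch: the attacker repeats the same near-boundary probe.
# The static model answers identically every time, so the probes compound into
# reliable knowledge of the boundary; the dynamic model's answers disagree
# across calls, so the correlated queries no longer reinforce each other.
probe = np.array([0.05, -0.05])
print("static :", [static_predict(probe) for _ in range(8)])
print("dynamic:", [dynamic_predict(probe) for _ in range(8)])
```

Pool resampling is just one way to make a model non-deterministic per query; the point of the sketch is only that answers to correlated queries stop being mutually consistent, which is the property the dynamic-model proposal targets.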
