How to Steer Your Adversary: Targeted and Efficient Model Stealing Defenses with Gradient Redirection

06/28/2022
by   Mantas Mazeika, et al.

Model stealing attacks present a dilemma for public machine learning APIs. To protect financial investments, companies may be forced to withhold important information about their models that could facilitate theft, including uncertainty estimates and prediction explanations. This compromise is harmful not only to users but also to external transparency. Model stealing defenses seek to resolve this dilemma by making models harder to steal while preserving utility for benign users. However, existing defenses perform poorly in practice, either requiring enormous computational overhead or imposing severe utility trade-offs. To meet these challenges, we present a new approach to model stealing defenses called gradient redirection. At the core of our approach is a provably optimal, efficient algorithm for steering an adversary's training updates in a targeted manner. Combined with improvements to surrogate networks and a novel coordinated defense strategy, our gradient redirection defense, called GRAD^2, achieves small utility trade-offs and low computational overhead, outperforming the best prior defenses. Moreover, we demonstrate how gradient redirection enables reprogramming the adversary with arbitrary behavior, which we hope will foster work on new avenues of defense.
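The abstract describes gradient redirection only at a high level. The sketch below is a simplified illustration of the underlying idea, not the paper's provably optimal algorithm: it assumes a hypothetical PyTorch `surrogate` network standing in for the defender's estimate of the adversary's stolen copy, an illustrative perturbation budget `eps`, and a target gradient direction `g_target` (all names are assumptions, not the paper's API). The point it shows is that the adversary's cross-entropy gradient is linear in the posterior returned by the API, so shifting probability mass between classes shifts the adversary's training update in a predictable direction.

```python
import torch
import torch.nn.functional as F


def per_class_gradients(surrogate, x):
    """Gradients of -log p_k(x) w.r.t. the surrogate's parameters, one flat row per class."""
    log_probs = F.log_softmax(surrogate(x), dim=-1).squeeze(0)
    rows = []
    for k in range(log_probs.shape[0]):
        grads = torch.autograd.grad(-log_probs[k], list(surrogate.parameters()),
                                    retain_graph=True)
        rows.append(torch.cat([g.flatten() for g in grads]))
    return torch.stack(rows)  # shape: (num_classes, num_params)


def redirect_posterior(y_true, J, g_target, eps=0.2):
    """Shift up to eps of probability mass (in L1 distance) from the class whose
    gradient is least aligned with g_target to the class that is most aligned.
    Because the adversary's cross-entropy gradient is linear in the returned
    posterior, this nudges their next training update toward g_target."""
    scores = J @ g_target                 # per-class alignment with the target direction
    y = y_true.clone()
    k_best, k_worst = scores.argmax(), scores.argmin()
    delta = min(eps / 2, y[k_worst].item())
    y[k_worst] -= delta
    y[k_best] += delta
    return y                              # still a valid probability vector
```

In this toy setting the defender would return `redirect_posterior(y_true, per_class_gradients(surrogate, x), g_target)` in place of the true posterior. The paper's GRAD^2 defense instead solves the redirection problem optimally over the full perturbation budget and combines it with improved surrogate networks and a coordinated defense strategy, which this sketch omits.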


Related research:

- 08/02/2023: Isolation and Induction: Training Robust Deep Neural Networks against Model Stealing Attacks
  Despite the broad application of Machine Learning models as a Service (M...
- 12/16/2020: TROJANZOO: Everything you ever wanted to know about neural backdoors (but were afraid to ask)
  Neural backdoors represent one primary threat to the security of deep le...
- 05/28/2019: An Investigation of Data Poisoning Defenses for Online Learning
  We consider data poisoning attacks, where an adversary can modify a smal...
- 06/26/2019: Prediction Poisoning: Utility-Constrained Defenses Against Model Stealing Attacks
  With the advances of ML models in recent years, we are seeing an increas...
- 10/12/2020: Bankrupting Sybil Despite Churn
  A Sybil attack occurs when an adversary pretends to be multiple identiti...
- 06/06/2023: Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations
  Federated Learning (FL) trains machine learning models on data distribut...
- 03/03/2019: Decision-Focused Learning of Adversary Behavior in Security Games
  Stackelberg security games are a critical tool for maximizing the utilit...
