A Framework for Understanding Model Extraction Attack and Defense

06/23/2022
by Xun Xian, et al.

The privacy of machine learning models has become a significant concern in many emerging Machine-Learning-as-a-Service applications, where prediction services based on well-trained models are offered to users via pay-per-query. The lack of a defense mechanism can impose a high risk on the privacy of the server's model, since an adversary could efficiently steal the model by querying only a few "good" data points. The interplay between a server's defense and an adversary's attack inevitably leads to an arms-race dilemma, as commonly seen in Adversarial Machine Learning. To study the fundamental tradeoffs between model utility from a benign user's view and privacy from an adversary's view, we develop new metrics to quantify such tradeoffs, analyze their theoretical properties, and develop an optimization problem to understand the optimal adversarial attack and defense strategies. The developed concepts and theory match the empirical findings on the "equilibrium" between privacy and utility. In terms of optimization, the key ingredient that enables our results is a unified representation of the attack-defense problem as a min-max bi-level problem. The developed results will be demonstrated by examples and experiments.
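The utility-privacy tension described above can be illustrated with a minimal toy sketch (not the paper's actual formulation or metrics): a server protects a private linear model by adding noise to its pay-per-query responses, while an adversary fits a surrogate by least squares on the noisy answers. All names here (`server_respond`, `extract`, the noise-based defense) are hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# The server's private model: a linear map f(x) = w_true . x (toy stand-in).
d = 5
w_true = rng.normal(size=d)

def server_respond(X, noise_std):
    """Pay-per-query prediction service; the defense is additive noise."""
    return X @ w_true + rng.normal(scale=noise_std, size=len(X))

def extract(n_queries, noise_std):
    """Model extraction attack: fit a surrogate by least squares
    on the (possibly perturbed) query responses."""
    X = rng.normal(size=(n_queries, d))
    y = server_respond(X, noise_std)
    w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w_hat

# Sweep the defense strength: stronger noise hurts the adversary's
# extraction accuracy but also degrades answers for benign users.
for noise_std in [0.0, 0.5, 2.0]:
    w_hat = extract(200, noise_std)
    extraction_error = np.linalg.norm(w_hat - w_true)  # higher = more privacy
    print(f"noise={noise_std:.1f}  surrogate error={extraction_error:.4f}")
```

With zero noise the surrogate recovers the model essentially exactly, and the error grows with the defense strength, which is the arms-race tradeoff the paper formalizes as a min-max bi-level problem.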

