I Know What You Trained Last Summer: A Survey on Stealing Machine Learning Models and Defences

06/16/2022
by Daryna Oliynyk, et al.

Machine Learning-as-a-Service (MLaaS) has become a widespread paradigm, making even the most complex machine learning models available to clients via, e.g., a pay-per-query principle. This allows users to avoid the time-consuming processes of data collection, hyperparameter tuning, and model training. However, by giving their customers access to (the predictions of) their models, MLaaS providers endanger their intellectual property, such as sensitive training data, optimised hyperparameters, or learned model parameters. Adversaries can create a copy of the model with (almost) identical behaviour using the prediction labels alone. While many variants of this attack have been described, only scattered defence strategies have been proposed, each addressing isolated threats. This raises the need for a thorough systematisation of the field of model stealing, to arrive at a comprehensive understanding of why these attacks succeed and how they could be holistically defended against. We address this by categorising and comparing model stealing attacks, assessing their performance, and exploring corresponding defence techniques in different settings. We propose a taxonomy for attack and defence approaches, and provide guidelines on how to select the right attack or defence strategy based on the goal and available resources. Finally, we analyse which defences are rendered less effective by current attack strategies.
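To make the attack the abstract describes more concrete, the following is a minimal sketch of a label-only model-stealing (extraction) attack. The victim model, the random query distribution, and the surrogate architecture are all illustrative assumptions for this sketch, not details taken from the survey.

```python
# Minimal sketch of a label-only model extraction attack.
# The victim model, query distribution, and surrogate architecture
# below are illustrative assumptions, not from the surveyed paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# The MLaaS provider's "secret" victim model, trained on private data.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_priv, X_test, y_priv, _ = train_test_split(X, y, random_state=0)
victim = RandomForestClassifier(n_estimators=100, random_state=0)
victim.fit(X_priv, y_priv)

# The adversary sees neither training data nor parameters, only a
# query API returning predicted labels (the pay-per-query setting).
def query_api(inputs):
    return victim.predict(inputs)  # hard labels only

# Step 1: synthesise query inputs (here: random points in feature space).
rng = np.random.default_rng(42)
X_query = rng.normal(size=(2000, X.shape[1]))

# Step 2: label the queries via the API and train a surrogate copy.
stolen_labels = query_api(X_query)
surrogate = LogisticRegression(max_iter=1000).fit(X_query, stolen_labels)

# Step 3: measure fidelity, i.e. how often the copy agrees with the victim.
agreement = np.mean(surrogate.predict(X_test) == victim.predict(X_test))
print(f"Surrogate agrees with victim on {agreement:.1%} of test inputs")
```

Even with hard labels and naive random queries, a surrogate can recover a meaningful part of the victim's decision boundary; the attack variants covered by the survey improve on this baseline, e.g. with more informative query selection.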


Related research

12/15/2022
Holistic risk assessment of inference attacks in machine learning
As machine learning expands its applications, there are more and more unign...

09/21/2020
ES Attack: Model Stealing against Deep Neural Networks without Data Hurdles
Deep neural networks (DNNs) have become the essential components for var...

07/04/2023
Analyzing the vulnerabilities in SplitFed Learning: Assessing the robustness against Data Poisoning Attacks
Distributed Collaborative Machine Learning (DCML) is a potential alterna...

05/18/2022
Property Unlearning: A Defense Strategy Against Property Inference Attacks
During the training of machine learning models, they may store or "learn...

04/28/2023
Earning Extra Performance from Restrictive Feedbacks
Many machine learning applications encounter a situation where model pro...

04/28/2023
Deep Intellectual Property: A Survey
With the widespread application in industrial manufacturing and commerci...

05/23/2021
Regularization Can Help Mitigate Poisoning Attacks... with the Right Hyperparameters
Machine learning algorithms are vulnerable to poisoning attacks, where a...
