Prediction Poisoning: Utility-Constrained Defenses Against Model Stealing Attacks

06/26/2019
by   Tribhuvanesh Orekondy, et al.

With the advances of ML models in recent years, an increasing number of real-world commercial applications and services (e.g., autonomous vehicles, medical equipment, web APIs) are emerging. Recent advances in model functionality stealing attacks via black-box access (i.e., inputs in, predictions out) threaten the business model of such ML applications, which require significant time, money, and effort to develop. In this paper, we address this issue by studying defenses against model stealing attacks, largely motivated by the lack of effective defenses in the literature. We work towards the first defense that introduces targeted perturbations to the model predictions under a utility constraint, where the perturbations are aimed at manipulating the training procedure of the attacker. We evaluate our approach on multiple datasets and attack scenarios across a range of utility constraints. Our results show that it is indeed possible to trade off utility (e.g., deviation from the original prediction, test accuracy) to significantly reduce the effectiveness of model stealing attacks.
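The idea of perturbing predictions under a utility constraint can be illustrated with a minimal sketch. This is not the paper's actual algorithm; it simply shows one way to add noise to a returned probability vector while enforcing two assumed utility constraints: the top-1 class is preserved, and the L1 deviation from the original prediction stays below a budget `epsilon` (the function name and constraints are illustrative choices, not from the paper).

```python
import numpy as np

def perturb_predictions(probs, epsilon=0.5, rng=None):
    """Return a noisy version of `probs` (a probability vector) whose
    L1 distance to the original is at most `epsilon` and whose top-1
    class is unchanged. Illustrative only, not the paper's defense."""
    rng = np.random.default_rng() if rng is None else rng
    probs = np.asarray(probs, dtype=float)
    top1 = int(np.argmax(probs))

    # Sample a random point on the probability simplex to move towards.
    q = rng.dirichlet(np.ones_like(probs))
    dist = np.abs(q - probs).sum()
    # Step size chosen so the L1 deviation is at most epsilon.
    alpha = min(1.0, epsilon / dist) if dist > 0 else 0.0
    candidate = (1 - alpha) * probs + alpha * q

    # Shrink the perturbation until the top-1 class is preserved
    # (assumes the original argmax is unique).
    while int(np.argmax(candidate)) != top1:
        alpha *= 0.5
        if alpha < 1e-12:
            return probs.copy()
        candidate = (1 - alpha) * probs + alpha * q
    return candidate
```

An attacker training a surrogate model on such perturbed outputs receives degraded supervision signal, while a benign user who only reads the top-1 class is unaffected; the paper's actual defense chooses the perturbation direction adversarially rather than randomly.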
