Model Extraction and Active Learning

11/05/2018
by Varun Chandrasekaran, et al.

Machine learning is being increasingly used by individuals, research institutions, and corporations. This has resulted in a surge of Machine Learning-as-a-Service (MLaaS) - cloud services that provide (a) tools and resources to learn the model, and (b) a user-friendly query interface to access the model. However, such MLaaS systems raise privacy concerns, one being model extraction: adversaries maliciously exploit the query interface to steal the model. More precisely, in a model extraction attack, a dishonest user extracts (i.e., learns) a good approximation of a sensitive or proprietary model held by the server, while observing only the answers to the select queries it sends through the query interface. This attack was recently introduced by Tramer et al. at the 2016 USENIX Security Symposium, where practical attacks against several classes of models were demonstrated. We believe that a better understanding of the efficacy of model extraction attacks is paramount to designing better privacy-preserving MLaaS systems. To that end, we take the first step by (a) formalizing model extraction and proposing the first definition of an extraction defense, and (b) drawing parallels between model extraction and the better-investigated active learning framework. In particular, we show that recent advancements in the active learning domain can be used to implement both model extraction and defenses against such attacks.
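To make the threat model concrete, the following is a minimal, self-contained sketch of the simplest extraction attack from the setting Tramer et al. study: a linear model served behind a query interface that returns real-valued scores. The server-side weights, the `query` function, and the helper `extract_linear` are all hypothetical stand-ins for illustration; with exact score outputs, a d-dimensional linear model can be recovered with d + 1 queries.

```python
# Hypothetical "server-side" model held by the MLaaS provider.
SECRET_W = [2.0, -3.0, 0.5]   # hidden weights (not visible to the attacker)
SECRET_B = 1.25               # hidden bias   (not visible to the attacker)

def query(x):
    """Stand-in for the MLaaS query interface: returns the model's score."""
    return sum(w * xi for w, xi in zip(SECRET_W, x)) + SECRET_B

def extract_linear(d):
    """Recover (w, b) of a d-dimensional linear model using d + 1 queries.

    Querying the origin reveals the bias: f(0) = b.
    Querying each basis vector e_i reveals a weight: f(e_i) - b = w_i.
    """
    b = query([0.0] * d)
    w = []
    for i in range(d):
        e = [0.0] * d
        e[i] = 1.0
        w.append(query(e) - b)
    return w, b

w_hat, b_hat = extract_linear(3)
print(w_hat, b_hat)
```

Real MLaaS APIs return rounded or class-label outputs rather than exact scores, which is why the paper turns to active learning: query-selection strategies let the attacker approximate the model from far fewer, carefully chosen queries.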

