Extraction of Complex DNN Models: Real Threat or Boogeyman?

10/11/2019
by Buse Gul Atli, et al.

Recently, machine learning (ML) has introduced advanced solutions to many domains. Since ML models provide a business advantage to model owners, protecting the intellectual property (IP) of ML models has emerged as an important consideration. The confidentiality of ML models can be protected by exposing them to clients only via prediction APIs. However, model extraction attacks can steal the functionality of ML models using the information leaked to clients through the results returned via the API. In this work, we question whether model extraction is a serious threat to complex, real-life ML models. We evaluate the current state-of-the-art model extraction attack (the Knockoff attack) against complex models. We reproduce and confirm the results reported in the Knockoff attack paper, but we also show that the performance of this attack can be limited by several factors, including the ML model architecture and the granularity of the API response. Furthermore, we introduce a defense based on distinguishing queries used for the Knockoff attack from benign queries. Despite the limitations of the Knockoff attack, we show that a more realistic adversary can effectively steal complex ML models and evade known defenses.
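To make the attack surface concrete, the sketch below shows a minimal Knockoff-style extraction loop: the adversary queries the victim's prediction API with unlabeled "transfer set" inputs and trains a surrogate model on the soft labels the API returns. This is an illustrative PyTorch sketch, not the authors' implementation; `victim_api`, `surrogate`, and `transfer_loader` are hypothetical names, and we assume the API returns full softmax probability vectors.

```python
# Minimal sketch of a Knockoff-style extraction loop (illustrative only).
# Assumptions (hypothetical, not from the paper's code):
#   victim_api(x)    -> softmax probabilities returned by the prediction API
#   surrogate        -> an untrained torch.nn.Module chosen by the adversary
#   transfer_loader  -> batches of unlabeled images the adversary controls
import torch
import torch.nn.functional as F

def extract(victim_api, surrogate, transfer_loader, epochs=10, lr=1e-3):
    opt = torch.optim.Adam(surrogate.parameters(), lr=lr)
    for _ in range(epochs):
        for x in transfer_loader:
            with torch.no_grad():
                y_victim = victim_api(x)  # soft labels leaked via the API
            # Train the surrogate to match the victim's output distribution.
            log_p = F.log_softmax(surrogate(x), dim=1)
            loss = F.kl_div(log_p, y_victim, reduction="batchmean")
            opt.zero_grad()
            loss.backward()
            opt.step()
    return surrogate
```

If the API truncates its responses, for example returning only a top-1 label, the training target collapses to a one-hot vector, which illustrates why the granularity of the API response limits attack performance, as noted in the abstract.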


research · 06/03/2019
DAWN: Dynamic Adversarial Watermarking of Neural Networks
Training machine learning (ML) models is expensive in terms of computati...

research · 05/07/2018
PRADA: Protecting against DNN Model Stealing Attacks
As machine learning (ML) applications become increasingly prevalent, pro...

research · 10/21/2019
Crypto Mining Makes Noise
A new cybersecurity attack (cryptojacking) is emerging, in both the lite...

research · 09/19/2023
Model Leeching: An Extraction Attack Targeting LLMs
Model Leeching is a novel extraction attack targeting Large Language Mod...

research · 02/21/2022
ICSML: Industrial Control Systems Machine Learning inference framework natively executing on IEC 61131-3 languages
Industrial Control Systems (ICS) have played a catalytic role in enablin...

research · 03/13/2022
Generating Practical Adversarial Network Traffic Flows Using NIDSGAN
Network intrusion detection systems (NIDS) are an essential defense for ...

research · 05/31/2022
Dropbear: Machine Learning Marketplaces made Trustworthy with Byzantine Model Agreement
Marketplaces for machine learning (ML) models are emerging as a way for ...
