Data-Free Model Extraction Attacks in the Context of Object Detection

08/09/2023
by Harshit Shah, et al.

Many machine learning models are vulnerable to model extraction attacks, which steal a model by issuing specially curated queries against it. When part of the training data or a surrogate dataset is available, this is readily accomplished by training a new model that mimics the target in a white-box setting. In practical situations, however, target models are trained on private datasets that are inaccessible to the adversary. Data-free model extraction sidesteps this problem by curating queries artificially with a generator similar to the one used in Generative Adversarial Nets. To the best of our knowledge, we propose the first adversarial black-box attack that extends model extraction to the regression problem of predicting bounding-box coordinates in object detection. Our study shows that defining a suitable loss function and using a novel generator setup are key to extracting the target model. We find that the proposed extraction method achieves significant results with a reasonable number of queries. The discovery of this vulnerability in object detection models should support future efforts to secure them.
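To make the attack setting concrete, the sketch below illustrates one possible data-free extraction loop for a detector: a generator synthesizes query images, the black-box teacher labels them, the student is trained to match the teacher's class scores and box coordinates, and the generator is trained to produce queries on which student and teacher disagree. This is a minimal sketch assuming PyTorch; the module names (`generator`, `student`, `teacher`), the loss weighting, and the simplification of treating teacher outputs as constants in the generator step (real data-free attacks such as MAZE estimate the teacher gradient with zeroth-order methods) are illustrative assumptions, not the paper's released implementation.

```python
# Hypothetical sketch of a data-free model extraction step for object
# detection. `teacher(images)` stands in for the black-box target and is
# assumed to return (class probabilities, bounding-box coordinates).
import torch
import torch.nn.functional as F

def extraction_step(generator, student, teacher, g_opt, s_opt,
                    z_dim=128, batch_size=32, box_weight=1.0):
    # --- Student step: mimic the teacher on synthetic queries ---
    z = torch.randn(batch_size, z_dim)
    images = generator(z).detach()      # queries curated by the generator
    t_cls, t_box = teacher(images)      # black-box responses (no gradients)
    s_cls, s_box = student(images)
    # Combined loss: KL divergence on class scores plus L1 regression
    # on bounding-box coordinates.
    s_loss = (F.kl_div(F.log_softmax(s_cls, dim=-1), t_cls,
                       reduction='batchmean')
              + box_weight * F.l1_loss(s_box, t_box))
    s_opt.zero_grad()
    s_loss.backward()
    s_opt.step()

    # --- Generator step: synthesize queries where the models disagree ---
    z = torch.randn(batch_size, z_dim)
    images = generator(z)
    with torch.no_grad():               # teacher outputs used as constants
        t_cls, t_box = teacher(images)
    s_cls, s_box = student(images)
    disagreement = (F.l1_loss(F.softmax(s_cls, dim=-1), t_cls)
                    + box_weight * F.l1_loss(s_box, t_box))
    g_loss = -disagreement              # maximize student-teacher disagreement
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return s_loss.item(), g_loss.item()
```

The combined objective reflects the abstract's point that, for object detection, the extraction loss must couple a divergence term on class scores with a regression term on box coordinates; the `box_weight` parameter (an assumption here) trades off the two.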

Related research

11/30/2020 · Data-Free Model Extraction
Current model extraction attacks assume that the adversary has access to...

09/26/2019 · GAMIN: An Adversarial Approach to Black-Box Model Inversion
Recent works have demonstrated that machine learning models are vulnerab...

10/31/2019 · Quantifying (Hyper) Parameter Leakage in Machine Learning
Black Box Machine Learning models leak information about the proprietary...

08/13/2023 · MDB: Interactively Querying Datasets and Models
As models are trained and deployed, developers need to be able to system...

10/02/2020 · Query complexity of adversarial attacks
Modern machine learning models are typically highly accurate but have be...

05/06/2020 · MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation
Model Stealing (MS) attacks allow an adversary with black-box access to ...

04/26/2021 · Good Artists Copy, Great Artists Steal: Model Extraction Attacks Against Image Translation Generative Adversarial Networks
Machine learning models are typically made available to potential client...
