Towards Data-Free Model Stealing in a Hard Label Setting

by Sunandini Sanyal, et al.

Machine learning models deployed as a service (MLaaS) are susceptible to model stealing attacks, where an adversary attempts to replicate the model within a restricted-access framework. While existing attacks demonstrate near-perfect clone-model performance using the softmax predictions of the classification network, most deployed APIs expose only the top-1 label. In this work, we show that it is indeed possible to steal machine learning models with access only to top-1 predictions (Hard Label setting), without access to model gradients (Black-Box setting) or even to the training dataset (Data-Free setting), all within a low query budget. We propose a novel GAN-based framework that trains the student and generator in tandem to steal the model effectively, overcoming the challenge of the hard-label setting by utilizing the gradients of the clone network as a proxy for the victim's gradients. We further reduce the large query cost associated with a typical Data-Free setting by utilizing publicly available (potentially unrelated) datasets as a weak image prior. We additionally show that, even in the absence of such data, it is possible to achieve state-of-the-art results within a low query budget using synthetically crafted samples. We are also the first to demonstrate the scalability of model stealing in a restricted-access setting on a 100-class dataset.
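To make the hard-label stealing loop concrete, here is a minimal, hypothetical NumPy sketch (not the paper's actual implementation): the victim is a fixed linear classifier queried only for top-1 labels, the clone (student) is trained with cross-entropy on those hard labels, and synthetic query samples play the role of the generator, updated by ascending the student's loss gradient — the clone's gradient standing in as a proxy for the inaccessible victim gradient. All model sizes, learning rates, and sample counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D, C = 5, 3  # input dimension, number of classes (illustrative)

# "Victim" model: fixed linear classifier exposing only top-1 labels.
W_victim = rng.normal(size=(D, C))

def victim_top1(x):
    # Hard-label API: no softmax scores, no gradients.
    return np.argmax(x @ W_victim, axis=1)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Clone (student) model, trained from scratch.
W_student = np.zeros((D, C))

# Stand-in for the generator: a pool of synthetic query samples.
X = rng.normal(size=(64, D))
lr_student, lr_gen = 0.5, 0.1

for step in range(200):
    y = victim_top1(X)                 # query victim for hard labels
    P = softmax(X @ W_student)
    onehot = np.eye(C)[y]
    # Student step: minimize cross-entropy w.r.t. the victim's hard labels.
    grad_W = X.T @ (P - onehot) / len(X)
    W_student -= lr_student * grad_W
    # "Generator" step: move samples up the *student's* loss gradient,
    # using the clone's gradient as a proxy for the victim's.
    grad_X = (P - onehot) @ W_student.T
    X += lr_gen * grad_X

# Evaluate how often the clone agrees with the victim on fresh inputs.
X_test = rng.normal(size=(1000, D))
agreement = np.mean(victim_top1(X_test)
                    == np.argmax(X_test @ W_student, axis=1))
print(f"clone/victim top-1 agreement: {agreement:.2f}")
```

The key design point mirrored from the abstract is the generator step: since the victim exposes neither probabilities nor gradients, the exploration signal must come from the clone itself.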


