Dual Student Networks for Data-Free Model Stealing

09/18/2023
by James Beetham, et al.

Existing data-free model stealing methods use a generator to produce samples that train a student model to match the target model's outputs. In this setting, the two main challenges are estimating gradients of the target model without access to its parameters, and generating a diverse set of training samples that thoroughly explores the input space. We propose a Dual Student method in which two students are trained symmetrically, providing the generator a criterion to produce samples on which the two students disagree. On one hand, disagreement on a sample implies that at least one student has classified it incorrectly relative to the target model, so this incentive implicitly encourages the generator to explore more diverse regions of the input space. On the other hand, our method uses the gradients of the student models to indirectly estimate the gradients of the target model. We show that this novel training objective for the generator network is equivalent to optimizing a lower bound on the loss the generator would incur if the target model's gradients were available. Our new optimization framework provides more accurate gradient estimation of the target model and better accuracies on benchmark classification datasets, while balancing improved query efficiency against training computation cost. Finally, we demonstrate that our method serves as a better proxy model for transfer-based adversarial attacks than existing data-free model stealing methods.
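To make the alternating objectives concrete, here is a minimal, hypothetical PyTorch sketch of a dual-student training loop, not the authors' implementation: the architectures, hyperparameters, and the use of an L1 disagreement measure and hard-label queries are illustrative assumptions. The generator is updated to maximize disagreement between the two students, while both students are updated to imitate the black-box target on the generated samples.

import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_DIM, NUM_CLASSES, IMG_DIM = 64, 10, 32 * 32  # assumed toy sizes

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(LATENT_DIM, 256), nn.ReLU(),
                                 nn.Linear(256, IMG_DIM), nn.Tanh())
    def forward(self, z):
        return self.net(z)

def make_student():
    return nn.Sequential(nn.Linear(IMG_DIM, 128), nn.ReLU(),
                         nn.Linear(128, NUM_CLASSES))

generator = Generator()
student_a, student_b = make_student(), make_student()
target = make_student()             # stand-in for the black-box victim model
for p in target.parameters():       # no gradients flow through the target
    p.requires_grad_(False)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_s = torch.optim.Adam(list(student_a.parameters()) +
                         list(student_b.parameters()), lr=1e-3)

for step in range(1000):
    # 1) Generator step: produce samples the two students disagree on.
    z = torch.randn(128, LATENT_DIM)
    x = generator(z)
    disagreement = F.l1_loss(F.softmax(student_a(x), dim=1),
                             F.softmax(student_b(x), dim=1))
    loss_g = -disagreement          # maximizing disagreement drives exploration
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

    # 2) Student step: both students imitate the target on generated samples.
    with torch.no_grad():
        x = generator(torch.randn(128, LATENT_DIM))
        labels = target(x).argmax(dim=1)   # hard labels from black-box queries
    loss_s = (F.cross_entropy(student_a(x), labels) +
              F.cross_entropy(student_b(x), labels))
    opt_s.zero_grad(); loss_s.backward(); opt_s.step()

Because the target is only queried in the student step, the generator's update relies solely on student gradients, which is how the method sidesteps direct gradient access to the target model.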


Related research

04/23/2022 · Towards Data-Free Model Stealing in a Hard Label Setting
Machine learning models deployed as a service (MLaaS) are susceptible to...

09/21/2022 · Momentum Adversarial Distillation: Handling Large Distribution Shifts in Data-Free Knowledge Distillation
Data-free Knowledge Distillation (DFKD) has attracted attention recently...

03/20/2020 · Data-Free Knowledge Amalgamation via Group-Stack Dual-GAN
Recent advances in deep learning have provided procedures for learning o...

04/02/2019 · Data-Free Learning of Student Networks
Learning portable neural networks is very essential for computer vision ...

02/28/2023 · Learning to Retain while Acquiring: Combating Distribution-Shift in Adversarial Data-Free Knowledge Distillation
Data-free Knowledge Distillation (DFKD) has gained popularity recently, ...

12/01/2019 · Adaptive Divergence for Rapid Adversarial Optimization
Adversarial Optimization (AO) provides a reliable, practical way to matc...

11/07/2020 · Robustness and Diversity Seeking Data-Free Knowledge Distillation
Knowledge distillation (KD) has enabled remarkable progress in model com...
