First to Possess His Statistics: Data-Free Model Extraction Attack on Tabular Data

09/30/2021
by Masataka Tasumi, et al.

Model extraction attacks are a class of attacks in which an adversary obtains, through queries and their results, a machine learning model whose performance is comparable to that of the victim model. This paper presents a novel model extraction attack, named TEMPEST, applicable to tabular data under a practical data-free setting. Although model extraction is more challenging on tabular data due to normalization, TEMPEST no longer needs the initial samples that previous attacks require; instead, it leverages publicly available statistics to generate query samples. Experiments show that our attack achieves the same level of performance as previous attacks. Moreover, we identify that using mean and variance as the statistics for query generation, and applying the same normalization process as the victim model, improves the attack's performance. We also discuss the possibility of executing TEMPEST in the real world through an experiment with a medical diagnosis dataset. We plan to release the source code for reproducibility and as a reference for subsequent works.
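The core idea of generating query samples from public statistics can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes per-feature Gaussian sampling from published means and variances, followed by the same z-score normalization the victim model is assumed to use. The feature names and statistic values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical publicly available per-feature statistics, e.g. from a
# published summary table of a medical dataset (illustrative values only).
means = np.array([54.0, 130.0, 246.0])      # age, blood pressure, cholesterol
variances = np.array([81.0, 310.0, 2700.0])

def generate_queries(n, means, variances, rng):
    """Draw n query samples, each feature from N(mean, variance)."""
    return rng.normal(loc=means, scale=np.sqrt(variances),
                      size=(n, len(means)))

def normalize(x, means, variances):
    """Apply the z-score normalization assumed for the victim model."""
    return (x - means) / np.sqrt(variances)

queries = generate_queries(1000, means, variances, rng)
inputs = normalize(queries, means, variances)
# `inputs` would then be sent to the victim model's black-box prediction
# API, and the resulting (input, output) pairs used to train a surrogate.
```

Because the queries are synthesized purely from published statistics, no initial samples from the victim's training distribution are needed, which is what makes the setting data-free.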


Related research

07/19/2021 · MEGEX: Data-Free Model Extraction Attack against Gradient-Based Explainable AI
The advance of explainable artificial intelligence, which provides reaso...

02/16/2023 · Marich: A Query-efficient Distributionally Equivalent Model Extraction Attack using Public Data
We study black-box model stealing attacks where the attacker can query a...

06/08/2019 · Making targeted black-box evasion attacks effective and efficient
We investigate how an adversary can optimally use its query budget for t...

11/30/2020 · Data-Free Model Extraction
Current model extraction attacks assume that the adversary has access to...

05/06/2020 · MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation
Model Stealing (MS) attacks allow an adversary with black-box access to ...

02/01/2020 · Model Extraction Attacks against Recurrent Neural Networks
Model extraction attacks are a kind of attacks in which an adversary obt...

11/06/2017 · Adversarial Frontier Stitching for Remote Neural Network Watermarking
The state of the art performance of deep learning models comes at a high...
