EZClone: Improving DNN Model Extraction Attack via Shape Distillation from GPU Execution Profiles

04/06/2023
by Jonah O'Brien Weiss, et al.

Deep Neural Networks (DNNs) have become ubiquitous due to their performance on prediction and classification problems. However, they face a variety of threats as their usage spreads. Model extraction attacks, which steal DNNs, endanger intellectual property, data privacy, and security. Previous research has shown that system-level side-channels can be used to leak the architecture of a victim DNN, exacerbating these risks. We propose two DNN architecture extraction techniques catering to different threat models. The first technique uses a malicious, dynamically linked version of PyTorch to expose a victim DNN's architecture through the PyTorch profiler. The second, called EZClone, exploits aggregate (rather than time-series) GPU profiles as a side-channel to predict DNN architecture, using a simple approach and assuming far fewer adversarial capabilities than previous work. We investigate the effectiveness of EZClone when the complexity of the attack is minimized, when it is applied to pruned models, and when it is transferred across GPUs. We find that EZClone correctly predicts DNN architectures for the entire set of PyTorch vision architectures with 100% accuracy. No other work has demonstrated this degree of architecture-prediction accuracy under the same adversarial constraints or using aggregate side-channel information. Prior work has shown that, once a DNN has been successfully cloned, further attacks such as model evasion or model inversion can be accelerated significantly.
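To illustrate the first technique, the sketch below shows how the standard PyTorch profiler, once it can be attached to a victim's inference (for instance, enabled transparently by a maliciously linked PyTorch build), exposes the operator sequence and input shapes that characterize the architecture. The victim model and input size here are stand-ins, not the paper's exact setup.

import torch
import torchvision.models as models
from torch.profiler import profile, ProfilerActivity

# Stand-in victim model; in the attack described above, profiling would be
# enabled by the malicious PyTorch build rather than by the victim's own code.
victim = models.resnet18(weights=None).eval()
x = torch.randn(1, 3, 224, 224)

with profile(activities=[ProfilerActivity.CPU], record_shapes=True) as prof:
    with torch.no_grad():
        victim(x)

# Grouping by input shape lists each operator (aten::conv2d, aten::add_, ...)
# together with its tensor dimensions, which pins down the layer structure.
print(prof.key_averages(group_by_input_shape=True).table(
    sort_by="cpu_time_total", row_limit=15))

For the second technique, a minimal sketch of the aggregate-profile idea: fixed-length feature vectors summarizing a run's GPU kernel statistics (e.g., per-kernel invocation counts and total times from a profiler's summary mode) are fed to a simple classifier that predicts the architecture. The random placeholder features, the feature dimension, and the random-forest classifier are illustrative assumptions, not the paper's exact pipeline.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Placeholder data: in a real attack, each row would be the aggregate GPU
# profile of one inference run (kernel counts, total/average kernel times,
# etc.) and each label one of the candidate architectures.
rng = np.random.default_rng(0)
X = rng.random((400, 32))          # hypothetical 32 aggregate profile features
y = rng.integers(0, 8, size=400)   # hypothetical labels for 8 architectures

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))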


Related research

03/12/2023
DNN-Alias: Deep Neural Network Protection Against Side-Channel Attacks via Layer Balancing
Extracting the architecture of layers of a given deep neural network (DN...

06/01/2022
NeuroUnlock: Unlocking the Architecture of Obfuscated Deep Neural Networks
The advancements of deep neural networks (DNNs) have led to their deploy...

08/14/2018
Cache Telepathy: Leveraging Shared Resource Attacks to Learn DNN Architectures
Deep Neural Networks (DNNs) are fast becoming ubiquitous for their abili...

07/21/2022
Careful What You Wish For: on the Extraction of Adversarially Trained Models
Recent attacks on Machine Learning (ML) models such as evasion attacks w...

11/08/2021
DeepSteal: Advanced Model Extractions Leveraging Efficient Weight Stealing in Memories
Recent advancements of Deep Neural Networks (DNNs) have seen widespread ...

09/21/2023
DeepTheft: Stealing DNN Model Architectures through Power Side Channel
Deep Neural Network (DNN) models are often deployed in resource-sharing ...

06/23/2020
Hermes Attack: Steal DNN Models with Lossless Inference Accuracy
Deep Neural Network (DNN) models have become one of the most valuable enter...
