DeepSteal: Advanced Model Extractions Leveraging Efficient Weight Stealing in Memories

11/08/2021
by Adnan Siraj Rakin, et al.

Recent advancements in Deep Neural Networks (DNNs) have led to their widespread deployment in multiple security-sensitive domains. The need for resource-intensive training and the use of valuable domain-specific training data have made these models a top intellectual property (IP) for model owners. One of the major threats to DNN privacy is the model extraction attack, in which adversaries attempt to steal sensitive information contained in DNN models. Recent studies show that hardware-based side-channel attacks can reveal internal knowledge about DNN models (e.g., model architectures); however, to date, existing attacks cannot extract detailed model parameters (e.g., weights/biases). In this work, for the first time, we propose an advanced model extraction attack framework, DeepSteal, that effectively steals DNN weights with the aid of a memory side-channel attack. Our proposed DeepSteal comprises two key stages. First, we develop a new weight bit information extraction method, called HammerLeak, which adopts the rowhammer-based hardware fault technique as the information leakage vector. HammerLeak leverages several novel system-level techniques tailored to DNN applications to enable fast and efficient weight stealing. Second, we propose a novel substitute model training algorithm with a Mean Clustering weight penalty, which effectively leverages the partially leaked bit information and generates a substitute prototype of the target victim model. We evaluate this substitute model extraction method on three popular image datasets (CIFAR-10/100/GTSRB) and four DNN architectures (ResNet-18/34, Wide-ResNet, and VGG-11). The extracted substitute model achieves more than 90% accuracy on the CIFAR-10 dataset. Moreover, the extracted substitute model can also generate effective adversarial input samples to fool the victim model.
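To make the second stage of the attack concrete, the sketch below shows how a substitute model might be trained under a mean-clustering-style weight penalty: each partially leaked weight (assumed here to be an 8-bit signed fixed-point value whose most-significant bits were recovered by HammerLeak) constrains the corresponding victim weight to an interval, and the substitute weight is pulled toward the midpoint of that interval during otherwise standard training. This is a minimal PyTorch sketch under those assumptions; the helper names, the fixed-point scale, and the exact penalty form are illustrative, not the authors' reference implementation.

```python
# Hypothetical sketch of substitute-model training with a "Mean Clustering"-style
# weight penalty. The 8-bit quantization, the fixed-point scale, and all helper
# names are illustrative assumptions, not the paper's reference implementation.
import torch
import torch.nn.functional as F


def range_from_leaked_msbs(leaked_msbs, num_leaked, total_bits=8, scale=1.0 / 128):
    """Map leaked most-significant bits of an (assumed) 8-bit two's-complement
    weight code to the [lo, hi] interval of float weights consistent with them."""
    span = 2 ** (total_bits - num_leaked)               # number of unknown low-order codes
    lo_code = leaked_msbs << (total_bits - num_leaked)  # smallest consistent code
    hi_code = lo_code + (span - 1)                      # largest consistent code
    # Two's-complement interpretation, then rescale to float weight values.
    lo = torch.where(lo_code >= 128, lo_code - 256, lo_code).float() * scale
    hi = torch.where(hi_code >= 128, hi_code - 256, hi_code).float() * scale
    return torch.minimum(lo, hi), torch.maximum(lo, hi)


def mean_clustering_penalty(model, bounds, coeff=1e-4):
    """Pull each substitute weight toward the midpoint (mean) of the interval
    that the partially leaked bits allow for the corresponding victim weight."""
    penalty = 0.0
    for name, param in model.named_parameters():
        if name in bounds:
            lo, hi = bounds[name]
            center = (lo + hi) / 2.0                    # mean of the feasible interval
            penalty = penalty + ((param - center) ** 2).sum()
    return coeff * penalty


def train_substitute(model, loader, bounds, victim=None, epochs=10, lr=0.01, lam=1.0):
    """Standard cross-entropy training plus the clustering penalty. If a queryable
    victim is available, its hard labels can replace the ground-truth labels."""
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for x, y in loader:
            if victim is not None:
                with torch.no_grad():
                    y = victim(x).argmax(dim=1)         # label inputs via victim queries
            loss = F.cross_entropy(model(x), y) + lam * mean_clustering_penalty(model, bounds)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```

Given such a substitute, the transfer-attack evaluation mentioned in the abstract amounts to crafting adversarial examples (e.g., FGSM or PGD) against the substitute and measuring how often they also fool the victim model.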

Related research

DeepHammer: Depleting the Intelligence of Deep Neural Networks through Targeted Chain of Bit Flips (03/30/2020)
Security of machine learning is increasingly becoming a major concern du...

Side-channel attack analysis on in-memory computing architectures (09/06/2022)
In-memory computing (IMC) systems have great potential for accelerating ...

Hermes Attack: Steal DNN Models with Lossless Inference Accuracy (06/23/2020)
Deep Neural Networks (DNNs) models become one of the most valuable enter...

Demystifying Arch-hints for Model Extraction: An Attack in Unified Memory System (08/29/2022)
The deep neural network (DNN) models are deemed confidential due to thei...

FDINet: Protecting against DNN Model Extraction via Feature Distortion Index (06/20/2023)
Machine Learning as a Service (MLaaS) platforms have gained popularity d...

EZClone: Improving DNN Model Extraction Attack via Shape Distillation from GPU Execution Profiles (04/06/2023)
Deep Neural Networks (DNNs) have become ubiquitous due to their performa...

A Practical Introduction to Side-Channel Extraction of Deep Neural Network Parameters (11/10/2022)
Model extraction is a major threat for embedded deep neural network mode...
