DeepFreeze: Cold Boot Attacks and High Fidelity Model Recovery on Commercial EdgeML Device

08/03/2021
by Yoo-Seung Won, et al.

EdgeML accelerators like the Intel Neural Compute Stick 2 (NCS) enable efficient edge-based inference with complex pre-trained models. The models are loaded on the host (e.g., a Raspberry Pi) and then transferred to the NCS for inference. In this paper, we demonstrate practical and low-cost cold boot based model recovery attacks on the NCS to recover the model architecture and weights loaded from the Raspberry Pi. The architecture is recovered with 100% success and the weights with an error rate of 0.04%, resulting in an accuracy loss of only 0.5% and allowing transfer of adversarial examples. We further extend our study to other cold boot attack setups reported in the literature with higher error rates, leading to accuracy loss as high as 70%. To correct the erroneous weights in the recovered model, we propose a knowledge distillation based approach that works even without access to the original training data. The proposed attack remains unaffected by the model encryption features of the OpenVINO and NCS framework.
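The weight-correction step rests on standard knowledge distillation: the recovered (error-laden) model is fine-tuned to match the soft outputs of a teacher, using only unlabeled surrogate inputs rather than the original training data. The sketch below is a minimal PyTorch illustration of that idea, not the paper's exact procedure; the names `teacher`, `student`, and `loader`, the choice of teacher (a queryable copy of the original model), the temperature, and the optimizer settings are all assumptions for illustration.

```python
# Minimal knowledge-distillation sketch (PyTorch). Assumptions:
#  - `teacher`: a queryable copy of the original model (soft outputs only)
#  - `student`: the recovered model with erroneous weights, same architecture
#  - `loader`: a DataLoader yielding unlabeled surrogate inputs
# All names and hyperparameters are illustrative, not from the paper.
import torch
import torch.nn.functional as F

def distill(student, teacher, loader, epochs=5, lr=1e-4, T=4.0, device="cpu"):
    """Fine-tune the erroneous student to match the teacher's softened outputs."""
    student.to(device).train()
    teacher.to(device).eval()
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    for _ in range(epochs):
        for x in loader:                      # unlabeled inputs only, no ground-truth labels
            x = x.to(device)
            with torch.no_grad():
                t_logits = teacher(x)         # soft targets from the teacher
            s_logits = student(x)
            # KL divergence between temperature-softened distributions,
            # scaled by T^2 as is conventional for distillation losses
            loss = F.kl_div(
                F.log_softmax(s_logits / T, dim=1),
                F.softmax(t_logits / T, dim=1),
                reduction="batchmean",
            ) * (T * T)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return student
```

Because the loss depends only on the teacher's output distribution, any unlabeled input stream (random or publicly available images) can drive the correction, which is what makes the approach viable without the original training set.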


