A Tale of Two Models: Constructing Evasive Attacks on Edge Models

04/22/2022
by Wei Hao, et al.

Full-precision deep learning models are typically too large or costly to deploy on edge devices. To accommodate the limited hardware resources, models are adapted to the edge using various edge-adaptation techniques, such as quantization and pruning. While such techniques may have a negligible impact on top-line accuracy, the adapted models exhibit subtle differences in output compared to the original models from which they are derived. In this paper, we introduce a new evasive attack, DIVA, that exploits these differences in edge adaptation by adding adversarial noise to input data that maximizes the output difference between the original and adapted models. Such an attack is particularly dangerous, because the malicious input will trick the adapted model running on the edge but will be virtually undetectable by the original model, which typically serves as the authoritative model version used for validation, debugging, and retraining. We compare DIVA to a state-of-the-art attack, PGD, and show that DIVA is only 1.7-3.6% less successful at attacking the adapted model, but 1.9-4.2 times more likely to go undetected by the original model, under whitebox and semi-blackbox settings.
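To make the idea concrete, here is a minimal PGD-style sketch of such an attack in PyTorch. The joint objective (raise the adapted model's loss on the true label while suppressing the original model's loss, balanced by a constant c), the hyperparameter values, and the names `orig_model` and `adapted_model` are all illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def diva_style_attack(orig_model, adapted_model, x, y,
                      eps=8 / 255, alpha=2 / 255, steps=50, c=1.0):
    """PGD-style search for a perturbation that fools the adapted (edge)
    model while staying benign to the original full-precision model.

    All hyperparameters (eps, alpha, steps, c) are illustrative defaults,
    not values taken from the paper.
    """
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Push the adapted model toward misclassifying the true label y ...
        loss_adapted = F.cross_entropy(adapted_model(x_adv), y)
        # ... while keeping the original model's prediction intact, so that
        # server-side validation against the original model misses the input.
        loss_orig = F.cross_entropy(orig_model(x_adv), y)
        objective = loss_adapted - c * loss_orig
        grad, = torch.autograd.grad(objective, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()            # gradient ascent
            x_adv = torch.clamp(x_adv, x - eps, x + eps)   # stay in L-inf ball
            x_adv = torch.clamp(x_adv, 0.0, 1.0)           # valid pixel range
    return x_adv.detach()
```

The subtracted `c * loss_orig` term is what distinguishes this from plain PGD: instead of merely degrading one model, the ascent direction is steered into the region where the two models disagree, which is exactly where the adapted model fails but the original still classifies correctly.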


