Publishing Efficient On-device Models Increases Adversarial Vulnerability

12/28/2022
by Sanghyun Hong, et al.

Recent increases in the computational demands of deep neural networks (DNNs) have sparked interest in efficient deep learning mechanisms, e.g., quantization or pruning. These mechanisms enable the construction of small, efficient versions of commercial-scale models with comparable accuracy, accelerating their deployment on resource-constrained devices. In this paper, we study the security implications of publishing on-device variants of large-scale models. We first show that an adversary can exploit on-device models to make attacking the large models easier. In evaluations across 19 DNNs, exploiting the published on-device models as a transfer prior increases the adversarial vulnerability of the original commercial-scale models by up to 100x. We then show that the vulnerability increases as the similarity between a full-scale model and its efficient counterpart increases. Based on these insights, we propose a defense, similarity-unpairing, that fine-tunes on-device models with the objective of reducing this similarity. We evaluate our defense on all 19 DNNs and find that it reduces the transferability by up to 90%. Our results suggest that further research is needed on the security (or even privacy) threats caused by publishing those efficient siblings.
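The threat model described in the abstract can be sketched in a few lines of PyTorch. The snippet below is a hypothetical illustration, not the paper's code: adversarial examples are crafted with standard L-infinity PGD against the published on-device model (the surrogate) and then tested against the original full-scale model to estimate how often they transfer. The names on_device_model, full_scale_model, images, and labels are placeholders.

```python
# Hypothetical sketch of the transfer-attack setup: craft adversarial examples
# against a published on-device model, then measure how often they also fool
# the original full-scale model. Model and data names are illustrative.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """L-infinity PGD against the surrogate (on-device) model."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project back into the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()

@torch.no_grad()
def transfer_rate(full_model, x_adv, y):
    """Fraction of surrogate-crafted examples that also fool the full model."""
    preds = full_model(x_adv).argmax(dim=1)
    return (preds != y).float().mean().item()

# Usage (hypothetical models and data):
# x_adv = pgd_attack(on_device_model, images, labels)
# print(transfer_rate(full_scale_model, x_adv, labels))
```

The similarity-unpairing defense fine-tunes the on-device model so that it stays accurate while becoming less similar to the full-scale model. One plausible formulation of such an objective, assumed here and not necessarily the paper's exact loss, is to keep the task loss while penalizing agreement with the frozen full-scale model's predictive distribution:

```python
# Hypothetical "similarity-unpairing" style objective: retain accuracy while
# pushing the on-device model's outputs away from the frozen full-scale model.
def unpairing_loss(on_device_logits, full_logits, labels, lam=0.5):
    task_loss = F.cross_entropy(on_device_logits, labels)
    # Agreement with the full model's predictive distribution (to be penalized).
    similarity = F.kl_div(
        F.log_softmax(on_device_logits, dim=1),
        F.softmax(full_logits, dim=1),
        reduction="batchmean",
    )
    return task_loss - lam * similarity
```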


Related research

11/04/2018  QuSecNets: Quantization-based Defense Mechanism for Securing Deep Neural Network against Adversarial Attacks
    Deep Neural Networks (DNNs) have recently been shown vulnerable to adver...

09/29/2018  To compress or not to compress: Understanding the Interactions between Adversarial Attacks and Neural Network Compression
    As deep neural networks (DNNs) become widely used, pruned and quantised ...

10/11/2022  Boosting Adversarial Robustness From The Perspective of Effective Margin Regularization
    The adversarial vulnerability of deep neural networks (DNNs) has been ac...

09/07/2021  Adversarial Parameter Defense by Multi-Step Risk Minimization
    Previous studies demonstrate DNNs' vulnerability to adversarial examples...

10/06/2020  A Panda? No, It's a Sloth: Slowdown Attacks on Adaptive Multi-Exit Neural Network Inference
    Recent increases in the computational demands of deep neural networks (D...

10/07/2020  Batch Normalization Increases Adversarial Vulnerability: Disentangling Usefulness and Robustness of Model Features
    Batch normalization (BN) has been widely used in modern deep neural netw...

08/31/2021  Segmentation Fault: A Cheap Defense Against Adversarial Machine Learning
    Recently published attacks against deep neural networks (DNNs) have stre...
