Robustness and Transferability of Universal Attacks on Compressed Models

12/10/2020
by Alberto G. Matachana, et al.

Neural network compression methods like pruning and quantization are very effective at efficiently deploying Deep Neural Networks (DNNs) on edge devices. However, DNNs remain vulnerable to adversarial examples: inconspicuous inputs that are specifically designed to fool these models. In particular, Universal Adversarial Perturbations (UAPs) are a powerful class of adversarial attacks that craft a single perturbation which generalizes across a large set of inputs. In this work, we analyze the effect of various compression techniques, including different forms of pruning and quantization, on robustness to UAP attacks. We test the robustness of compressed models to white-box and transfer attacks, comparing them with their uncompressed counterparts on the CIFAR-10 and SVHN datasets. Our evaluations reveal clear differences between pruning methods, including Soft Filter and Post-training Pruning. We observe that UAP transfer attacks between pruned and full models have limited success, suggesting that the systemic vulnerabilities of these models differ. This finding has practical implications, as using different compression techniques can blunt the effectiveness of black-box transfer attacks. We show that, in some scenarios, quantization can produce gradient masking, giving a false sense of security. Finally, our results suggest that conclusions about the robustness of compressed models to UAP attacks are application-dependent: we observe different phenomena on the two datasets used in our experiments.
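To make the attack setting concrete, below is a minimal sketch of an SGD-based universal perturbation in PyTorch, in the spirit of the UAP attacks the abstract describes. It is an illustrative assumption, not the authors' exact algorithm: the function names (craft_uap, fooling_rate), the input shape (CIFAR-10/SVHN images), and the hyperparameters (eps, lr, epochs) are all hypothetical choices.

```python
import torch
import torch.nn.functional as F

def craft_uap(model, loader, eps=8/255, lr=0.01, epochs=5, device="cpu"):
    """Optimize one input-agnostic perturbation `delta` that raises the
    classification loss on every batch, projected onto an L-infinity ball
    of radius `eps` (pixel-range clamping omitted for brevity)."""
    model.eval()
    for p in model.parameters():
        p.requires_grad_(False)  # only the perturbation is optimized
    delta = torch.zeros(1, 3, 32, 32, device=device, requires_grad=True)  # CIFAR-10/SVHN shape
    opt = torch.optim.SGD([delta], lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            loss = -F.cross_entropy(model(x + delta), y)  # ascend on the loss
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                delta.clamp_(-eps, eps)  # project back into the L-inf ball
    return delta.detach()

def fooling_rate(model, loader, delta, device="cpu"):
    """Fraction of inputs whose predicted label changes under the UAP,
    the usual success measure for universal attacks."""
    model.eval()
    flipped = total = 0
    with torch.no_grad():
        for x, _ in loader:
            x = x.to(device)
            flipped += (model(x).argmax(1) != model(x + delta).argmax(1)).sum().item()
            total += x.size(0)
    return flipped / total
```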
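The compressed counterparts used in the transfer comparison can be produced with standard PyTorch utilities. The sketch below shows post-training magnitude pruning and dynamic int8 quantization; it is a coarse, hedged stand-in for the specific compression pipelines compared in the paper, and the helper names are assumptions.

```python
import copy
import torch
import torch.nn.utils.prune as prune

def post_training_prune(model, amount=0.5):
    """Zero out the smallest-magnitude weights in every conv/linear layer
    of an already-trained model (post-training pruning)."""
    pruned = copy.deepcopy(model)
    for module in pruned.modules():
        if isinstance(module, (torch.nn.Conv2d, torch.nn.Linear)):
            prune.l1_unstructured(module, name="weight", amount=amount)
            prune.remove(module, "weight")  # make the pruning permanent
    return pruned

def dynamic_quantize(model):
    """Quantize linear layers to int8; dynamic quantization only covers
    Linear/recurrent layers, so this is a rough proxy for the schemes
    studied in the paper."""
    return torch.quantization.quantize_dynamic(
        copy.deepcopy(model), {torch.nn.Linear}, dtype=torch.qint8
    )
```

In this setup, a white-box evaluation would craft delta directly on the compressed model, while a transfer evaluation would craft it on the full-precision model and measure fooling_rate on post_training_prune(model) or dynamic_quantize(model); the abstract reports that such transfer between pruned and full models has limited success.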


Related research

06/15/2022 · Hardening DNNs against Transfer Attacks during Network Compression using Greedy Adversarial Pruning
The prevalence and success of Deep Neural Network (DNN) applications in ...

09/29/2018 · To compress or not to compress: Understanding the Interactions between Adversarial Attacks and Neural Network Compression
As deep neural networks (DNNs) become widely used, pruned and quantised ...

11/19/2020 · Adversarial Threats to DeepFake Detection: A Practical Perspective
Facially manipulated images and videos or DeepFakes can be used maliciou...

04/30/2021 · Stealthy Backdoors as Compression Artifacts
In a backdoor attack on a machine learning model, an adversary produces ...

07/12/2022 · Adversarial Robustness Assessment of NeuroEvolution Approaches
NeuroEvolution automates the generation of Artificial Neural Networks th...

09/27/2022 · FG-UAP: Feature-Gathering Universal Adversarial Perturbation
Deep Neural Networks (DNNs) are susceptible to elaborately designed pert...

05/16/2021 · Real-time Detection of Practical Universal Adversarial Perturbations
Universal Adversarial Perturbations (UAPs) are a prominent class of adve...
