BreakingBED – Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks

03/14/2021
by   Manoj Rohit Vemparala, et al.

Deploying convolutional neural networks (CNNs) for embedded applications presents many challenges in balancing resource efficiency and task-related accuracy. These two aspects have been well researched in the field of CNN compression. In real-world applications, a third important aspect comes into play, namely the robustness of the CNN. In this paper, we thoroughly study the robustness of uncompressed, distilled, pruned and binarized neural networks against white-box and black-box adversarial attacks (FGSM, PGD, C&W, DeepFool, LocalSearch and GenAttack). These new insights facilitate defensive training schemes or reactive filtering methods, where the attack is detected and the input is discarded and/or cleaned. Experimental results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks (BNNs) such as XNOR-Net and ABC-Net, trained on the CIFAR-10 and ImageNet datasets. We present evaluation methods to simplify the comparison between CNNs under different attack schemes using loss/accuracy levels, stress-strain graphs, box plots and class activation mapping (CAM). Our analysis reveals susceptible behavior of uncompressed and pruned CNNs against all kinds of attacks. The distilled models exhibit their strength against all white-box attacks, with the exception of C&W. Furthermore, binary neural networks exhibit resilient behavior compared to their baselines and other compressed variants.
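
The abstract does not include code, but a minimal PyTorch sketch of the simplest white-box attack named above (FGSM) illustrates the kind of perturbation the compressed networks are evaluated against. The classifier `net`, the `test_loader`, and the perturbation budget epsilon = 8/255 are assumptions for illustration only, not the paper's exact setup.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    """One-step FGSM: shift each input pixel by +/- epsilon along the
    sign of the loss gradient to increase the classification loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid [0, 1] range

# Hypothetical usage: compare clean vs. adversarial accuracy of a
# CIFAR-10 classifier `net` over a DataLoader `test_loader` (both assumed).
net.eval()
clean_hits, adv_hits, total = 0, 0, 0
for x, y in test_loader:
    x_adv = fgsm_attack(net, x, y, epsilon=8 / 255)
    with torch.no_grad():
        clean_hits += (net(x).argmax(dim=1) == y).sum().item()
        adv_hits += (net(x_adv).argmax(dim=1) == y).sum().item()
    total += y.size(0)
print(f"clean acc: {clean_hits / total:.3f}, FGSM acc: {adv_hits / total:.3f}")
```

The same accuracy-drop measurement generalizes to the iterative and black-box attacks studied in the paper (PGD, C&W, DeepFool, LocalSearch, GenAttack); only the perturbation-generation step changes.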
