Defending with Errors: Approximate Computing for Robustness of Deep Neural Networks

11/02/2022
by Amira Guesmi, et al.

Machine-learning architectures such as Convolutional Neural Networks (CNNs) are vulnerable to adversarial attacks: inputs carefully crafted to force the system to output a wrong label. Since machine learning is being deployed in safety-critical and security-sensitive domains, such attacks can have catastrophic security and safety consequences. In this paper, we propose for the first time the use of hardware-supported approximate computing to improve the robustness of machine-learning classifiers. We show that successful adversarial attacks against the exact classifier transfer poorly to the approximate implementation. Surprisingly, the robustness advantage also applies to white-box attacks, where the attacker has unrestricted access to the approximate classifier implementation: in this case, we show that substantially higher levels of adversarial noise are needed to produce adversarial examples. Furthermore, our approximate computing model maintains the classification accuracy of the exact model, requires no retraining, and reduces the resource utilization and energy consumption of the CNN. We conducted extensive experiments against a set of strong adversarial attacks and empirically show that the proposed implementation increases the robustness of LeNet-5, AlexNet, and VGG-11 CNNs considerably, by up to 50%, owing to the simpler nature of the approximate logic.
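The abstract refers to hardware-supported approximate multipliers without spelling out a concrete error model. As a minimal software sketch only — assuming a mantissa-truncation multiplier, a common design in the approximate-computing literature and not necessarily the one used in this paper — the NumPy snippet below zeroes the low-order mantissa bits of both operands before each multiply inside a naive convolution. The names `truncate_mantissa` and `approx_conv2d` and the `drop_bits` parameter are illustrative assumptions, not the authors' API.

```python
import numpy as np

def truncate_mantissa(x, drop_bits=12):
    """Zero the `drop_bits` least-significant mantissa bits of float32 values,
    loosely mimicking a hardware multiplier with a truncated datapath.
    (Illustrative error model; not the paper's implementation.)"""
    bits = np.ascontiguousarray(x, dtype=np.float32).view(np.uint32)
    mask = np.uint32((0xFFFFFFFF << drop_bits) & 0xFFFFFFFF)
    return (bits & mask).view(np.float32)

def approx_conv2d(image, kernel):
    """Naive 'valid' 2-D convolution whose multiplies go through the
    approximate-multiplier model instead of exact float32 arithmetic."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    k = truncate_mantissa(kernel)
    out = np.empty((oh, ow), dtype=np.float32)
    for i in range(oh):
        for j in range(ow):
            patch = truncate_mantissa(image[i:i + kh, j:j + kw])
            out[i, j] = np.sum(patch * k)  # products carry truncation error
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.standard_normal((8, 8)).astype(np.float32)
    ker = rng.standard_normal((3, 3)).astype(np.float32)
    approx = approx_conv2d(img, ker)
    exact = np.array([[np.sum(img[i:i + 3, j:j + 3] * ker) for j in range(6)]
                      for i in range(6)], dtype=np.float32)
    print("max abs deviation:", float(np.max(np.abs(exact - approx))))
```

The injected error is small and input-dependent, which is the property the paper exploits: an adversarial perturbation tuned against exact arithmetic need not align with the slightly shifted decision boundary of the approximate datapath.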
