Exposing the Robustness and Vulnerability of Hybrid 8T-6T SRAM Memory Architectures to Adversarial Attacks in Deep Neural Networks

11/26/2020
by   Abhishek Moitra, et al.

Deep Learning can solve a plethora of once-intractable problems. However, Deep Neural Networks (DNNs) are vulnerable to adversarial input attacks, which prevents them from being autonomously deployed in critical applications. Several algorithm-centered works have discussed methods to mount adversarial attacks and to improve the adversarial robustness of a DNN. In this work, we elicit the advantages and vulnerabilities of hybrid 8T-6T memories for improving adversarial robustness and for causing adversarial attacks on DNNs. We show that the bit-error noise in hybrid memories due to erroneous 6T-SRAM cells has deterministic behaviour that depends on the hybrid memory configuration (V_DD, 8T-6T ratio). This controlled noise (surgical noise) can be strategically introduced into specific DNN layers to improve the adversarial accuracy of DNNs. At the same time, surgical noise can be carefully injected into the DNN parameters stored in hybrid memory to cause adversarial attacks. To improve the adversarial robustness of DNNs using surgical noise, we propose a methodology for selecting appropriate DNN layers and their corresponding hybrid memory configurations to introduce the required surgical noise. Using this, we achieve 2-8% higher adversarial accuracy against attacks like FGSM than the baseline models (with no surgical noise introduced). To demonstrate adversarial attacks using surgical noise, we design a novel white-box attack on DNN parameters stored in hybrid memory banks that causes the DNN inference accuracy to drop by more than 60%. We support our claims with experiments performed using the benchmark datasets CIFAR10 and CIFAR100 on VGG19 and ResNet18 networks.
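The mechanism the abstract describes can be simulated in software: weights are quantized, a fraction of their (low-order) bits is assumed to reside in error-prone 6T cells, and each such bit flips with a probability determined by the memory configuration. The sketch below is illustrative only; the function name, the choice of which bits sit in 6T cells, and the flip probability are assumptions, not the paper's actual fault model.

```python
import numpy as np

def surgical_noise(weights, frac_6t=0.5, p_flip=0.01, n_bits=8, seed=0):
    """Simulate bit-error ("surgical") noise in a hybrid 8T-6T memory.

    Weights are quantized to n_bits. A fraction frac_6t of the low-order
    bits is assumed to be stored in error-prone 6T cells, each flipping
    independently with probability p_flip (a stand-in for the effect of
    the (V_DD, 8T-6T ratio) configuration). 8T cells are assumed
    error-free. All names and parameters here are hypothetical.
    """
    rng = np.random.default_rng(seed)
    w = np.asarray(weights, dtype=np.float64)
    scale = float(np.max(np.abs(w))) or 1.0
    # Symmetric quantization to signed n_bits integers.
    q_max = 2 ** (n_bits - 1) - 1
    q = np.round(w / scale * q_max).astype(np.int64)
    # Flip each bit stored in a 6T cell with probability p_flip.
    n_6t = int(n_bits * frac_6t)  # low-order bits mapped to 6T cells
    for b in range(n_6t):
        flips = rng.random(q.shape) < p_flip
        q = np.where(flips, q ^ (1 << b), q)
    # Dequantize back to floating point.
    return q.astype(np.float64) / q_max * scale
```

Such a routine could be applied per-layer, so that only the layers selected by a robustness methodology (or targeted by an attack) receive the noise.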

