Rethinking Non-idealities in Memristive Crossbars for Adversarial Robustness in Neural Networks

08/25/2020
by Abhiroop Bhattacharjee et al.

Deep Neural Networks (DNNs) have been shown to be prone to adversarial attacks. With a growing need to enable intelligence in embedded devices in this Internet of Things (IoT) era, secure hardware implementation of DNNs has become imperative. Memristive crossbars, which can perform Matrix-Vector-Multiplications (MVMs) efficiently, are used to realize DNNs in hardware. However, crossbar non-idealities have traditionally been viewed as a drawback, since they introduce errors into the MVMs and thereby degrade DNN accuracy. Several software-based adversarial defenses have been proposed in the past to make DNNs adversarially robust. However, no previous work has demonstrated the advantage conferred by the non-idealities inherent in analog crossbars in terms of adversarial robustness. In this work, we show that the intrinsic hardware variations manifested through crossbar non-idealities yield adversarial robustness in the mapped DNNs without any additional optimization. We evaluate the resilience of state-of-the-art DNNs (VGG8 and VGG16) on benchmark datasets (CIFAR-10 and CIFAR-100) across various crossbar sizes against both hardware and software adversarial attacks. We find that crossbar non-idealities confer greater adversarial robustness (>10-20%) on the mapped DNNs than on baseline software DNNs. We further compare our approach with other state-of-the-art efficiency-driven adversarial defenses and find that it performs significantly well in terms of reducing adversarial losses.
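To make the idea concrete, below is a minimal Python/NumPy sketch of how a layer's weight matrix can be mapped onto a differential pair of memristive conductance matrices and how an MVM through that crossbar picks up errors from device variations. The mapping scheme, the multiplicative-Gaussian noise model, and the parameter values (g_min, g_max, sigma) are illustrative assumptions, not the simulation framework used in the paper; real crossbars also exhibit further non-idealities (e.g., IR drop, sneak paths, quantized conductance states) that this sketch omits.

import numpy as np

def map_weights_to_conductances(W, g_min=1e-6, g_max=1e-4):
    # Linearly map signed weights onto a positive/negative conductance pair
    # (a common differential encoding; g_min and g_max are illustrative).
    w_abs_max = np.max(np.abs(W)) + 1e-12
    scale = (g_max - g_min) / w_abs_max
    g_pos = g_min + scale * np.clip(W, 0.0, None)
    g_neg = g_min + scale * np.clip(-W, 0.0, None)
    return g_pos, g_neg, scale

def nonideal_mvm(x, W, sigma=0.1, rng=None):
    # MVM through the crossbar with device-to-device conductance variation
    # modeled as multiplicative Gaussian noise of relative std 'sigma'.
    rng = np.random.default_rng() if rng is None else rng
    g_pos, g_neg, scale = map_weights_to_conductances(W)
    g_pos = g_pos * (1.0 + sigma * rng.standard_normal(g_pos.shape))
    g_neg = g_neg * (1.0 + sigma * rng.standard_normal(g_neg.shape))
    # Column currents of the two crossbars are subtracted and rescaled
    # back to the weight domain.
    return (g_pos - g_neg) @ x / scale

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = 0.1 * rng.standard_normal((64, 128))   # toy 64x128 weight matrix
    x = rng.standard_normal(128)               # input activation vector
    y_ideal = W @ x
    y_noisy = nonideal_mvm(x, W, sigma=0.1, rng=rng)
    err = np.linalg.norm(y_noisy - y_ideal) / np.linalg.norm(y_ideal)
    print(f"relative MVM error due to non-idealities: {err:.3f}")

One intuition this sketch conveys: because every MVM is perturbed by the hardware, gradient-based adversarial examples crafted against the ideal software model need not transfer cleanly to the non-ideal crossbar-mapped model, which is the kind of intrinsic robustness the paper evaluates.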

Related research

05/09/2021 - Efficiency-driven Hardware Optimization for Adversarially Robust Neural Networks
09/19/2021 - On the Noise Stability and Robustness of Adversarially Trained Networks on NVM Crossbars
02/15/2023 - XploreNAS: Explore Adversarially Robust Hardware-efficient Neural Architectures for Non-ideal Xbars
04/22/2020 - QUANOS- Adversarial Noise Sensitivity Driven Hybrid Quantization of Neural Networks
06/09/2021 - HASI: Hardware-Accelerated Stochastic Inference, A Defense Against Adversarial Machine Learning Attacks
08/27/2020 - Robustness Hidden in Plain Sight: Can Analog Computing Defend Against Adversarial Attacks?
11/27/2019 - Survey of Attacks and Defenses on Edge-Deployed Neural Networks
