On the Noise Stability and Robustness of Adversarially Trained Networks on NVM Crossbars

09/19/2021
by   Deboleena Roy, et al.

Applications based on Deep Neural Networks (DNNs) have grown exponentially in the past decade. To match their increasing computational needs, several Non-Volatile Memory (NVM) crossbar-based accelerators have been proposed. Apart from improved energy efficiency and performance, this approximate hardware also possesses intrinsic robustness that can serve as a defense against adversarial attacks, an important security concern for DNNs. Prior works have focused on quantifying this intrinsic robustness for vanilla networks, that is, DNNs trained on unperturbed inputs. However, adversarial training of DNNs is the benchmark technique for robustness, and sole reliance on the intrinsic robustness of the hardware may not be sufficient. In this work, we explore the design of robust DNNs through the amalgamation of adversarial training and the intrinsic robustness offered by NVM crossbar-based analog hardware. First, we study the noise stability of such networks on unperturbed inputs and observe that the internal activations of adversarially trained networks have a lower Signal-to-Noise Ratio (SNR) and are more sensitive to noise than those of vanilla networks. As a result, they suffer significantly higher performance degradation due to the non-ideal computations: on average, a 2x accuracy drop. On the other hand, for adversarial images generated using Projected Gradient Descent (PGD) white-box attacks, ResNet-10/20 networks adversarially trained on CIFAR-10/100 display a 5-10% gain in robustness when the attack epsilon (ϵ_attack, the degree of input perturbation) is greater than the epsilon used during adversarial training (ϵ_train). Our results indicate that implementing adversarially trained networks on analog hardware requires careful calibration between hardware non-idealities and ϵ_train to achieve optimum robustness and performance.
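The two quantities the abstract hinges on, activation SNR under hardware noise and the PGD attack with an ϵ budget, can be sketched in minimal form. The snippet below is illustrative only: it uses a binary logistic model with a closed-form input gradient instead of a ResNet, and the function names (`snr_db`, `pgd_attack`) are our own, not from the paper's codebase.

```python
import numpy as np

def snr_db(clean, noisy):
    """Signal-to-Noise Ratio in dB of a noisy activation vs. its clean value."""
    noise = noisy - clean
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))

def pgd_attack(x, y, w, b, eps, alpha, steps):
    """L-infinity PGD on a binary logistic model p = sigmoid(w.x + b).

    The input gradient of the binary cross-entropy loss is (p - y) * w,
    so each step ascends the loss and then projects back into the eps-ball.
    """
    x_adv = x.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(x_adv @ w + b)))  # model prediction
        grad_x = (p - y) * w                         # dLoss/dx in closed form
        x_adv = x_adv + alpha * np.sign(grad_x)      # gradient-ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)     # keep ||x_adv - x||_inf <= eps
    return x_adv
```

With eps=0.1, every coordinate of the returned x_adv stays within 0.1 of the clean input while the model's loss on it is higher than on x; the paper's ϵ_attack versus ϵ_train comparison varies exactly this budget.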

Related research

- 08/25/2020: Rethinking Non-idealities in Memristive Crossbars for Adversarial Robustness in Neural Networks
  Deep Neural Networks (DNNs) have been shown to be prone to adversarial a...
- 08/27/2020: Robustness Hidden in Plain Sight: Can Analog Computing Defend Against Adversarial Attacks?
  The ever-increasing computational demand of Deep Learning has propelled ...
- 02/15/2023: XploreNAS: Explore Adversarially Robust Hardware-efficient Neural Architectures for Non-ideal Xbars
  Compute In-Memory platforms such as memristive crossbars are gaining foc...
- 05/09/2021: Efficiency-driven Hardware Optimization for Adversarially Robust Neural Networks
  With a growing need to enable intelligence in embedded devices in the In...
- 03/14/2022: Defending Against Adversarial Attack in ECG Classification with Adversarial Distillation Training
  In clinics, doctors rely on electrocardiograms (ECGs) to assess severe c...
- 07/07/2020: Calibrated BatchNorm: Improving Robustness Against Noisy Weights in Neural Networks
  Analog computing hardware has gradually received more attention by the r...
- 02/26/2022: Neuro-Inspired Deep Neural Networks with Sparse, Strong Activations
  While end-to-end training of Deep Neural Networks (DNNs) yields state of...
