DetectX – Adversarial Input Detection using Current Signatures in Memristive XBar Arrays

06/22/2021
by Abhishek Moitra, et al.

Adversarial input detection has emerged as a prominent technique for hardening Deep Neural Networks (DNNs) against adversarial attacks. Most prior works use neural network-based detectors or complex statistical analysis for adversarial detection. These approaches are computationally intensive and are themselves vulnerable to adversarial attacks. To this end, we propose DetectX, a hardware-friendly adversarial detection mechanism that uses hardware signatures, namely the Sum of column Currents (SoI), in memristive crossbars (XBar). We show that adversarial inputs yield a higher SoI than clean inputs; however, the difference is too small for reliable detection on its own. Hence, we propose a dual-phase training methodology: Phase 1 training increases the separation between clean and adversarial SoIs, and Phase 2 training improves overall robustness against adversarial attacks of different strengths. For hardware-based adversarial detection, we implement the DetectX module using 32nm CMOS circuits and integrate it with a Neurosim-like analog crossbar architecture. We evaluate the Neurosim+DetectX system on the Neurosim platform using the CIFAR10 (VGG8), CIFAR100 (VGG16), and TinyImagenet (ResNet18) datasets. Our experiments show that DetectX is 10x-25x more energy efficient than previous state-of-the-art works and is immune to dynamic adversarial attacks. Moreover, we achieve high detection performance (ROC-AUC > 0.95) for strong white-box and black-box attacks. The code has been released at https://github.com/Intelligent-Computing-Lab-Yale/DetectX
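To make the detection idea concrete, below is a minimal PyTorch sketch of SoI-based screening, written under stated assumptions rather than from the released code: a crossbar column (bit-line) current is approximated as the dot product of the input voltages (activations) with one column of conductances (weights), SoI is the sum of absolute column currents, and the detection threshold is calibrated on clean data. The names (soi, flag_adversarial, phase1_separation_loss) and the hinge-style Phase 1 objective are illustrative assumptions; the authors' actual implementation is in the repository linked above.

```python
# Hedged sketch of SoI-based adversarial detection. The current model is an
# idealization: activations act as word-line voltages, weights as crossbar
# conductances, so each output pre-activation stands in for a column current.
import torch
import torch.nn as nn
import torch.nn.functional as F


def soi(x: torch.Tensor, layer: nn.Linear) -> torch.Tensor:
    """Approximate Sum of column Currents (SoI) for a crossbar-mapped layer.

    Returns one SoI value per input in the batch: the sum of absolute
    column (bit-line) currents.
    """
    column_currents = F.linear(x, layer.weight)  # shape: (batch, out_features)
    return column_currents.abs().sum(dim=1)      # shape: (batch,)


def flag_adversarial(x: torch.Tensor, layer: nn.Linear, threshold: float) -> torch.Tensor:
    """Flag inputs whose SoI exceeds a threshold calibrated on clean data."""
    return soi(x, layer) > threshold


def phase1_separation_loss(soi_clean: torch.Tensor,
                           soi_adv: torch.Tensor,
                           margin: float = 1.0) -> torch.Tensor:
    """Illustrative Phase-1-style objective (an assumption, not the paper's
    exact loss): a hinge term that is zero once the mean adversarial SoI
    exceeds the mean clean SoI by at least `margin`."""
    return torch.relu(margin - (soi_adv.mean() - soi_clean.mean()))


if __name__ == "__main__":
    # Hypothetical usage: calibrate on clean data, then screen new inputs.
    torch.manual_seed(0)
    layer = nn.Linear(128, 64, bias=False)
    clean = torch.randn(256, 128)
    # e.g., 99th percentile of clean SoI as the decision threshold
    threshold = torch.quantile(soi(clean, layer), 0.99).item()
    test = torch.randn(8, 128) * 1.5  # stand-in for perturbed inputs
    print(flag_adversarial(test, layer, threshold))
```

In this sketch the detector adds no extra network: detection reduces to one reduction over currents that the analog hardware already produces, which is consistent with the energy-efficiency claim in the abstract.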


Related research

11/19/2021 – Enhanced countering adversarial attacks via input denoising and feature restoring
Despite the fact that deep neural networks (DNNs) have achieved prominen...

02/09/2022 – Adversarial Detection without Model Information
Most prior state-of-the-art adversarial detection works assume that the ...

09/16/2021 – KATANA: Simple Post-Training Robustness Using Test Time Augmentations
Although Deep Neural Networks (DNNs) achieve excellent performance on ma...

07/01/2021 – DVS-Attacks: Adversarial Attacks on Dynamic Vision Sensors for Spiking Neural Networks
Spiking Neural Networks (SNNs), despite being energy-efficient when impl...

08/27/2020 – Robustness Hidden in Plain Sight: Can Analog Computing Defend Against Adversarial Attacks?
The ever-increasing computational demand of Deep Learning has propelled ...

02/22/2020 – Non-Intrusive Detection of Adversarial Deep Learning Attacks via Observer Networks
Recent studies have shown that deep learning models are vulnerable to sp...

07/31/2022 – DNNShield: Dynamic Randomized Model Sparsification, A Defense Against Adversarial Machine Learning
DNNs are known to be vulnerable to so-called adversarial attacks that ma...
