Rethinking Non-idealities in Memristive Crossbars for Adversarial Robustness in Neural Networks

08/25/2020
by   Abhiroop Bhattacharjee, et al.

Deep Neural Networks (DNNs) have been shown to be prone to adversarial attacks. With the growing need to enable intelligence in embedded devices in the Internet of Things (IoT) era, secure hardware implementation of DNNs has become imperative. Memristive crossbars, which can perform Matrix-Vector-Multiplications (MVMs) efficiently, are used to realize DNNs in hardware. However, crossbar non-idealities have traditionally been viewed as detrimental, since they introduce errors into the MVM computation and thereby degrade DNN accuracy. Several software-based adversarial defenses have been proposed to make DNNs adversarially robust, but no prior work has demonstrated that the non-idealities present in analog crossbars can confer adversarial robustness. In this work, we show that the intrinsic hardware variations manifested through crossbar non-idealities yield adversarial robustness for the mapped DNNs without any additional optimization. We evaluate the resilience of state-of-the-art DNNs (VGG8 and VGG16) on benchmark datasets (CIFAR-10 and CIFAR-100) across various crossbar sizes against both hardware and software adversarial attacks. We find that crossbar non-idealities confer greater adversarial robustness (>10-20%) on DNNs than baseline software DNNs achieve. We further compare our approach against other state-of-the-art efficiency-driven adversarial defenses and find that it performs significantly well in reducing adversarial losses.
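The core mechanism the abstract describes, an MVM computed on a crossbar whose conductances deviate from their programmed values, can be sketched as a toy simulation. The multiplicative Gaussian noise model and the variation level `sigma` below are illustrative assumptions for this sketch, not the paper's device model:

```python
import numpy as np

rng = np.random.default_rng(0)

def crossbar_mvm(weights, x, sigma=0.1):
    """MVM through an analog crossbar with device variations.

    Each weight is realized as a device conductance; device-to-device
    variation is modeled here as multiplicative Gaussian noise with a
    hypothetical relative spread `sigma`.
    """
    noisy_weights = weights * (1.0 + sigma * rng.standard_normal(weights.shape))
    return noisy_weights @ x

# Hypothetical 64x128 crossbar tile holding one DNN layer's weights.
W = rng.standard_normal((64, 128))
x = rng.standard_normal(128)

y_ideal = W @ x                      # exact software MVM
y_noisy = crossbar_mvm(W, x)         # non-ideal hardware MVM

# Non-idealities perturb the result relative to the ideal output;
# the paper's observation is that this perturbation, seen by the
# attacker as well, can act like an implicit defense.
rel_err = np.linalg.norm(y_noisy - y_ideal) / np.linalg.norm(y_ideal)
```

In a full DNN mapping, every layer's MVM would pass through such a perturbed tile, so the same hardware variations that cost a little clean accuracy also distort the gradients an adversary relies on.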
