
Random and Adversarial Bit Error Robustness: Energy-Efficient and Secure DNN Accelerators

by David Stutz, et al.

Deep neural network (DNN) accelerators have received considerable attention in recent years due to their potential to save energy compared to mainstream hardware. Low-voltage operation of DNN accelerators allows energy consumption to be reduced further, but causes bit-level failures in the memory storing the quantized DNN weights. Furthermore, DNN accelerators have been shown to be vulnerable to adversarial attacks on voltage controllers or individual bits. In this paper, we show that a combination of robust fixed-point quantization, weight clipping, and random bit error training (RandBET) or adversarial bit error training (AdvBET) significantly improves robustness against random or adversarial bit errors in quantized DNN weights. This leads not only to high energy savings from low-voltage operation and low-precision quantization, but also improves the security of DNN accelerators. Our approach generalizes across operating voltages and accelerators, as demonstrated on bit errors from profiled SRAM arrays, and achieves robustness against both targeted and untargeted bit-level attacks. Without losing more than 0.8% in accuracy, we reduce energy consumption on CIFAR10 by 20%. Allowing up to 320 adversarial bit errors, AdvBET reduces test error substantially from above 90% (chance level).
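The core ingredients described above (fixed-point quantization with weight clipping, followed by random bit error injection during training) can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function names, the 8-bit two's-complement storage layout, and the per-bit error probability `p` are assumptions for illustration only.

```python
import numpy as np

def quantize(w, bits=8, w_max=0.1):
    """Symmetric fixed-point quantization with weight clipping to [-w_max, w_max]."""
    w = np.clip(w, -w_max, w_max)
    scale = (2 ** (bits - 1) - 1) / w_max
    return np.round(w * scale).astype(np.int32), scale

def inject_random_bit_errors(q, bits=8, p=0.01, rng=None):
    """Flip each stored bit independently with probability p, emulating
    random bit errors in low-voltage SRAM holding quantized weights."""
    rng = np.random.default_rng() if rng is None else rng
    # view signed values as their low `bits` two's-complement pattern
    u = q.astype(np.int64) & ((1 << bits) - 1)
    flips = rng.random(q.shape + (bits,)) < p          # which bits to flip
    mask = (flips * (1 << np.arange(bits))).sum(axis=-1).astype(np.int64)
    u ^= mask
    # convert the bit pattern back to a signed integer
    s = np.where(u >= (1 << (bits - 1)), u - (1 << bits), u)
    return s.astype(np.int32)
```

In a RandBET-style training loop, one would quantize the weights, inject bit errors at the profiled error rate in each forward pass, and backpropagate through the clean weights, so the network learns to tolerate the perturbed bit patterns.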



