Training Modern Deep Neural Networks for Memory-Fault Robustness

11/23/2019
by   Ghouthi Boukli Hacene, et al.

Because deep neural networks (DNNs) rely on a large number of parameters and computations, their implementation in energy-constrained systems is challenging. In this paper, we investigate reducing the supply voltage of the memories used in the system, which results in bit-cell faults. We explore the robustness of state-of-the-art DNN architectures to such defects and propose a regularizer meant to mitigate their effect on accuracy. Our experiments clearly demonstrate the benefit of operating the system in a faulty regime to save energy without reducing accuracy.
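The abstract does not describe the proposed regularizer itself, but a common way to obtain this kind of robustness is to expose the network to the fault model during training, so that accuracy degrades gracefully when bit-cells fail at low voltage. The PyTorch sketch below illustrates that general idea only; `inject_bit_faults`, `FaultyLinear`, the int8 quantization, and the `fault_rate` value are illustrative assumptions, not the method of the paper.

```python
# Hypothetical sketch: simulate low-voltage bit-cell faults by flipping random
# bits of 8-bit quantized weights during training. This acts as an implicit
# regularizer toward fault-tolerant parameters. Fault model and rate are
# assumptions made for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

def inject_bit_faults(weight: torch.Tensor, fault_rate: float = 1e-3) -> torch.Tensor:
    """Quantize weights to int8, flip each stored bit with prob. fault_rate, dequantize."""
    scale = weight.abs().max() / 127.0 + 1e-12
    q = torch.clamp((weight / scale).round(), -128, 127).to(torch.int8)

    # Each of the 8 bits of every cell fails independently with probability fault_rate.
    bit_positions = torch.arange(8, device=weight.device, dtype=torch.uint8)
    flips = torch.rand(*q.shape, 8, device=weight.device) < fault_rate
    mask = (flips.to(torch.uint8) << bit_positions).sum(dim=-1).to(torch.int8)  # cast keeps low 8 bits

    q_faulty = q ^ mask                      # XOR applies the bit flips
    return q_faulty.to(weight.dtype) * scale

class FaultyLinear(nn.Linear):
    """Linear layer whose weights pass through the fault model while training."""
    def forward(self, x):
        if self.training:
            # Straight-through estimator: forward uses faulty weights,
            # gradients flow to the clean weights.
            w = self.weight + (inject_bit_faults(self.weight) - self.weight).detach()
        else:
            w = self.weight
        return F.linear(x, w, self.bias)
```

Swapping such a layer into a model and training as usual would encourage parameters whose predictions are insensitive to occasional bit flips; the appropriate fault rate depends on the targeted supply voltage and memory technology.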
