NeuralFuse: Learning to Improve the Accuracy of Access-Limited Neural Network Inference in Low-Voltage Regimes

06/29/2023
by   Hao-Lun Sun, et al.

Deep neural networks (DNNs) have become ubiquitous in machine learning, but their energy consumption remains a notable issue. Lowering the supply voltage is an effective strategy for reducing energy consumption. However, aggressively scaling down the supply voltage can lead to accuracy degradation due to random bit flips in the static random access memory (SRAM) where model parameters are stored. To address this challenge, we introduce NeuralFuse, a novel add-on module that mitigates the accuracy-energy tradeoff in low-voltage regimes by learning input transformations that generate error-resistant data representations. NeuralFuse protects DNN accuracy in both nominal and low-voltage scenarios. Moreover, NeuralFuse is easy to implement and can be readily applied to DNNs with limited access, such as non-configurable hardware or remote access to cloud-based APIs. Experimental results demonstrate that, at a 1% bit-error rate, NeuralFuse can reduce SRAM memory access energy by up to 24%. To the best of our knowledge, NeuralFuse is the first model-agnostic approach (i.e., requiring no model retraining) to address low-voltage-induced bit errors. The source code is available at https://github.com/IBM/NeuralFuse.
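The fault model behind this setting can be made concrete with a small sketch. The snippet below is not from the paper's code; it is a minimal, hypothetical simulation of low-voltage SRAM behavior, where each bit of a stored 8-bit quantized weight flips independently with some probability (the "bit-error rate" the abstract refers to):

```python
import random

def flip_bits(weights, p, seed=0):
    """Simulate low-voltage SRAM faults: each bit of every stored
    8-bit weight flips independently with probability p."""
    rng = random.Random(seed)
    corrupted = []
    for w in weights:
        for bit in range(8):
            if rng.random() < p:
                w ^= (1 << bit)          # flip one bit of the weight
        corrupted.append(w)
    return corrupted

# Toy example: 1000 identical int8 weights, corrupted at a 1% bit-error rate.
weights = [0b01100100] * 1000
noisy = flip_bits(weights, p=0.01)
changed = sum(1 for a, b in zip(weights, noisy) if a != b)
print(changed)  # number of weights altered by at least one bit flip
```

With 8 bits per weight, a 1% per-bit flip rate corrupts roughly 1 - 0.99^8 ≈ 7.7% of the weights, which is why even modest bit-error rates can degrade accuracy sharply. NeuralFuse sidesteps modifying the (possibly inaccessible) model itself and instead transforms the input so the network's predictions survive this corruption.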


