LightNN: Filling the Gap between Conventional Deep Neural Networks and Binarized Networks

12/02/2017
by   Ruizhou Ding, et al.

Application-specific integrated circuit (ASIC) implementations of Deep Neural Networks (DNNs) have been adopted in many systems because of their higher classification speed. However, although larger DNNs may offer better accuracy, they require significant energy and area, which limits their wide adoption. The energy consumption of DNNs is driven by both memory accesses and computation. Binarized Neural Networks (BNNs), as a trade-off between accuracy and energy consumption, achieve large energy reductions and retain good accuracy for large DNNs owing to their regularization effect. However, BNNs show poor accuracy when a smaller DNN configuration is adopted. In this paper, we propose a new DNN model, LightNN, which replaces multiplications with a single shift or a constrained number of shifts and adds. For a fixed DNN configuration, LightNNs achieve better accuracy than BNNs at a slight increase in energy, yet are more energy efficient than conventional DNNs with only a slight loss in accuracy. LightNNs therefore give hardware designers more options to trade off accuracy against energy. Moreover, for large DNN configurations, LightNNs have a regularization effect that makes them more accurate than conventional DNNs. These conclusions are verified by experiments on the MNIST and CIFAR-10 datasets for different DNN configurations.
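To make the key idea concrete, the sketch below is a minimal, illustrative approximation of what "replacing a multiplication with shifts and adds" can look like: a weight is greedily approximated by at most k signed powers of two, so multiplying an activation by that weight reduces to shifts (scalings by powers of two) and adds. The greedy scheme, function names, and parameters here are assumptions for illustration, not the authors' training algorithm or hardware implementation.

import math

def quantize_to_shifts(w, k=2):
    # Greedily approximate w as a sum of at most k signed powers of two.
    # Returns a list of (sign, exponent) pairs.
    terms, residual = [], w
    for _ in range(k):
        if residual == 0:
            break
        sign = 1 if residual > 0 else -1
        exp = int(round(math.log2(abs(residual))))  # nearest power of two
        terms.append((sign, exp))
        residual -= sign * (2.0 ** exp)
    return terms

def shift_add_multiply(x, terms):
    # Multiply activation x by the quantized weight using only
    # scalings by powers of two and additions. In fixed-point
    # hardware, each scaling would be a bit shift.
    return sum(sign * x * (2.0 ** exp) for sign, exp in terms)

if __name__ == "__main__":
    w, x = 0.37, 1.5
    terms = quantize_to_shifts(w, k=2)   # e.g. [(1, -1), (-1, -3)] ~ 0.375
    print(terms, shift_add_multiply(x, terms), w * x)

With k = 1 this degenerates to a single shift per weight, and larger k trades extra shift-and-add operations for a closer approximation of the original multiplication, which mirrors the accuracy-versus-energy trade-off described in the abstract.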
