Recursive Binary Neural Network Learning Model with 2.28b/Weight Storage Requirement

09/15/2017
by Tianchan Guan, et al.

This paper presents a storage-efficient learning model, Recursive Binary Neural Networks, for sensing devices with a limited amount of on-chip data storage (e.g., under hundreds of kilobytes). The main idea of the proposed model is to recursively recycle the data storage of synaptic weights (parameters) during training. This lets a device with a given storage constraint train and instantiate a neural network classifier with more weights on chip and with fewer off-chip storage accesses, yielding higher classification accuracy, shorter training time, lower energy dissipation, and a smaller on-chip storage requirement. We verified the training model with deep neural network classifiers on the permutation-invariant MNIST benchmark. Our model uses only 2.28 bits per weight and, under the same data storage constraint, outperforms the conventional binary-weight learning model, which still has to use 8 to 16 bits of storage per weight. To achieve a similar classification error, the conventional binary model requires 4x more data storage for weights than the proposed model.
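The storage-recycling idea can be illustrated with a small accounting sketch. This is not the authors' code; it assumes (hypothetically) that weights occupy 16 bits each while being trained and are then frozen to 1 bit, so each recycling round frees 15/16 of the just-used storage for a fresh batch of trainable weights:

```python
# Hedged sketch of recursive weight-storage recycling (illustrative only,
# not the paper's implementation). Assumed precisions:
TRAIN_BITS = 16   # precision of a weight while it is being trained
FROZEN_BITS = 1   # precision after binarization (frozen weight)

def recursive_capacity(storage_bits, rounds):
    """Return (total weights that fit on-chip, effective bits per weight)
    after `rounds` recycle steps under a fixed storage budget."""
    frozen = 0
    trainable = storage_bits // TRAIN_BITS  # initial full-precision batch
    for _ in range(rounds):
        # Freeze the current trainable weights down to 1 bit each ...
        frozen += trainable
        # ... and reuse the freed storage for new full-precision weights.
        freed = trainable * (TRAIN_BITS - FROZEN_BITS)
        trainable = freed // TRAIN_BITS
    total = frozen + trainable
    return total, storage_bits / total
```

With, say, a 16,000-bit budget, zero rounds fit 1,000 weights at 16 bits/weight; each additional round adds more (now 1-bit) weights, so the effective bits-per-weight figure falls toward 1 as recycling proceeds, which is the mechanism behind the sub-3-bit/weight storage cost reported above.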
