
High-Accuracy Inference in Neuromorphic Circuits using Hardware-Aware Training

09/13/2018
by   Borna Obradovic, et al.
SAMSUNG

Neuromorphic Multiply-And-Accumulate (MAC) circuits utilizing synaptic weight elements based on SRAM or novel Non-Volatile Memories (NVMs) provide a promising approach for highly efficient hardware representations of neural networks. NVM density and robustness requirements suggest that off-line training is the right choice for "edge" devices, since off-line training places much less stringent requirements on synapse precision. However, off-line training using ideal mathematical weights and activations can result in a significant loss of inference accuracy when applied to non-ideal hardware. Non-idealities such as multi-bit quantization of weights and activations, non-linearity of weights, finite max/min ratios of NVM elements, and asymmetry of positive and negative weight components all degrade inference accuracy. In this work, it is demonstrated that non-ideal Multi-Layer Perceptron (MLP) architectures using low-bitwidth weights and activations can be trained with negligible loss of inference accuracy relative to their floating-point-trained counterparts, using a proposed off-line, continuously differentiable, hardware-aware (HW-aware) training algorithm. The proposed algorithm is applicable to a wide range of hardware models and uses only standard neural network training methods. The algorithm is demonstrated on the MNIST and EMNIST datasets using standard MLPs.
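The abstract does not give the exact form of the training algorithm, but one common way to make quantization compatible with gradient-based training, in the spirit of the "continuously differentiable" approach described above, is to replace the hard quantization staircase with a smooth approximation built from shifted sigmoids. The sketch below is purely illustrative: the function name `smooth_quantize` and the sharpness parameter `beta` are assumptions, not details from the paper.

```python
import numpy as np

def smooth_quantize(w, levels=4, beta=50.0):
    """Continuously differentiable approximation of uniform quantization.

    Maps values in [0, 1] onto `levels` discrete levels using a sum of
    shifted sigmoids, so the staircase has a nonzero gradient everywhere
    and can be placed inside a standard backprop training loop.
    `beta` controls how sharply the smooth staircase approaches the
    ideal (non-differentiable) quantizer; both knobs are illustrative.
    """
    # Thresholds halfway between adjacent quantization levels.
    steps = np.arange(1, levels)
    thresholds = (steps - 0.5) / (levels - 1)
    q = np.zeros_like(np.asarray(w, dtype=float))
    for t in thresholds:
        # Each sigmoid contributes one smooth "step" of the staircase.
        q += 1.0 / (1.0 + np.exp(-beta * (w - t)))
    return q / (levels - 1)

# Example: with 4 levels, a weight of 0.34 lands near level 1/3.
w = np.array([0.0, 0.34, 1.0])
print(smooth_quantize(w, levels=4))
```

During hardware-aware training, a function of this kind would be applied to weights and activations in the forward pass, so the network learns parameters that remain accurate after the hard quantization imposed by the MAC hardware at inference time.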


Related research:

- 02/08/2021, VS-Quant: Per-vector Scaled Quantization for Accurate Low-Precision Neural Network Inference ("Quantization enables efficient acceleration of deep neural networks by r...")
- 02/21/2022, Variation Aware Training of Hybrid Precision Neural Networks with 28nm HKMG FeFET Based Synaptic Core ("This work proposes a hybrid-precision neural network training framework ...")
- 12/19/2019, FQ-Conv: Fully Quantized Convolution for Efficient and Accurate Inference ("Deep neural networks (DNNs) can be made hardware-efficient by reducing t...")
- 03/16/2021, Training Dynamical Binary Neural Networks with Equilibrium Propagation ("Equilibrium Propagation (EP) is an algorithm intrinsically adapted to th...")
- 01/07/2017, Classification Accuracy Improvement for Neuromorphic Computing Systems with One-level Precision Synapses ("Brain inspired neuromorphic computing has demonstrated remarkable advant...")
- 10/14/2019, Variation-aware Binarized Memristive Networks ("The quantization of weights to binary states in Deep Neural Networks (DN...")
- 07/03/2021, Exact Backpropagation in Binary Weighted Networks with Group Weight Transformations ("Quantization based model compression serves as high performing and fast ...")