MTJ-Based Hardware Synapse Design for Quantized Deep Neural Networks

by Tzofnat Greenberg Toledo, et al.

Quantized neural networks (QNNs) are being actively researched as a solution to the computational complexity and memory intensity of deep neural networks. This has sparked efforts to develop algorithms that support both inference and training with quantized weights and activation values without sacrificing accuracy. A recent example is the GXNOR framework for stochastic training of ternary and binary neural networks. In this paper, we introduce a novel hardware synapse circuit that uses magnetic tunnel junction (MTJ) devices to support GXNOR training. Our solution enables processing near memory (PNM) of QNNs, and can therefore further reduce data movement into and out of memory. We simulated MTJ-based stochastic training of a ternary neural network (TNN) over the MNIST and SVHN datasets and achieved an accuracy of 98.61
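The abstract's core idea, stochastic training of ternary weights where switching probability plays the role of an MTJ device's stochastic switching, can be sketched as follows. This is a minimal illustration, not the paper's circuit or the exact GXNOR algorithm: the function name `gxnor_style_update` and the specific clipping scheme are assumptions for the sketch.

```python
import numpy as np

def gxnor_style_update(w, grad, lr, rng):
    """Sketch of a GXNOR-style stochastic ternary weight update.

    w:    ternary weights in {-1, 0, +1}
    grad: loss gradients w.r.t. the weights
    lr:   learning rate scaling the switching probability
    rng:  numpy random Generator supplying the stochasticity

    Each weight steps one state toward -sign(grad) with probability
    proportional to the clipped update magnitude, loosely mimicking
    the probabilistic switching of an MTJ synapse.
    """
    p = np.clip(np.abs(lr * grad), 0.0, 1.0)   # switching probability per weight
    step = -np.sign(grad)                      # direction of the discrete step
    switch = rng.random(w.shape) < p           # sample stochastic switching events
    return np.where(switch, np.clip(w + step, -1, 1), w)
```

With a large enough `lr * grad` the switch fires deterministically, while a zero gradient leaves the weight untouched; all outputs stay in the ternary set {-1, 0, +1}.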



