A Memristive Neural Network Computing Engine using CMOS-Compatible Charge-Trap-Transistor (CTT)

09/19/2017
by Yuan Du, et al.

A memristive neural network computing engine based on the CMOS-compatible charge-trap transistor (CTT) is proposed in this paper. CTT devices are used as analog multipliers; compared to digital multipliers, CTT-based analog multipliers achieve a dramatic (>100x) reduction in area and power. The proposed memristive computing engine is composed of a scalable CTT multiplier array and energy-efficient analog-digital interfaces. By implementing a sequential analog fabric (SAF), the engine's mixed-signal interfaces are simplified and the hardware overhead remains constant regardless of the array size. A proof-of-concept 784-by-784 CTT computing engine is implemented in TSMC 28nm CMOS technology and occupies 0.68 mm². It achieves 69.9 TOPS at a 500 MHz clock frequency while consuming 14.8 mW. As an example, we apply this computing engine to a classic pattern recognition problem, classifying handwritten digits from the MNIST database, and obtain performance comparable to state-of-the-art fully connected neural networks using 8-bit fixed-point resolution.
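The core operation the CTT array performs, a multiply-accumulate over 8-bit fixed-point weights, can be sketched in software. This is a minimal illustrative model, not the authors' implementation: the function names (`quantize`, `matvec`) and the Q1.7 fixed-point layout are assumptions chosen to match the abstract's "8-bit fixed-point resolution".

```python
# Hypothetical software model of the CTT crossbar's multiply-accumulate,
# assuming signed 8-bit Q1.7 fixed point (1 sign bit, 7 fractional bits).
# Names and format are illustrative; the paper does not specify them.

def quantize(w, frac_bits=7):
    """Round a real weight to signed 8-bit fixed point; each CTT cell
    would store one such value as a programmed threshold shift."""
    q = round(w * (1 << frac_bits))
    return max(-128, min(127, q))

def matvec(weights_q, x_q, frac_bits=7):
    """Each row's dot product models the current summed along one column
    of analog CTT multipliers, then rescaled back to Q1.7."""
    out = []
    for row in weights_q:
        acc = sum(wq * xq for wq, xq in zip(row, x_q))  # integer MAC
        out.append(acc >> frac_bits)  # rescale the product to Q1.7
    return out

# Toy 2x3 layer on a 3-element input vector
W = [[0.5, -0.25, 0.125], [1.0 - 1 / 128, 0.0, -0.5]]
Wq = [[quantize(w) for w in row] for row in W]
xq = [quantize(v) for v in [0.5, 0.5, -0.25]]
print(matvec(Wq, xq))
```

In the real engine this dot product happens in the analog domain, so its cost is set by the array and the shared interfaces rather than by per-element digital multipliers.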
