A Time-domain Analog Weighted-sum Calculation Model for Extremely Low Power VLSI Implementation of Multi-layer Neural Networks

10/16/2018
by Quan Wang, et al.

A time-domain analog weighted-sum calculation model is proposed based on an integrate-and-fire-type spiking neuron model. The proposed model is applied to multi-layer feedforward networks, in which weighted summations with positive and negative weights are performed separately in each layer, and the summation results are fed into the next layer without a subtraction operation. We also propose very-large-scale integration (VLSI) circuits to implement the proposed model. Unlike conventional analog voltage- or current-mode circuits, the time-domain analog circuits exploit the transient charging/discharging of capacitors. Because the circuits can be designed without operational amplifiers, they operate with extremely low power consumption; however, they require very-high-resistance devices on the order of GΩ. We designed a proof-of-concept (PoC) CMOS VLSI chip to verify the weighted-sum operation with identical weights and evaluated it by post-layout circuit simulation in a 250-nm fabrication technology. The high resistance was realized by operating MOS transistors in the subthreshold region. Simulation results showed an energy efficiency of 290 TOPS/W for the weighted-sum calculation, more than an order of magnitude higher than that of state-of-the-art digital AI processors, even though the minimum interconnect width in the PoC chip was several times larger than in such digital processors. If state-of-the-art VLSI technology were used to implement the proposed model, an energy efficiency exceeding 1,000 TOPS/W would be possible. For practical applications, the development of emerging analog memory devices such as ferroelectric-gate FETs is necessary.
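To make the time-domain idea concrete, the sketch below models an idealized integrate-and-fire neuron whose membrane potential ramps with slope w_i after each input spike arrives at time t_i; the output spike time (the threshold crossing) then encodes the weighted sum of the input timings. This is a minimal illustrative model under those assumptions, not the paper's actual circuit, and the function name and parameters are hypothetical.

```python
import numpy as np

def time_domain_weighted_sum(spike_times, weights, threshold, dt=1e-3, t_max=10.0):
    """Idealized time-domain weighted sum (illustrative, not the paper's circuit).

    Each input spike at t_i contributes a linear ramp w_i * (t - t_i) to the
    membrane potential.  The first time the potential reaches `threshold` is
    returned as the output spike time, which encodes the weighted sum of the
    input timings.  Returns None if the threshold is never reached.
    """
    t = np.arange(0.0, t_max, dt)
    v = np.zeros_like(t)
    for ti, wi in zip(spike_times, weights):
        # Ramp starts only after the input spike arrives (zero before t_i).
        v += wi * np.clip(t - ti, 0.0, None)
    crossed = np.nonzero(v >= threshold)[0]
    return t[crossed[0]] if crossed.size else None
```

With identical weights, as in the PoC chip, the crossing time shifts linearly with the mean of the input spike times, so delaying all inputs by some amount delays the output spike by the same amount.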

