In-memory Implementation of On-chip Trainable and Scalable ANN for AI/ML Applications

05/19/2020
by   Abhash Kumar, et al.
Processors based on the traditional von Neumann architecture are inefficient in energy and throughput because they separate the processing and memory units, a bottleneck known as the memory wall. The memory wall problem is further exacerbated when real-time implementation of an artificial neural network (ANN), which enables many intelligent applications, demands massive parallelism and frequent data movement between the processing and memory units. One of the most promising approaches to the memory wall problem is to carry out computations inside the memory core itself, which improves memory bandwidth and energy efficiency for computation-intensive workloads. This paper presents an in-memory computing architecture for ANNs that enables artificial intelligence (AI) and machine learning (ML) applications. The proposed architecture uses a deep in-memory architecture built on a standard six-transistor (6T) static random access memory (SRAM) core to implement a multi-layer perceptron. Our novel on-chip training and inference in-memory architecture reduces energy cost and enhances throughput by simultaneously accessing multiple rows of the SRAM array per precharge cycle and by eliminating frequent data transfers. The proposed architecture realizes backpropagation, the keystone of network training, using newly proposed building blocks for weight update, analog multiplication, error calculation, signed analog-to-digital conversion, and other necessary control signals. Trained and tested on the IRIS dataset, the architecture is ≈46× more energy efficient per MAC (multiply-and-accumulate) operation than earlier classifiers.
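For reference, the backpropagation loop that the architecture maps into the SRAM core — a MAC-based forward pass, error calculation, and weight update — can be sketched in software. The network shape, learning rate, and the two toy samples below are illustrative stand-ins, not values taken from the paper:

```python
import math
import random

random.seed(0)

def mac(weights, inputs):
    """Multiply-and-accumulate: the core operation the in-memory
    architecture accelerates by reading multiple SRAM rows per precharge."""
    acc = 0.0
    for w, x in zip(weights, inputs):
        acc += w * x
    return acc

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy multi-layer perceptron: 4 inputs (like the IRIS features),
# 3 hidden neurons, 1 output. Sizes are illustrative only.
n_in, n_hid = 4, 3
W1 = [[random.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_hid)]
W2 = [random.uniform(-0.5, 0.5) for _ in range(n_hid)]
lr = 0.5  # learning rate (assumed, not from the paper)

def forward(x):
    h = [sigmoid(mac(W1[j], x)) for j in range(n_hid)]
    y = sigmoid(mac(W2, h))
    return h, y

def train_step(x, target):
    """One backpropagation update: error calculation followed by weight
    update, mirroring the building blocks named in the abstract."""
    h, y = forward(x)
    delta_out = (y - target) * y * (1.0 - y)  # output-layer error term
    for j in range(n_hid):
        # Hidden-layer error uses the pre-update output weight.
        delta_hid = delta_out * W2[j] * h[j] * (1.0 - h[j])
        W2[j] -= lr * delta_out * h[j]        # weight update (output layer)
        for i in range(n_in):
            W1[j][i] -= lr * delta_hid * x[i]  # weight update (hidden layer)
    return 0.5 * (y - target) ** 2

# Two illustrative samples standing in for two IRIS classes,
# with features scaled into [0, 1] for stable sigmoid activations.
samples = [
    ([0.51, 0.35, 0.14, 0.02], 0.0),
    ([0.67, 0.31, 0.47, 0.15], 1.0),
]
for epoch in range(2000):
    for x, t in samples:
        train_step(x, t)
```

After training, `forward` should place the two samples on opposite sides of the 0.5 decision threshold; in the paper, the MAC, error-calculation, and weight-update steps of this loop are performed in the analog domain inside the SRAM array rather than in software.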
