Processing-In-Memory Acceleration of Convolutional Neural Networks for Energy-Efficiency, and Power-Intermittency Resilience

04/16/2019
by   Arman Roohi, et al.

Herein, a bit-wise Convolutional Neural Network (CNN) in-memory accelerator is implemented using Spin-Orbit Torque Magnetic Random Access Memory (SOT-MRAM) computational sub-arrays. It utilizes a novel AND-Accumulation method that significantly reduces energy consumption within convolutional layers and performs various low bit-width CNN inference operations entirely within MRAM. Power-intermittency resilience is also enhanced by retaining the partial state information needed to maintain computational forward progress, which is advantageous for battery-less IoT nodes. Simulation results indicate ∼5.4× higher energy efficiency and a 9× speedup over ReRAM-based acceleration, and ∼9.7× higher energy efficiency with a 13.5× speedup over recent CMOS-only approaches, while maintaining inference accuracy comparable to baseline designs.
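The abstract does not detail the circuit-level scheme, but the arithmetic idea behind AND-Accumulation can be sketched in software: for activations and weights quantized to single bits in {0, 1}, a dot product reduces to a bitwise AND followed by a popcount, which is the kind of operation a bit-wise in-memory sub-array can evaluate in place. The function names and the 1-D convolution framing below are illustrative assumptions, not the paper's implementation.

```python
# Software analogy of AND-Accumulation for 1-bit CNN inference
# (illustrative sketch only; the paper realizes this inside SOT-MRAM
# computational sub-arrays, not in a CPU loop).

def and_accumulate(activation_bits: int, weight_bits: int) -> int:
    """Dot product of two {0,1} bit-vectors packed into ints:
    bitwise AND, then accumulate via popcount."""
    return bin(activation_bits & weight_bits).count("1")

def binary_conv1d(x_bits: int, w_bits: int, n: int, k: int) -> list:
    """Slide a k-bit binary kernel over an n-bit binary input
    (valid mode, MSB-first bit packing)."""
    mask = (1 << k) - 1
    out = []
    for i in range(n - k + 1):
        window = (x_bits >> (n - k - i)) & mask  # extract k-bit window
        out.append(and_accumulate(window, w_bits))
    return out

# Example: input 10110 convolved with kernel 101
# windows 101, 011, 110 -> AND with 101 -> popcounts 2, 1, 1
print(binary_conv1d(0b10110, 0b101, n=5, k=3))  # -> [2, 1, 1]
```

Because both operands are single bits, the multiply-accumulate of a conventional convolution degenerates to this AND-and-count form, which is what makes evaluating it directly inside memory arrays attractive for energy efficiency.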

