PIM-DRAM: Accelerating Machine Learning Workloads using Processing in Commodity DRAM

05/08/2021
by Sourjya Roy, et al.

Deep Neural Networks (DNNs) have transformed the field of machine learning and are widely deployed in many applications involving image, video, speech and natural language processing. The increasing compute demands of DNNs have been widely addressed through Graphics Processing Units (GPUs) and specialized accelerators. However, as model sizes grow, these von Neumann architectures require very high memory bandwidth to keep the processing elements utilized, since a majority of the data resides in main memory. Processing in memory has been proposed as a promising solution to the memory wall bottleneck for ML workloads. In this work, we propose a new DRAM-based processing-in-memory (PIM) multiplication primitive coupled with intra-bank accumulation to accelerate matrix-vector operations in ML workloads. The proposed multiplication primitive adds less than 1% area overhead and does not require any change to the DRAM peripherals, and can therefore be easily adopted in commodity DRAM chips. Subsequently, we design a DRAM-based PIM architecture, data mapping scheme and dataflow for executing DNNs within DRAM. System evaluations performed on networks such as AlexNet, VGG16 and ResNet18 show that the proposed architecture, mapping, and dataflow can provide up to 19.5x speedup over an NVIDIA Titan Xp GPU, highlighting the need to overcome the memory bottleneck in future generations of DNN hardware.
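To make the acceleration target concrete, the sketch below is a minimal functional model of a bit-serial matrix-vector product built from bit-plane ANDs followed by shift-and-accumulate, the general style of operation that DRAM bitwise PIM primitives enable. The function name `bit_serial_matvec`, the 8-bit operand widths, and the NumPy modelling are illustrative assumptions for exposition, not the paper's circuit-level multiplication primitive or dataflow.

```python
import numpy as np

def bit_serial_matvec(W, x, w_bits=8, x_bits=8):
    """Functional model (illustrative, not the paper's primitive) of a
    bit-serial matrix-vector product.

    Weights and activations are decomposed into bit-planes; a bitwise AND
    between bit-planes stands in for an in-DRAM multiplication step, and the
    shifted partial sums stand in for intra-bank accumulation.

    W: (rows, cols) unsigned integer weight matrix.
    x: (cols,) unsigned integer input vector.
    """
    acc = np.zeros(W.shape[0], dtype=np.int64)
    for i in range(w_bits):
        w_plane = (W >> i) & 1                          # weight bit-plane i
        for j in range(x_bits):
            x_plane = (x >> j) & 1                      # activation bit-plane j
            partial = (w_plane & x_plane).sum(axis=1)   # AND + per-row popcount
            acc += partial.astype(np.int64) << (i + j)  # shift-and-accumulate
    return acc

# Quick check against a direct matrix-vector product
rng = np.random.default_rng(0)
W = rng.integers(0, 256, size=(4, 16), dtype=np.uint8)
x = rng.integers(0, 256, size=16, dtype=np.uint8)
assert np.array_equal(bit_serial_matvec(W, x),
                      W.astype(np.int64) @ x.astype(np.int64))
```

The model only shows why bitwise operations plus accumulation suffice for matrix-vector arithmetic; the speedups reported in the paper come from performing these steps inside the DRAM banks, where the data already resides, rather than moving operands to a separate processor.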
