Methodologies, Workloads, and Tools for Processing-in-Memory: Enabling the Adoption of Data-Centric Architectures

05/29/2022
by Geraldo F. Oliveira, et al.

The increasing prevalence and growing size of data in modern applications have led to high costs for computation in traditional processor-centric computing systems. Moving large volumes of data between memory devices (e.g., DRAM) and computing elements (e.g., CPUs, GPUs) across bandwidth-limited memory channels can consume more than 60% of the total energy in modern systems. To mitigate these costs, the processing-in-memory (PIM) paradigm moves computation closer to where the data resides, reducing (and in some cases eliminating) the need to move data between memory and the processor. There are two main approaches to PIM: (1) processing-near-memory (PnM), where PIM logic is added to the same die as memory or to the logic layer of 3D-stacked memory; and (2) processing-using-memory (PuM), which uses the operational principles of memory cells to perform computation. Many works from academia and industry have shown the benefits of PnM and PuM for a wide range of workloads from different domains. However, fully adopting PIM in commercial systems is still very challenging due to the lack of tools and system support for PIM architectures across the computer architecture stack, which includes: (i) workload characterization methodologies and benchmark suites targeting PIM architectures; (ii) frameworks that can facilitate the implementation of complex operations and algorithms using the underlying PIM primitives; (iii) compiler support and compiler optimizations targeting PIM architectures; (iv) operating system support for PIM-aware virtual memory, memory management, data allocation, and data mapping; and (v) efficient data coherence and consistency mechanisms. Our goal in this work is to provide tools and system support for PnM and PuM architectures, aiming to ease the adoption of PIM in current and future systems.
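To make the processing-using-memory idea above more concrete, the sketch below emulates, purely in host-side C, how Ambit-style PuM designs derive bulk bitwise AND and OR from a single majority (MAJ) primitive: in real hardware, MAJ is realized by activating three DRAM rows simultaneously, whereas here 64-bit words merely stand in for rows. The names (pum_maj, pum_and, pum_or, ROW_WORDS) are illustrative placeholders, not part of any real PIM API.

```c
/*
 * Minimal sketch (software emulation only): building bulk bitwise AND/OR
 * from a majority (MAJ) primitive, in the style of Ambit-like PuM designs.
 * Real PuM hardware computes MAJ by activating three DRAM rows at once;
 * in this toy model a "row" is just an array of 64-bit words.
 */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define ROW_WORDS 4  /* a "row" is 4 x 64-bit words in this toy model */

/* Bitwise majority of three rows: each result bit is 1 iff >= 2 input bits are 1. */
static void pum_maj(const uint64_t *a, const uint64_t *b, const uint64_t *c,
                    uint64_t *out) {
    for (int i = 0; i < ROW_WORDS; i++)
        out[i] = (a[i] & b[i]) | (a[i] & c[i]) | (b[i] & c[i]);
}

/* AND(a, b) = MAJ(a, b, all-zeros row). */
static void pum_and(const uint64_t *a, const uint64_t *b, uint64_t *out) {
    const uint64_t zeros[ROW_WORDS] = {0};
    pum_maj(a, b, zeros, out);
}

/* OR(a, b) = MAJ(a, b, all-ones row). */
static void pum_or(const uint64_t *a, const uint64_t *b, uint64_t *out) {
    const uint64_t ones[ROW_WORDS] = {~0ull, ~0ull, ~0ull, ~0ull};
    pum_maj(a, b, ones, out);
}

int main(void) {
    uint64_t a[ROW_WORDS] = {0xF0F0F0F0F0F0F0F0ull, 1, 2, 3};
    uint64_t b[ROW_WORDS] = {0xFF00FF00FF00FF00ull, 3, 2, 1};
    uint64_t r_and[ROW_WORDS], r_or[ROW_WORDS];

    pum_and(a, b, r_and);
    pum_or(a, b, r_or);

    /* Check that the MAJ-based results match ordinary bitwise AND/OR. */
    for (int i = 0; i < ROW_WORDS; i++) {
        assert(r_and[i] == (a[i] & b[i]));
        assert(r_or[i]  == (a[i] | b[i]));
    }
    printf("MAJ-based AND/OR match the expected bitwise results\n");
    return 0;
}
```

Under this kind of MAJ abstraction, richer operations can be composed from bulk AND/OR/NOT without moving rows off the memory chip; primitives of this sort are what the frameworks mentioned in item (ii) above would need to expose to programmers.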


