A Modern Primer on Processing in Memory

12/05/2020
by Onur Mutlu et al.

Modern computing systems are overwhelmingly designed to move data to computation. This design choice goes directly against at least three key trends in computing that cause performance, scalability, and energy bottlenecks: (1) data access is a key bottleneck, as many important applications are increasingly data-intensive while memory bandwidth and energy do not scale well; (2) energy consumption is a key limiter in almost all computing platforms, especially server and mobile systems; and (3) data movement, especially off-chip to on-chip, is very expensive in terms of bandwidth, energy, and latency, much more so than computation. These trends are felt especially severely in the data-intensive server and energy-constrained mobile systems of today. At the same time, conventional memory technology faces many technology scaling challenges in terms of reliability, energy, and performance. As a result, memory system architects are open to organizing memory in different ways and making it more intelligent, at the expense of higher cost. The emergence of 3D-stacked memory plus logic, the adoption of error-correcting codes inside the latest DRAM chips, the proliferation of different main memory standards and chips specialized for different purposes (e.g., graphics, low power, high bandwidth, low latency), and the necessity of designing new solutions to serious reliability and security issues, such as the RowHammer phenomenon, are evidence of this trend. This chapter discusses recent research that aims to practically enable computation close to data, an approach we call processing-in-memory (PIM). PIM places computation mechanisms in or near where the data is stored (i.e., inside the memory chips, in the logic layer of 3D-stacked memory, or in the memory controllers), so that data movement between the computation units and memory is reduced or eliminated.
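To make the data-movement argument concrete, the following is a minimal sketch in C (not taken from the chapter) contrasting a conventional host-side reduction, which streams every array element across the memory bus, with a hypothetical PIM offload in which the reduction runs next to the data and only the final scalar crosses the bus. The function pim_offload_sum() and its interface are assumptions introduced purely for illustration; it is simulated on the host so the sketch stays runnable and does not correspond to any real PIM API.

/*
 * Sketch: conventional reduction vs. a hypothetical PIM offload.
 * The PIM path is simulated on the host; in a real PIM system the
 * loop inside pim_offload_sum() would execute inside the memory
 * device (e.g., in the logic layer of a 3D-stacked memory).
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Conventional path: the CPU pulls all n elements on-chip to add them,
 * so roughly n * sizeof(uint32_t) bytes cross the memory bus. */
static uint64_t host_sum(const uint32_t *data, size_t n)
{
    uint64_t acc = 0;
    for (size_t i = 0; i < n; i++)
        acc += data[i];   /* one memory access per element (caching aside) */
    return acc;
}

/* Hypothetical PIM path: the computation happens where the data lives,
 * so conceptually only the 8-byte result moves between memory and CPU. */
static uint64_t pim_offload_sum(const uint32_t *data, size_t n)
{
    uint64_t acc = 0;
    for (size_t i = 0; i < n; i++)
        acc += data[i];   /* imagine this loop running near the DRAM arrays */
    return acc;
}

int main(void)
{
    const size_t n = 1u << 20;                 /* 1M elements */
    uint32_t *data = malloc(n * sizeof *data);
    if (!data)
        return 1;
    for (size_t i = 0; i < n; i++)
        data[i] = (uint32_t)i;

    printf("host sum: %llu\n", (unsigned long long)host_sum(data, n));
    printf("PIM  sum: %llu\n", (unsigned long long)pim_offload_sum(data, n));
    printf("bytes moved, host path:            ~%zu\n", n * sizeof *data);
    printf("bytes moved, PIM path (conceptual): ~%zu\n", sizeof(uint64_t));

    free(data);
    return 0;
}

The point of the sketch is the ratio of the last two lines of output: for a simple reduction, offloading the loop to where the data resides shrinks the traffic between memory and the processor from megabytes to a single result word, which is exactly the kind of data movement the chapter argues should be reduced or eliminated.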

Related research

03/10/2019  Processing Data Where It Makes Sense: Enabling In-Memory Computation
  Today's systems are overwhelmingly designed to move data to computation....

05/02/2019  Enabling Practical Processing in and near Memory for Data-Intensive Computing
  Modern computing systems suffer from the dichotomy between computation o...

02/01/2018  Enabling the Adoption of Processing-in-Memory: Challenges, Mechanisms, Future Research Directions
  Poor DRAM technology scaling over the course of many years has caused DR...

06/02/2022  Exploiting Near-Data Processing to Accelerate Time Series Analysis
  Time series analysis is a key technique for extracting and predicting ev...

10/05/2020  NATSA: A Near-Data Processing Accelerator for Time Series Analysis
  Time series analysis is a key technique for extracting and predicting ev...

07/23/2022  Big Memory Servers and Modern Approaches to Disk-Based Computation
  The Big Memory solution is a new computing paradigm facilitated by commo...

05/10/2021  Efficient Error-Correcting-Code Mechanism for High-Throughput Memristive Processing-in-Memory
  Inefficient data transfer between computation and memory inspired emergi...
