Exploiting the DRAM Microarchitecture to Increase Memory-Level Parallelism

05/04/2018
by Yoongu Kim, et al.

This paper summarizes the idea of Subarray-Level Parallelism (SALP) in DRAM, which was published in ISCA 2012, and examines the work's significance and future potential. Modern DRAMs have multiple banks to serve multiple memory requests in parallel. However, when two requests go to the same bank, they have to be served serially, exacerbating the high latency of off-chip memory. Adding more banks to the system to mitigate this problem incurs high system cost. Our goal in this work is to achieve the benefits of increasing the number of banks with a low-cost approach. To this end, we propose three new mechanisms, SALP-1, SALP-2, and MASA (Multitude of Activated Subarrays), to reduce the serialization of different requests that go to the same bank. The key observation exploited by our mechanisms is that a modern DRAM bank is implemented as a collection of subarrays that operate largely independently while sharing few global peripheral structures. Our three proposed mechanisms mitigate the negative impact of bank serialization by overlapping different components of the bank access latencies of multiple requests that go to different subarrays within the same bank. SALP-1 requires no changes to the existing DRAM structure and only needs to reinterpret some of the existing DRAM timing parameters. SALP-2 and MASA require only modest changes (< 0.15% area overhead) to the DRAM peripheral structures, which are much less design constrained than the DRAM core. Our evaluations show that SALP-1, SALP-2, and MASA significantly improve performance for both single-core systems (7%/13%/17%) and multi-core systems (15%/16%/20%), averaged across a wide range of workloads. We also demonstrate that our mechanisms can be combined with application-aware memory request scheduling in multicore systems to further improve performance and fairness.
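The benefit of overlapping latency components can be illustrated with a minimal back-of-the-envelope timing sketch. The code below is not from the paper: the DDR3-like timing values (tRAS, tRP) and the assumption that the full precharge time of one subarray can be hidden behind the activation of another are illustrative only. It merely shows why letting requests to different subarrays of the same bank overlap shortens a string of row-buffer conflicts that a conventional bank would serve strictly one after another.

# Minimal timing sketch (illustrative, not the paper's model): contrasts
# fully serialized same-bank accesses with SALP-style overlap when row
# activations target different subarrays of the same bank.
# Timing parameters are hypothetical DDR3-like values in nanoseconds.

T_RAS = 35            # ACTIVATE -> PRECHARGE minimum (row active time)
T_RP = 15             # PRECHARGE -> next ACTIVATE (precharge time)
T_RC = T_RAS + T_RP   # row cycle: back-to-back ACTIVATEs to one subarray

def serialized_latency(num_requests):
    """Baseline bank: row-buffer conflicts are fully serialized, so each
    request pays a full row cycle before the next one can start."""
    return num_requests * T_RC

def salp_latency(num_requests, overlap=T_RP):
    """SALP-style estimate: requests to *different* subarrays of the same
    bank overlap part of their latency (here, hypothetically, the
    precharge of one subarray with the activation of another)."""
    return num_requests * T_RC - (num_requests - 1) * overlap

for n in (2, 4, 8):
    base, salp = serialized_latency(n), salp_latency(n)
    print(f"{n} conflicting requests: baseline {base} ns, "
          f"with overlap {salp} ns ({100 * (base - salp) / base:.0f}% less)")

With these assumed numbers, each additional conflicting request hides one precharge period, saving roughly 15% of the serialized latency for a pair of requests; the paper's actual gains depend on real DRAM timings and on which latency components each mechanism (SALP-1, SALP-2, or MASA) is able to overlap.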


research
05/02/2018

Reducing DRAM Refresh Overheads with Refresh-Access Parallelism

This article summarizes the idea of "refresh-access parallelism," which ...
research
04/30/2018

High-Performance and Energy-Efficient Memory Scheduler Design for Heterogeneous Systems

When multiple processor cores (CPUs) and a GPU integrated together on th...
research
07/17/2019

CADS: Core-Aware Dynamic Scheduler for Multicore Memory Controllers

Memory controller scheduling is crucial in multicore processors, where D...
research
12/21/2017

Improving DRAM Performance by Parallelizing Refreshes with Accesses

Modern DRAM cells are periodically refreshed to prevent data loss due to...
research
01/23/2022

Cuckoo Trie: Exploiting Memory-Level Parallelism for Efficient DRAM Indexing

We present the Cuckoo Trie, a fast, memory-efficient ordered index struc...
research
08/21/2019

Enabling and Exploiting Partition-Level Parallelism (PALP) in Phase Change Memories

Phase-change memory (PCM) devices have multiple banks to serve memory re...
research
05/12/2023

Venice: Improving Solid-State Drive Parallelism at Low Cost via Conflict-Free Accesses

The performance and capacity of solid-state drives (SSDs) are continuous...
