Die-Stacked DRAM: Memory, Cache, or MemCache?

09/24/2018
by Mohammad Bakhshalipour, et al.

Die-stacked DRAM is a promising solution for satisfying the ever-increasing memory bandwidth requirements of multi-core processors. Manufacturing technology has enabled stacking several gigabytes of DRAM on the active die, providing orders of magnitude higher bandwidth than conventional DIMM-based DDR memories. Nevertheless, due to its limited capacity, die-stacked DRAM cannot accommodate the entire datasets of modern big-data applications. Prior proposals therefore use it either as a sizable memory-side cache or as part of the software-visible main memory. Cache designs adapt to the dynamic behavior of applications but suffer from tag storage, latency, and bandwidth overheads. Memory designs, on the other hand, eliminate the need for tags and hence provide efficient access to data, but their static nature prevents them from capturing the dynamic behavior of applications. In this work, we make a case for using the die-stacked DRAM partly as main memory and partly as a cache. We observe that modern big-data applications contain many hot pages that receive a large number of accesses. Based on this observation, we propose to use a portion of the die-stacked DRAM as main memory to host hot pages, so that a significant fraction of accesses is served from the high-bandwidth DRAM without the overhead of tag checking, and to manage the rest of the DRAM as a cache that captures the dynamic behavior of applications. In this proposal, a software procedure pre-processes the application to identify its hot pages and asks the OS to map them to the memory portion of the die-stacked DRAM; the cache portion is managed by hardware and caches data allocated in the off-chip memory.
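To make the hot-page selection concrete, the following is a minimal Python sketch of the kind of profiling pass the proposal describes: counting per-page accesses in a trace and picking the hottest pages that fit in the memory portion of the stacked DRAM. The trace format, the 4 KiB page size, and all names (`select_hot_pages`, `mem_portion_bytes`) are illustrative assumptions, not the paper's actual mechanism.

```python
from collections import Counter

PAGE_SIZE = 4096  # assumed 4 KiB pages (illustrative, not from the paper)

def select_hot_pages(access_trace, mem_portion_bytes):
    """Profile a memory-access trace and return the hottest pages that
    fit in the portion of stacked DRAM reserved as OS-visible memory.

    access_trace      -- iterable of byte addresses from a profiling run
    mem_portion_bytes -- capacity of the memory portion of stacked DRAM
    """
    counts = Counter(addr // PAGE_SIZE for addr in access_trace)
    budget = mem_portion_bytes // PAGE_SIZE  # how many pages fit
    # The hottest pages would be handed to the OS to map into the
    # memory portion; everything else stays in off-chip DRAM and is
    # served through the hardware-managed cache portion.
    return [page for page, _ in counts.most_common(budget)]

# Toy usage: reserve two pages of stacked DRAM as main memory.
trace = [0x0000, 0x0004, 0x1008, 0x100C, 0x1010, 0x2000, 0x1004]
print(select_hot_pages(trace, 2 * PAGE_SIZE))  # -> [1, 0]
```

In a full system, this pass would run ahead of execution as the paper's pre-processing step, while the remaining stacked-DRAM capacity is managed by hardware as a conventional tag-checked cache over off-chip memory.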


