MIND: In-Network Memory Management for Disaggregated Data Centers

07/01/2021
by Seung-seob Lee, et al.

Memory-compute disaggregation promises transparent elasticity, high utilization, and balanced resource usage in data centers by physically separating memory and compute into network-attached resource "blades". However, existing designs achieve performance at the cost of resource elasticity, restricting memory sharing to a single compute blade to avoid costly memory coherence traffic over the network. In this work, we show that emerging programmable network switches can enable an efficient shared memory abstraction for disaggregated architectures by placing memory management logic in the network fabric. We find that centralizing memory management in the network permits a bandwidth- and latency-efficient realization of in-network cache coherence protocols, while programmable switch ASICs support other memory management logic at line rate. We realize these insights in MIND, an in-network memory management unit for rack-scale memory disaggregation. MIND enables transparent resource elasticity while matching the performance of prior memory disaggregation proposals on real-world workloads.
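To make the idea of an in-network coherence directory concrete, the sketch below shows a toy directory-based protocol in Python. It is only an illustration under our own assumptions (per-region tracking, hypothetical `Directory`, `handle_read`, and `handle_write` names); MIND's actual protocol runs in the switch data plane and is not Python code.

```python
# Illustrative sketch of a directory-based coherence tracker of the kind a
# programmable switch could keep per memory region: it decides which compute
# blades must be invalidated or downgraded before a request is granted.
# All names and granularities here are hypothetical, not taken from MIND.

from dataclasses import dataclass, field
from typing import Dict, List, Optional, Set


@dataclass
class DirectoryEntry:
    """Coherence state for one memory region (e.g., a page)."""
    sharers: Set[int] = field(default_factory=set)  # blades holding read copies
    owner: Optional[int] = None                     # blade holding a writable copy


class Directory:
    """Per-region coherence directory, conceptually held in the switch."""

    def __init__(self) -> None:
        self.entries: Dict[int, DirectoryEntry] = {}

    def _entry(self, region: int) -> DirectoryEntry:
        return self.entries.setdefault(region, DirectoryEntry())

    def handle_read(self, region: int, blade: int) -> List[int]:
        """Grant a read copy; a current exclusive owner is downgraded to sharer."""
        e = self._entry(region)
        downgrade = [e.owner] if e.owner is not None and e.owner != blade else []
        if downgrade:
            e.sharers.add(e.owner)
            e.owner = None
        e.sharers.add(blade)
        return downgrade  # blades to notify before the switch replies

    def handle_write(self, region: int, blade: int) -> List[int]:
        """Grant exclusive ownership; every other copy must be invalidated."""
        e = self._entry(region)
        invalidate = [b for b in e.sharers if b != blade]
        if e.owner is not None and e.owner != blade:
            invalidate.append(e.owner)
        e.sharers.clear()
        e.owner = blade
        return invalidate


if __name__ == "__main__":
    d = Directory()
    print(d.handle_read(region=0, blade=1))   # [] -- no other copies exist
    print(d.handle_read(region=0, blade=2))   # [] -- blades 1 and 2 now share
    print(d.handle_write(region=0, blade=1))  # [2] -- blade 2 must be invalidated
```

Centralizing this state at the switch means every memory request already traverses the directory on its network path, which is why the paper argues coherence can be enforced without extra round trips between blades.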


