Failure Tolerant Training with Persistent Memory Disaggregation over CXL

01/14/2023
by   Miryeong Kwon, et al.

This paper proposes TRAININGCXL, which can efficiently process large-scale recommendation datasets in a pool of disaggregated memory while making training fault tolerant with low overhead. To this end, i) we integrate persistent memory (PMEM) and the GPU into a cache-coherent domain as CXL Type-2 devices. Enabling CXL allows PMEM to be placed directly in the GPU's memory hierarchy, so that the GPU can access PMEM without software intervention. TRAININGCXL introduces computing and checkpointing logic near the CXL controller, thereby processing training data and managing persistence in an active manner. Considering PMEM's vulnerability, ii) we exploit the unique characteristics of recommendation models and take the checkpointing overhead off the critical path of their training. Lastly, iii) TRAININGCXL employs an advanced checkpointing technique that relaxes the update sequence of model parameters and embeddings across training batches. The evaluation shows that TRAININGCXL achieves a 5.2x training performance improvement and 76% energy savings over modern PMEM-based recommendation systems.
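The abstract's second and third ideas (moving checkpointing off the training critical path, and letting parameters and embeddings be checkpointed at different batch positions) can be illustrated with a minimal sketch. This is not TRAININGCXL's actual interface: `RelaxedCheckpointer` and all names are hypothetical, and persistence to PMEM is simulated with an in-memory dict. The trainer enqueues snapshots and continues immediately; a background worker "persists" them, so the saved batch index of the embeddings may run ahead of that of the dense parameters.

```python
import threading
import queue

class RelaxedCheckpointer:
    """Illustrative sketch: checkpoint state off the training critical
    path, allowing the saved batch indices of different state groups
    (e.g. embeddings vs. dense parameters) to diverge across batches."""

    def __init__(self):
        self._q = queue.Queue()
        self.saved = {}  # name -> (batch_idx, snapshot); stands in for PMEM
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def submit(self, name, batch_idx, state):
        # Non-blocking: the trainer enqueues a copy and moves on.
        self._q.put((name, batch_idx, dict(state)))

    def _run(self):
        while True:
            item = self._q.get()
            if item is None:
                break
            name, batch_idx, snap = item
            self.saved[name] = (batch_idx, snap)  # "persist" (simulated)

    def drain(self):
        # Flush pending snapshots and stop the worker.
        self._q.put(None)
        self._worker.join()

# Toy training loop: embeddings are checkpointed every batch, dense
# parameters only every other batch, so their saved versions diverge.
ckpt = RelaxedCheckpointer()
params, emb = {"w": 0.0}, {"e": 0.0}
for batch in range(4):
    params["w"] += 1.0   # stand-in for a dense-parameter update
    emb["e"] += 0.5      # stand-in for an embedding update
    ckpt.submit("embeddings", batch, emb)
    if batch % 2 == 0:
        ckpt.submit("params", batch, params)
ckpt.drain()
print(ckpt.saved["embeddings"][0], ckpt.saved["params"][0])  # prints: 3 2
```

After recovery, a real system must reconcile the two versions; the paper's point is that recommendation models tolerate this relaxed consistency, which is what lets checkpointing leave the critical path.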


Related research

08/08/2022  A Frequency-aware Software Cache for Large Recommendation System Embeddings
            Deep learning recommendation models (DLRMs) have been widely applied in ...

04/22/2019  Pangolin: A Fault-Tolerant Persistent Memory Programming Library
            Non-volatile main memory (NVMM) allows programmers to build complex, per...

05/10/2022  Training Personalized Recommendation Systems from (GPU) Scratch: Look Forward not Backwards
            Personalized recommendation models (RecSys) are one of the most popular ...

11/05/2020  CPR: Understanding and Improving Failure Tolerant Training for Deep Learning Recommendation with Partial Recovery
            The paper proposes and optimizes a partial recovery training system, CPR...

06/27/2021  Revamping Storage Class Memory With Hardware Automated Memory-Over-Storage Solution
            Large persistent memories such as NVDIMM have been perceived as a disrup...

03/16/2022  ORCA: A Network and Architecture Co-design for Offloading us-scale Datacenter Applications
            Responding to the "datacenter tax" and "killer microseconds" problems fo...

10/29/2022  Fast Efficient Fixed-Size Memory Pool: No Loops and No Overhead
            In this paper, we examine a ready-to-use, robust, and computationally fa...
