ECRM: Efficient Fault Tolerance for Recommendation Model Training via Erasure Coding

04/05/2021
by Kaige Liu, et al.

Deep-learning-based recommendation models (DLRMs) are widely deployed to serve personalized content to users. DLRMs are large in size due to their use of large embedding tables, and are trained by distributing the model across the memory of tens or hundreds of servers. Server failures are common in such large distributed systems and must be mitigated to enable training to progress. Checkpointing is the primary approach used for fault tolerance in these systems, but it incurs significant training-time overhead both during normal operation and when recovering from failures. As these overheads increase with DLRM size, checkpointing is slated to become an even larger overhead for future DLRMs, which are expected to grow in size. This calls for rethinking fault tolerance in DLRM training. We present ECRM, a DLRM training system that achieves efficient fault tolerance using erasure coding. ECRM chooses which DLRM parameters to encode, correctly and efficiently updates parities, and enables training to proceed without any pauses, while maintaining consistency of the recovered parameters. We implement ECRM atop XDL, an open-source, industrial-scale DLRM training system. Compared to checkpointing, ECRM reduces training-time overhead for large DLRMs by up to 88% and allows training to proceed during recovery. These results show the promise of erasure coding in imparting efficient fault tolerance to training current and future DLRMs.
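The abstract outlines ECRM's core idea: encode the large embedding-table shards across servers and update the parities in lockstep with training, so that a failed server's parameters can be reconstructed without pausing training. Below is a minimal sketch of that idea using a simple sum-based (k+1, k) parity code; the shard layout, the choice of code, and all function names (apply_update, recover) are illustrative assumptions for clarity, not ECRM's actual design or API.

```python
# Illustrative sketch of sum-parity erasure coding over embedding-table shards.
# Assumption: a (K+1, K) code where one parity server stores the element-wise
# sum of K data shards; ECRM's real scheme and interfaces may differ.
import numpy as np

K = 4              # number of data servers holding embedding-table shards
ROWS, DIM = 8, 3   # tiny embedding shard: 8 rows, 3-dimensional embeddings

# Each of the K servers holds one shard of the embedding table.
shards = [np.random.randn(ROWS, DIM) for _ in range(K)]

# A parity server stores the element-wise sum of all shards.
parity = np.sum(shards, axis=0)

def apply_update(server, row, delta):
    """Apply a training update to one embedding row and keep the parity
    consistent by applying the same delta to the parity shard."""
    shards[server][row] += delta
    parity[row] += delta

def recover(failed_server):
    """Reconstruct a failed shard as parity minus the sum of surviving shards."""
    surviving = sum(s for i, s in enumerate(shards) if i != failed_server)
    return parity - surviving

# Simulate a few training updates, then a failure of server 2.
apply_update(0, row=3, delta=0.1 * np.random.randn(DIM))
apply_update(2, row=5, delta=0.1 * np.random.randn(DIM))

lost = shards[2].copy()
recovered = recover(failed_server=2)
assert np.allclose(recovered, lost)  # recovery reflects the latest updates
```

This sketch only illustrates the parity-update and recovery arithmetic; a real system like ECRM would additionally decide which parameters are worth encoding (the large embedding tables rather than the small dense layers) and keep in-flight updates consistent during recovery, as described in the abstract.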


