Check-N-Run: A Checkpointing System for Training Recommendation Models

10/17/2020
by Assaf Eisenman et al.

Checkpoints play an important role in training recommendation systems at scale. They support many use cases, including failure recovery to ensure rapid training progress, and online training to improve inference prediction accuracy. Checkpoints are typically written to remote, persistent storage. Given the typically large and ever-increasing recommendation model sizes, checkpoint frequency and effectiveness are often bottlenecked by storage write bandwidth and capacity, as well as network bandwidth. We present Check-N-Run, a scalable checkpointing system for training large recommendation models. Check-N-Run uses two primary approaches to address these challenges. First, it applies incremental checkpointing, which tracks and checkpoints only the modified part of the model. On top of that, it leverages quantization techniques to significantly reduce the checkpoint size without degrading training accuracy. These techniques allow Check-N-Run to reduce the required write bandwidth by 6-17x and the required capacity by 2.5-8x on real-world models at Facebook, thereby significantly improving checkpoint capabilities while reducing the total cost of ownership.
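The two ideas from the abstract can be illustrated together: checkpoint only the embedding rows touched since the last checkpoint, and store them with uniform low-bit quantization. The following is a minimal sketch of that combination, not the paper's actual implementation; all function names, the tracking mechanism, and the 8-bit uniform scheme are illustrative assumptions.

```python
import numpy as np

def quantize_uniform(rows, bits=8):
    """Uniformly quantize float rows to unsigned ints, keeping the
    min/scale needed to dequantize (illustrative scheme, not the paper's)."""
    lo, hi = float(rows.min()), float(rows.max())
    levels = (1 << bits) - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    q = np.round((rows - lo) / scale).astype(np.uint8)
    return q, lo, scale

def dequantize_uniform(q, lo, scale):
    """Invert quantize_uniform up to a per-element error of scale/2."""
    return q.astype(np.float32) * scale + lo

def incremental_checkpoint(table, touched_rows, bits=8):
    """Save only the rows modified since the last checkpoint, quantized."""
    idx = np.fromiter(sorted(touched_rows), dtype=np.int64)
    q, lo, scale = quantize_uniform(table[idx], bits)
    return {"idx": idx, "q": q, "lo": lo, "scale": scale}

def apply_checkpoint(table, ckpt):
    """Restore the saved rows into a model replica."""
    table[ckpt["idx"]] = dequantize_uniform(ckpt["q"], ckpt["lo"], ckpt["scale"])

# Usage: a 1000-row embedding table where only 3 rows changed this interval.
rng = np.random.default_rng(0)
table = rng.random((1000, 16), dtype=np.float32)
touched = {3, 42, 777}
ckpt = incremental_checkpoint(table, touched)

restored = np.zeros_like(table)
apply_checkpoint(restored, ckpt)
# Only 3 quantized rows plus indices were "written", instead of 1000 float rows.
```

The write-bandwidth saving comes from both factors multiplying: the fraction of rows touched in the interval and the bits-per-element reduction from quantization, which is how the paper's reported 6-17x bandwidth reduction can exceed either technique alone.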

