MTrainS: Improving DLRM training efficiency using heterogeneous memories

04/19/2023
by   Hiwot Tadese Kassa, et al.

Recommendation models are very large, requiring terabytes (TB) of memory during training. In pursuit of better quality, model size and complexity grow over time, which requires additional training data to avoid overfitting. This model growth demands significant data center resources, so training efficiency is becoming considerably more important to keep data center power demand manageable. In Deep Learning Recommendation Models (DLRM), sparse features capturing categorical inputs through embedding tables are the major contributors to model size and require high memory bandwidth. In this paper, we study the bandwidth requirement and locality of embedding tables in real-world deployed models. We observe that the bandwidth requirement is not uniform across different tables and that embedding tables show high temporal locality. We then design MTrainS, which leverages heterogeneous memory, including byte- and block-addressable Storage Class Memory (SCM), for DLRM hierarchically. MTrainS allows for higher memory capacity per node and increases training efficiency by lowering the need to scale out to multiple hosts in memory-capacity-bound use cases. By optimizing the platform memory hierarchy, we reduce the number of nodes needed for training by 4-8X, saving power and training cost while meeting our target training performance.
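The idea of placing embedding tables across a memory hierarchy based on their (non-uniform) bandwidth demand can be sketched as follows. This is a minimal, hypothetical illustration: the tier names, capacities, table statistics, and the greedy hottest-first policy are assumptions for exposition, not the actual MTrainS algorithm.

```python
# Hypothetical sketch: assign embedding tables to memory tiers by bandwidth
# demand, in the spirit of MTrainS's hierarchical use of heterogeneous memory.
# Tiers, capacities, and the greedy policy are illustrative assumptions.

def place_tables(tables, tiers):
    """Greedily assign the hottest tables to the fastest tier with room.

    tables: list of (name, size_gb, accesses_per_step) tuples
    tiers:  list of (tier_name, capacity_gb), ordered fastest -> slowest
    """
    placement = {}
    remaining = {name: cap for name, cap in tiers}
    # Place tables with the highest bandwidth demand first.
    for name, size, _accesses in sorted(tables, key=lambda t: -t[2]):
        for tier_name, _cap in tiers:
            if remaining[tier_name] >= size:
                placement[name] = tier_name
                remaining[tier_name] -= size
                break
        else:
            raise MemoryError(f"table {name} does not fit in any tier")
    return placement

# Made-up table and tier parameters for illustration only.
tables = [
    ("user_id",  400, 9_000_000),  # hot and large
    ("ad_id",    120, 5_000_000),
    ("page_cat",   8,   200_000),  # cold and small
    ("geo",        2,    50_000),
]
tiers = [("HBM", 64), ("DRAM", 512), ("SCM", 2048)]  # GB, fastest first

print(place_tables(tables, tiers))
```

Because capacity on the fast tiers is scarce, a hot-but-huge table can still spill to a slower tier (here `user_id` lands in DRAM and `ad_id` in SCM), which is why per-table locality and bandwidth profiling, as studied in the paper, matter for placement quality.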

Related research

03/20/2020 · Deep Learning Training in Facebook Data Centers: Design of Scale-up and Scale-out Systems
Large-scale training is important to ensure high performance and accurac...

10/17/2020 · Check-N-Run: A Checkpointing System for Training Recommendation Models
Checkpoints play an important role in training recommendation systems at...

01/25/2022 · RecShard: Statistical Feature-Based Memory Optimization for Industry-Scale Neural Recommendation
We propose RecShard, a fine-grained embedding table (EMB) partitioning a...

10/21/2021 · Supporting Massive DLRM Inference Through Software Defined Memory
Deep Learning Recommendation Models (DLRM) are widespread, account for a...

07/16/2021 · Look Ahead ORAM: Obfuscating Addresses in Recommendation Model Training
In the cloud computing era, data privacy is a critical concern. Memory a...

01/25/2021 · TT-Rec: Tensor Train Compression for Deep Learning Recommendation Models
The memory capacity of embedding tables in deep learning recommendation ...

02/18/2022 · iMARS: An In-Memory-Computing Architecture for Recommendation Systems
Recommendation systems (RecSys) suggest items to users by predicting the...
