Scaling Distributed Deep Learning Workloads beyond the Memory Capacity with KARMA

08/26/2020
by Mohamed Wahib, et al.

The dedicated memory of hardware accelerators can be insufficient to store all weights and/or intermediate states of large deep learning models. Although model parallelism is a viable approach to reduce memory pressure, it requires significant modification of the source code and careful consideration of the algorithm. An alternative solution is to use out-of-core methods instead of, or in addition to, data parallelism. We propose a performance model based on a concurrency analysis of out-of-core training behavior, and derive a strategy that combines layer swapping and redundant recomputing. We achieve an average 1.52x speedup over state-of-the-art out-of-core methods on six different models. We also introduce the first method to address the challenging problem of out-of-core multi-node training, by carefully pipelining gradient exchanges and performing the parameter updates on the host. Our data-parallel out-of-core solution can outperform complex hybrid model parallelism when training large models, e.g. Megatron-LM and Turing-NLG.
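A minimal sketch of the two memory-reduction ideas the abstract combines, assuming a PyTorch setting (illustrative only, not the authors' KARMA implementation): saved activations for the first half of a toy network are swapped to host memory via torch.autograd.graph.save_on_cpu, while the second half avoids storing activations altogether and redundantly recomputes them during the backward pass via torch.utils.checkpoint.checkpoint. Layer sizes and counts are arbitrary.

```python
# Illustrative sketch only (standard PyTorch APIs; not the KARMA code itself).
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

device = "cuda" if torch.cuda.is_available() else "cpu"
layers = nn.ModuleList(nn.Linear(1024, 1024) for _ in range(8)).to(device)
h = torch.randn(16, 1024, device=device)

# (1) Layer swapping: activations saved for backward are kept in (pinned)
#     host memory and copied back to the device only when needed.
with torch.autograd.graph.save_on_cpu(pin_memory=True):
    for layer in layers[:4]:
        h = torch.relu(layer(h))

# (2) Redundant recomputing: activations are not stored at all; they are
#     recomputed on the device during the backward pass.
for layer in layers[4:]:
    h = checkpoint(lambda t, l=layer: torch.relu(l(t)), h, use_reentrant=False)

h.sum().backward()  # backward triggers the swap-ins and the recomputation
```

On top of such swapping and recomputation primitives, the paper's contribution is deciding, via a performance model, which layers to swap and which to recompute, and overlapping the resulting transfers with computation and, in the multi-node case, with pipelined gradient exchanges and host-side parameter updates.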
