AxoNN: An asynchronous, message-driven parallel framework for extreme-scale deep learning

10/25/2021
by   Siddharth Singh, et al.

In the last few years, the memory requirements to train state-of-the-art neural networks have far exceeded the DRAM capacities of modern hardware accelerators. This has necessitated the development of efficient algorithms to train these neural networks in parallel on large-scale GPU-based clusters. Since computation is relatively inexpensive on modern GPUs, designing and implementing extremely efficient communication in these parallel training algorithms is critical for extracting the maximum performance. This paper presents AxoNN, a parallel deep learning framework that exploits asynchrony and message-driven execution to schedule neural network operations on each GPU, thereby reducing GPU idle time and maximizing hardware efficiency. By using the CPU memory as a scratch space for offloading data periodically during training, AxoNN is able to reduce GPU memory consumption by four times. This allows us to increase the number of parameters per GPU by four times, thus reducing the amount of communication and increasing performance by over 13%. When tested against large transformer models with 12-100 billion parameters on 48-384 NVIDIA Tesla V100 GPUs, AxoNN achieves a per-GPU throughput of 49.4-54.78% of theoretical peak and reduces the training time by 22-37 days (15-25% speedup) as compared to the state-of-the-art.
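The CPU-offloading idea mentioned in the abstract can be illustrated with a short sketch. The snippet below is not AxoNN's actual API; it is a minimal PyTorch illustration, with hypothetical helper names (offload_to_cpu, fetch_to_gpu, offload_stream), of how a tensor can be staged to pinned CPU memory on a side CUDA stream so the transfer overlaps with GPU compute and the data is brought back before it is needed again.

```python
# Minimal, hypothetical sketch of CPU offloading with asynchronous copies.
# Helper names are illustrative only and are not AxoNN's API.
# Requires a CUDA-capable machine.
import torch

offload_stream = torch.cuda.Stream()  # dedicated stream for D2H/H2D copies


def offload_to_cpu(gpu_tensor):
    """Copy a GPU tensor into pinned CPU memory without blocking compute."""
    cpu_buf = torch.empty(gpu_tensor.shape, dtype=gpu_tensor.dtype, pin_memory=True)
    # Start the copy only after the kernels that produced gpu_tensor finish.
    offload_stream.wait_stream(torch.cuda.current_stream())
    with torch.cuda.stream(offload_stream):
        cpu_buf.copy_(gpu_tensor, non_blocking=True)
    # Tell the caching allocator the tensor is still in use on offload_stream,
    # so its GPU memory is not recycled before the copy completes.
    gpu_tensor.record_stream(offload_stream)
    return cpu_buf


def fetch_to_gpu(cpu_buf, device="cuda"):
    """Bring an offloaded tensor back to the GPU before it is needed again."""
    with torch.cuda.stream(offload_stream):
        gpu_tensor = cpu_buf.to(device, non_blocking=True)
    # Compute on the default stream must wait until the prefetch has landed.
    torch.cuda.current_stream().wait_stream(offload_stream)
    return gpu_tensor


if __name__ == "__main__" and torch.cuda.is_available():
    x = torch.randn(1024, 1024, device="cuda")
    x_cpu = offload_to_cpu(x)
    del x                      # GPU memory can now be reused for other work
    y = fetch_to_gpu(x_cpu)    # prefetch back just before it is consumed
    print(y.shape, y.device)
```

Overlapping these transfers with computation is what allows offloading to free GPU memory without stalling training; in the paper's design, such offloads are coordinated by AxoNN's asynchronous, message-driven runtime rather than by user code as in this sketch.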

