Out-of-core Training for Extremely Large-Scale Neural Networks With Adaptive Window-Based Scheduling

10/27/2020
by Akio Hayakawa, et al.

While large neural networks demonstrate higher performance in various tasks, training large networks is difficult due to limitations on GPU memory size. We propose a novel out-of-core algorithm that enables faster training of extremely large-scale neural networks whose sizes exceed the allotted GPU memory. Under a given memory budget constraint, our scheduling algorithm locally adapts the timing of memory transfers according to the memory usage of each function, which improves the overlap between computation and memory transfers. Additionally, we apply a virtual addressing technique, commonly used in operating systems, to the training of neural networks with out-of-core execution, which drastically reduces the amount of memory fragmentation caused by frequent memory transfers. With our proposed algorithm, we successfully train ResNet-50 with a batch size of 1440, whose memory footprint exceeds the physical GPU memory, while retaining 55% of the training speed. Our method also substantially outperforms a previous state-of-the-art method: it trains a 1.55x larger network with faster execution. Moreover, we experimentally show that our approach scales to various types of networks.
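The scheduling idea can be illustrated with a small, framework-free sketch. Everything below is a hypothetical illustration, not the paper's implementation: the function schedule_transfers, the FIFO eviction policy, and the toy byte counts are assumptions, and a real out-of-core runtime would issue the prefetches and offloads asynchronously (for example on a dedicated transfer stream) so that they overlap with computation.

from collections import deque

def schedule_transfers(layer_bytes, budget_bytes):
    """Plan host<->device transfers for one forward pass.

    layer_bytes : per-layer memory footprint in bytes
    budget_bytes: GPU memory budget (assumed >= max(layer_bytes))
    Returns, for each executed layer, the layers to prefetch and offload.
    """
    n = len(layer_bytes)
    resident = deque()   # layers currently on the GPU, in fetch order
    used = 0             # bytes currently resident
    next_fetch = 0       # first layer not yet prefetched
    plan = []
    for i in range(n):
        prefetch, offload = [], []
        # Offload finished layers back to host memory to make room.
        while resident and resident[0] < i:
            done = resident.popleft()
            used -= layer_bytes[done]
            offload.append(done)
        # Prefetch upcoming layers while they fit in the budget. The
        # lookahead window is not a fixed length: it adapts to how much
        # memory each upcoming function needs, so small layers allow a
        # deep window and a large layer shrinks it.
        while next_fetch < n and used + layer_bytes[next_fetch] <= budget_bytes:
            used += layer_bytes[next_fetch]
            resident.append(next_fetch)
            prefetch.append(next_fetch)
            next_fetch += 1
        plan.append((i, prefetch, offload))
    return plan

# Toy example: the large middle layer forces the window to shrink.
for step in schedule_transfers([100, 100, 400, 100, 100], budget_bytes=500):
    print("run layer %d, prefetch %s, offload %s" % step)

Because the window end is recomputed per function rather than fixed, the schedule naturally reaches further ahead through runs of small layers, which is the locally adaptive behavior the abstract describes.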
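The virtual addressing point admits a similarly small sketch. The VirtualPool class, the 2 MiB page size, and the tensor names are all assumptions made for illustration; the point is the OS-style indirection itself: tensors keep stable virtual extents while physical GPU memory is a pool of fixed-size pages, so any free page can back any new allocation and the frequent alloc/free churn of out-of-core transfers cannot fragment physical memory.

PAGE = 2 * 1024 * 1024  # assumed page size (2 MiB, illustrative)

class VirtualPool:
    def __init__(self, num_pages):
        self.free_pages = list(range(num_pages))  # physical page indices
        self.page_table = {}                      # tensor id -> physical pages

    def alloc(self, tensor_id, nbytes):
        npages = -(-nbytes // PAGE)  # ceiling division
        if npages > len(self.free_pages):
            raise MemoryError("out of physical pages")
        # Pages need not be physically contiguous; the mapping hides that.
        self.page_table[tensor_id] = [self.free_pages.pop() for _ in range(npages)]

    def free(self, tensor_id):
        self.free_pages.extend(self.page_table.pop(tensor_id))

pool = VirtualPool(num_pages=8)
pool.alloc("conv1_weights", 5 * PAGE)
pool.alloc("activations", 3 * PAGE)
pool.free("conv1_weights")
# The reclaimed pages immediately back a differently sized tensor,
# with no fragmentation despite the mismatched allocation sizes.
pool.alloc("fc_weights", 4 * PAGE)
print(sorted(pool.page_table))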


