
Micro-Batch Streaming: Allowing the Training of DNN Models Using a Large Batch Size on Small Memory Systems

by   DoangJoo Synn, et al.

Deep learning models have grown greatly in size over the past decade. Such models are difficult to train with a large batch size because commodity machines do not have enough memory to hold both the model and a large batch of data. The batch size is a training hyper-parameter whose upper bound is set by the memory that remains on the target machine after the model is loaded, and a smaller batch size usually degrades training performance. This paper proposes a framework called Micro-Batch Streaming (MBS) to address this problem. MBS provides a batch streaming algorithm that splits a batch into micro-batches sized to fit the remaining memory and streams them sequentially to the target machine, while a loss normalization algorithm based on gradient accumulation maintains training performance. The purpose of our method is to allow deep learning models to train using mathematically determined optimal batch sizes that cannot fit into the memory of a target system.
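The core idea described above can be illustrated in a few lines: split a full batch into micro-batches, compute each micro-batch's gradient, and weight each contribution by its share of the full batch so the accumulated gradient equals the full-batch gradient. The sketch below is a minimal illustration using a one-parameter linear model with a mean-squared-error loss; all function names are illustrative assumptions, not the authors' implementation.

```python
def split_into_micro_batches(batch, micro_size):
    """Split a full batch into micro-batches of at most `micro_size` samples."""
    return [batch[i:i + micro_size] for i in range(0, len(batch), micro_size)]

def grad_mse_linear(w, samples):
    """Gradient d/dw of mean((w*x - t)^2) over (x, t) samples."""
    return sum(2 * x * (w * x - t) for x, t in samples) / len(samples)

def streamed_gradient(w, batch, micro_size):
    """Accumulate micro-batch gradients, normalizing each one by its share
    of the full batch (len(micro) / len(batch)); the sum then matches the
    gradient computed on the full batch at once."""
    total = 0.0
    for micro in split_into_micro_batches(batch, micro_size):
        total += grad_mse_linear(w, micro) * (len(micro) / len(batch))
    return total

batch = [(1.0, 2.0), (2.0, 3.0), (3.0, 5.0), (4.0, 7.0), (5.0, 9.0)]
w = 0.5
full = grad_mse_linear(w, batch)
streamed = streamed_gradient(w, batch, micro_size=2)
print(abs(full - streamed) < 1e-9)  # the two gradients agree
```

Because each micro-batch mean is rescaled by its sample count relative to the full batch, peak memory is bounded by the micro-batch size while the update remains mathematically equivalent to a full large-batch step.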
