SliceOut: Training Transformers and CNNs faster while using less memory

07/21/2020
by Pascal Notin, et al.

We demonstrate 10-40% speedups and memory reduction with Wide ResNets, EfficientNets, and Transformer models, with minimal to no loss in accuracy, using SliceOut—a new dropout scheme designed to take advantage of GPU memory layout. By dropping contiguous sets of units at random, our method preserves the regularization properties of dropout while allowing for a more efficient low-level implementation, resulting in training speedups through (1) fast memory access and matrix multiplication of smaller tensors, and (2) memory savings by avoiding allocating memory to zero units in weight gradients and activations. Despite its simplicity, our method is highly effective. We demonstrate its efficacy at scale with Wide ResNets and EfficientNets on CIFAR10/100 and ImageNet, as well as Transformers on the LM1B dataset. These speedups and memory savings in training can lead to CO2 emissions reductions of up to 40% for training large models.
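
For intuition, the core idea can be sketched in a few lines of PyTorch. The snippet below is a hypothetical illustration, not the paper's implementation: the function name `sliceout_linear` and the `keep_frac` argument are assumptions, and a full SliceOut layer would also need to apply the same slice to the inputs of downstream layers so that shapes stay consistent.

```python
import torch

def sliceout_linear(x, weight, bias, keep_frac=0.8, training=True):
    """Hypothetical sketch of SliceOut applied to one linear layer.

    Standard dropout zeroes random units but still multiplies by the
    full-size weight tensor. Here we instead keep one contiguous slice
    of output units, so the matmul itself runs on a smaller tensor.
    """
    if not training:
        # Eval mode: use the full layer, as with standard dropout.
        return x @ weight.t() + bias

    out_features = weight.shape[0]
    keep = max(1, int(round(out_features * keep_frac)))
    # Choose a random contiguous block of units to keep.
    start = int(torch.randint(0, out_features - keep + 1, (1,)))
    w = weight[start:start + keep]   # contiguous view, no copy
    b = bias[start:start + keep]
    # Inverted-dropout rescaling keeps activation statistics
    # comparable between training and evaluation.
    return (x @ w.t() + b) / keep_frac
```

Because the kept units form one contiguous block, the slice is a cheap view into GPU memory, and both the forward matmul and the corresponding activation and weight-gradient buffers shrink proportionally, which is where the speed and memory gains described above come from.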


