Large scale distributed neural network training through online distillation

04/09/2018
by Rohan Anil, et al.

Techniques such as ensembling and distillation promise model quality improvements when paired with almost any base model. However, due to increased test-time cost (for ensembles) and increased complexity of the training pipeline (for distillation), these techniques are challenging to use in industrial settings. In this paper we explore a variant of distillation which is relatively straightforward to use as it does not require a complicated multi-stage setup or many new hyperparameters. Our first claim is that online distillation enables us to use extra parallelism to fit very large datasets about twice as fast. Crucially, we can still speed up training even after we have already reached the point at which additional parallelism provides no benefit for synchronous or asynchronous stochastic gradient descent. Two neural networks trained on disjoint subsets of the data can share knowledge by encouraging each model to agree with the predictions the other model would have made. These predictions can come from a stale version of the other model so they can be safely computed using weights that only rarely get transmitted. Our second claim is that online distillation is a cost-effective way to make the exact predictions of a model dramatically more reproducible. We support our claims using experiments on the Criteo Display Ad Challenge dataset, ImageNet, and the largest to-date dataset used for neural language modeling, containing 6×10^11 tokens and based on the Common Crawl repository of web data.
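To make the agreement mechanism described above concrete, here is a minimal sketch in PyTorch of one worker's training step, assuming two workers that each train on a disjoint data shard and only occasionally exchange checkpoints. The function names, the KL-divergence form of the agreement term, and the distill_weight coefficient are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def codistillation_step(model, stale_peer, optimizer, x, y, distill_weight=0.5):
    # `model` is this worker's network; `stale_peer` is an occasionally
    # refreshed copy of the other worker's weights (the 0.5 weighting is an
    # illustrative assumption, not a value from the paper).
    logits = model(x)
    with torch.no_grad():
        peer_logits = stale_peer(x)  # prediction the peer "would have made"

    # Standard supervised loss on this worker's disjoint data shard.
    ce_loss = F.cross_entropy(logits, y)

    # Agreement term: encourage matching the peer's soft predictions.
    distill_loss = F.kl_div(
        F.log_softmax(logits, dim=-1),
        F.softmax(peer_logits, dim=-1),
        reduction="batchmean",
    )

    loss = ce_loss + distill_weight * distill_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def refresh_peer(stale_peer, latest_peer_state_dict):
    # Peer weights only need to be transmitted rarely, so a stale
    # checkpoint suffices and communication overhead stays low.
    stale_peer.load_state_dict(latest_peer_state_dict)
```

In this reading of the method, each worker runs such a step on its own shard while checkpoints are exchanged only infrequently, so the agreement signal adds little communication cost on top of ordinary stochastic gradient descent.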


