Towards Crowdsourced Training of Large Neural Networks using Decentralized Mixture-of-Experts

07/11/2022
by   Max Ryabinin, et al.

Many recent breakthroughs in deep learning were achieved by training increasingly larger models on massive datasets. However, training such models can be prohibitively expensive. For instance, the cluster used to train GPT-3 costs over $250 million. As a result, most researchers cannot afford to train state-of-the-art models and contribute to their development. Hypothetically, a researcher could crowdsource the training of large neural networks with thousands of regular PCs provided by volunteers. The raw computing power of a hundred thousand $2500 desktops dwarfs that of a $250M server pod, but one cannot utilize that power efficiently with conventional distributed training methods. In this work, we propose Learning@home: a novel neural network training paradigm designed to handle large numbers of poorly connected participants. We analyze the performance, reliability, and architectural constraints of this paradigm and compare it against existing distributed training techniques.
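To make the mixture-of-experts idea named in the title concrete, below is a minimal, single-machine sketch of a top-k gated MoE layer in PyTorch. The class name, expert sizes, and routing loop are illustrative assumptions, not the paper's decentralized implementation, which places experts on separate volunteer machines and looks them up over the network rather than holding them in one module list.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixtureOfExperts(nn.Module):
    """Toy top-k gated mixture-of-experts layer (single-machine illustration)."""

    def __init__(self, dim: int, num_experts: int = 8, k: int = 2):
        super().__init__()
        # Each expert is a small feed-forward block; in a decentralized setting
        # these would live on different volunteer nodes instead of one ModuleList.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.ReLU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )
        self.gate = nn.Linear(dim, num_experts)  # routing network
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [batch, dim]. Route each example to its top-k experts and mix outputs.
        scores = self.gate(x)                              # [batch, num_experts]
        topk_scores, topk_idx = scores.topk(self.k, dim=-1)
        weights = F.softmax(topk_scores, dim=-1)           # [batch, k]
        out = torch.zeros_like(x)
        for slot in range(self.k):
            idx = topk_idx[:, slot]
            w = weights[:, slot].unsqueeze(-1)
            # Naive dispatch loop, kept simple for clarity.
            for e in idx.unique():
                mask = idx == e
                out[mask] += w[mask] * self.experts[int(e)](x[mask])
        return out


if __name__ == "__main__":
    layer = MixtureOfExperts(dim=64)
    y = layer(torch.randn(16, 64))
    print(y.shape)  # torch.Size([16, 64])
```

Because only k of the experts run for any given input, total model capacity can grow with the number of experts (or participants) without a proportional increase in per-example compute, which is the property that makes this layer type a natural fit for loosely connected volunteer hardware.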


