
Distributed Deep Learning in Open Collaborations

06/18/2021
by Michael Diskin, et al.

Modern deep learning applications require increasingly more compute to train state-of-the-art models. To address this demand, large corporations and institutions use dedicated High-Performance Computing clusters, whose construction and maintenance are both environmentally costly and well beyond the budget of most organizations. As a result, some research directions become the exclusive domain of a few large industrial and even fewer academic actors. To alleviate this disparity, smaller groups may pool their computational resources and run collaborative experiments that benefit all participants. This paradigm, known as grid- or volunteer computing, has seen successful applications in numerous scientific areas. However, using this approach for machine learning is difficult due to high latency, asymmetric bandwidth, and several challenges unique to volunteer computing. In this work, we carefully analyze these constraints and propose a novel algorithmic framework designed specifically for collaborative training. We demonstrate the effectiveness of our approach for SwAV and ALBERT pretraining in realistic conditions and achieve performance comparable to traditional setups at a fraction of the cost. Finally, we provide a detailed report of successful collaborative language model pretraining with 40 participants.
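
The key mechanism the abstract alludes to, peers accumulating gradients until the collaboration as a whole reaches a target batch size and only then applying a shared update, can be illustrated with a short simulation. The sketch below is hypothetical: `Peer`, `global_step`, and the batch-size constants are illustrative names and values, not the paper's API. It simulates two volunteers in one process and averages gradients uniformly, whereas the actual framework additionally handles contribution weighting, fault tolerance, and peers joining or leaving mid-run.

```python
import torch
import torch.nn as nn

TARGET_BATCH_SIZE = 256   # assumed global samples per collaborative step
LOCAL_BATCH_SIZE = 32     # microbatch each volunteer can fit on its hardware

class Peer:
    """One volunteer: a model replica that accumulates gradients locally."""
    def __init__(self, model):
        self.model = model
        self.samples_accumulated = 0

    def local_step(self, x, y, loss_fn):
        # backward() sums into .grad, so repeated calls accumulate gradients
        loss = loss_fn(self.model(x), y)
        loss.backward()
        self.samples_accumulated += x.shape[0]

def global_step(peers, optimizers):
    # Average accumulated gradients across active peers: a stand-in for the
    # fault-tolerant all-reduce a real collaboration would run over the network.
    for group in zip(*(p.model.parameters() for p in peers)):
        mean_grad = torch.stack([param.grad for param in group]).mean(dim=0)
        for param in group:
            param.grad = mean_grad.clone()
    for peer, opt in zip(peers, optimizers):
        opt.step()
        opt.zero_grad()
        peer.samples_accumulated = 0

# Simulate two volunteers starting from identical weights.
torch.manual_seed(0)
reference = nn.Linear(8, 1)
peers = [Peer(nn.Linear(8, 1)) for _ in range(2)]
for peer in peers:
    peer.model.load_state_dict(reference.state_dict())
optimizers = [torch.optim.SGD(p.model.parameters(), lr=0.1) for p in peers]
loss_fn = nn.MSELoss()

# Peers contribute microbatches until the collaboration as a whole has
# processed TARGET_BATCH_SIZE samples, then apply one shared optimizer step.
while sum(p.samples_accumulated for p in peers) < TARGET_BATCH_SIZE:
    for peer in peers:
        x = torch.randn(LOCAL_BATCH_SIZE, 8)
        y = torch.randn(LOCAL_BATCH_SIZE, 1)
        peer.local_step(x, y, loss_fn)
global_step(peers, optimizers)
```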

Related Research
04/15/2021 · How to Train BERT with an Academic Budget
While large language models à la BERT are used ubiquitously in NLP, pret...

02/10/2020 · Learning@home: Crowdsourced Training of Large Neural Networks using Decentralized Mixture-of-Experts
Many recent breakthroughs in deep learning were achieved by training inc...

07/11/2022 · Towards Crowdsourced Training of Large Neural Networks using Decentralized Mixture-of-Experts
Many recent breakthroughs in deep learning were achieved by training inc...

09/09/2020 · Time-Based Roofline for Deep Learning Performance Analysis
Deep learning applications are usually very compute-intensive and requir...

12/12/2019 · EPIC: An Energy-Efficient, High-Performance GPGPU Computing Research Infrastructure
The pursuit of many research questions requires massive computational re...

03/08/2021 · SCNN: Swarm Characteristic Neural Network
Deep learning is a powerful approach with good performance on many diffe...

07/07/2022 · Training Transformers Together
The infrastructure necessary for training state-of-the-art models is bec...

Code Repositories

hivemind

Decentralized deep learning in PyTorch. Built to train models on thousands of volunteers across the world.
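
For a sense of how this looks in practice, here is a hedged sketch modeled on hivemind's documented quickstart: a standard PyTorch optimizer is wrapped so that local steps count toward a collective target batch size, with peers discovered through a DHT. Parameter names follow the library's docs but may differ across versions, and the `initial_peers` multiaddress is a placeholder.

```python
import torch
import torch.nn as nn
import hivemind

model = nn.Linear(8, 1)
base_opt = torch.optim.SGD(model.parameters(), lr=0.1)

# Connect to the collaboration's distributed hash table (DHT);
# the peer address below is a placeholder, not a real run.
dht = hivemind.DHT(
    initial_peers=["/ip4/203.0.113.1/tcp/1337/p2p/XXXX"],
    start=True,
)

opt = hivemind.Optimizer(
    dht=dht,
    run_id="demo_run",        # unique name shared by all peers in this run
    optimizer=base_opt,       # local optimizer to wrap
    batch_size_per_step=32,   # samples contributed by each local step
    target_batch_size=4096,   # collective samples before a global update
)

for _ in range(10):
    x, y = torch.randn(32, 8), torch.randn(32, 1)
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()   # steps locally; triggers averaging once the target is reached
    opt.zero_grad()
```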

DeDLOC

Official code for "Distributed Deep Learning in Open Collaborations"