Distributed Deep Learning in Open Collaborations

06/18/2021
by Michael Diskin et al.

Modern deep learning applications require increasingly more compute to train state-of-the-art models. To address this demand, large corporations and institutions use dedicated High-Performance Computing clusters, whose construction and maintenance are both environmentally costly and well beyond the budget of most organizations. As a result, some research directions become the exclusive domain of a few large industrial and even fewer academic actors. To alleviate this disparity, smaller groups may pool their computational resources and run collaborative experiments that benefit all participants. This paradigm, known as grid- or volunteer computing, has seen successful applications in numerous scientific areas. However, using this approach for machine learning is difficult due to high latency, asymmetric bandwidth, and several challenges unique to volunteer computing. In this work, we carefully analyze these constraints and propose a novel algorithmic framework designed specifically for collaborative training. We demonstrate the effectiveness of our approach for SwAV and ALBERT pretraining in realistic conditions and achieve performance comparable to traditional setups at a fraction of the cost. Finally, we provide a detailed report of successful collaborative language model pretraining with 40 participants.
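The abstract describes pooling heterogeneous, unreliable peers into one large-batch training run. As a rough illustration (not the paper's actual algorithm or the hivemind API), one building block of such collaborative training is averaging peer gradients weighted by how many samples each peer contributed, so fast and slow devices can participate in the same global step. The function and peer data below are hypothetical:

```python
# Hedged sketch of weighted gradient averaging for collaborative training.
# Peers with different hardware contribute gradients computed on local
# batches of different sizes; the global gradient weights each peer's
# contribution by its share of the total samples. Illustrative only.

def average_gradients(contributions):
    """contributions: list of (num_samples, gradient_vector) pairs."""
    total = sum(n for n, _ in contributions)
    dim = len(contributions[0][1])
    avg = [0.0] * dim
    for n, grad in contributions:
        weight = n / total  # peer's share of the global batch
        for i, g in enumerate(grad):
            avg[i] += weight * g
    return avg

# Example: three peers with unequal batch sizes (e.g. different GPUs).
peers = [
    (32, [1.0, 2.0]),  # fast peer: 32 samples
    (8,  [3.0, 6.0]),  # slow peer: 8 samples
    (24, [2.0, 4.0]),  # medium peer: 24 samples
]
avg = average_gradients(peers)  # -> [1.625, 3.25]
```

In a real volunteer-computing setting this averaging step must additionally tolerate peers joining, leaving, or failing mid-round, which is part of what the proposed framework addresses.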

Related research:

- 04/15/2021: How to Train BERT with an Academic Budget
- 02/10/2020: Learning@home: Crowdsourced Training of Large Neural Networks using Decentralized Mixture-of-Experts
- 07/11/2022: Towards Crowdsourced Training of Large Neural Networks using Decentralized Mixture-of-Experts
- 09/09/2020: Time-Based Roofline for Deep Learning Performance Analysis
- 12/12/2019: EPIC: An Energy-Efficient, High-Performance GPGPU Computing Research Infrastructure
- 03/25/2023: Active Finetuning: Exploiting Annotation Budget in the Pretraining-Finetuning Paradigm
- 09/04/2023: NLLB-CLIP – train performant multilingual image retrieval model on a budget