Tell Me Something New: a new framework for asynchronous parallel learning

05/19/2018
by Julaiti Alafate, et al.

We present a novel approach to parallel computation in the context of machine learning that we call "Tell Me Something New" (TMSN). The approach uses a set of independent workers that broadcast updates to one another whenever they observe "something new". TMSN requires neither synchronization nor a head node and is highly resilient to failing machines and laggards. We demonstrate the utility of TMSN by applying it to learning boosted trees. We show that our implementation is 10 times faster than XGBoost and LightGBM on the splice-site prediction problem.
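The abstract only sketches the protocol, so here is a minimal, hypothetical Python illustration of the broadcast-when-improved pattern it describes. This is not the authors' implementation (which learns boosted trees over distributed data); the TMSNWorker class, the scalar "score" standing in for model quality, and the in-process queues standing in for network broadcast are all illustrative assumptions.

import queue
import random
import threading

class TMSNWorker(threading.Thread):
    """Illustrative worker: trains locally and broadcasts only when its
    local result beats the best result it has heard about ("something new")."""

    def __init__(self, worker_id, peers, rounds=20):
        super().__init__()
        self.worker_id = worker_id
        self.peers = peers                 # inboxes of all workers, keyed by id
        self.inbox = peers[worker_id]
        self.rounds = rounds
        self.best_known = float("-inf")    # best score heard from any broadcast

    def local_step(self):
        # Placeholder for real local work, e.g. growing a boosted tree on local data.
        return random.random()

    def drain_inbox(self):
        # Non-blocking: apply whatever updates have arrived; never wait on peers.
        while True:
            try:
                _sender, score = self.inbox.get_nowait()
            except queue.Empty:
                return
            self.best_known = max(self.best_known, score)

    def run(self):
        for _ in range(self.rounds):
            self.drain_inbox()
            score = self.local_step()
            if score > self.best_known:            # found "something new"
                self.best_known = score
                for wid, box in self.peers.items():
                    if wid != self.worker_id:
                        box.put((self.worker_id, score))   # broadcast to peers

if __name__ == "__main__":
    n = 4
    inboxes = {i: queue.Queue() for i in range(n)}
    workers = [TMSNWorker(i, inboxes) for i in range(n)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print("best score seen by each worker:",
          [round(w.best_known, 3) for w in workers])

Because each worker drains its inbox opportunistically and never blocks on a barrier or a head node, a slow or failed peer delays no one, which is the resilience property the abstract highlights.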


