Elastic CoCoA: Scaling In to Improve Convergence

11/06/2018
by Michael Kaufmann, et al.

In this paper we experimentally analyze the convergence behavior of CoCoA and show that the number of workers required to achieve the highest convergence rate at any point in time changes over the course of training. Based on this observation, we build Chicle, an elastic framework that dynamically adjusts the number of workers based on feedback from the training algorithm in order to select the number of workers that yields the highest convergence rate. In our evaluation on 6 datasets, we show that Chicle accelerates time-to-accuracy by a factor of up to 5.96x compared to the best static setting, while being robust enough to find an optimal or near-optimal setting automatically in most cases.
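The abstract describes feedback-driven scale-in only at a high level. The Python sketch below illustrates one plausible shape of such a loop: the worker count is reduced when the measured convergence rate drops below the best rate seen so far. The function `train_epoch`, the candidate worker counts, and the scale-in heuristic are assumptions for illustration, not the authors' Chicle implementation.

```python
# Hypothetical sketch of elastic scale-in driven by convergence-rate feedback.
# This is NOT the Chicle implementation; it only mirrors the idea described
# in the abstract: fewer workers can converge faster later in training.

def measure_convergence_rate(objective_history):
    """Estimate the per-round decrease of the training objective."""
    if len(objective_history) < 2:
        return float("inf")
    return objective_history[-2] - objective_history[-1]


def elastic_training(train_epoch, candidate_workers=(1, 2, 4, 8, 16),
                     num_rounds=100):
    """Run training rounds, scaling in when the convergence rate degrades.

    `train_epoch(num_workers)` is assumed to run one CoCoA-style outer round
    with the given number of workers and return the training objective.
    """
    workers = max(candidate_workers)      # start fully scaled out
    history = []
    best_rate = 0.0
    for _ in range(num_rounds):
        loss = train_epoch(num_workers=workers)
        history.append(loss)
        rate = measure_convergence_rate(history)
        # Heuristic stand-in for Chicle's feedback mechanism: if progress per
        # round has slowed, try the next smaller worker count.
        if rate < best_rate and workers > min(candidate_workers):
            idx = candidate_workers.index(workers)
            workers = candidate_workers[idx - 1]
        best_rate = max(best_rate, rate)
    return history
```

A caller would pass its own distributed training step as `train_epoch`; the key design point this sketch tries to convey is that the worker count is a runtime decision informed by observed progress, not a fixed configuration parameter.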


