Distributed Dual Coordinate Ascent with Imbalanced Data on a General Tree Network

08/28/2023
by Myung Cho et al.

In this paper, we investigate how imbalanced data affects the convergence of distributed dual coordinate ascent on a tree network when solving an empirical loss minimization problem in distributed machine learning. To address this issue, we propose delayed generalized distributed dual coordinate ascent, a method that incorporates information about the data imbalance, and we provide a convergence analysis of the proposed algorithm. Numerical experiments confirm that our method improves the convergence speed of distributed dual coordinate ascent on a tree network.
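The paper's delayed generalized variant and its tree-network communication pattern are not detailed in this abstract. As background, the base method it builds on, distributed (CoCoA-style) dual coordinate ascent, can be sketched for ridge regression as below; a flat two-worker star stands in for the general tree, and all function and variable names here are illustrative, not the paper's.

```python
import numpy as np

def local_sdca(Xb, yb, alpha_b, w, lam, n, epochs, rng):
    """One worker's pass of stochastic dual coordinate ascent (squared loss).
    Operates on private copies and returns proposed changes (d_alpha, d_w)."""
    a, wl = alpha_b.copy(), w.copy()
    d_alpha = np.zeros_like(a)
    for _ in range(epochs):
        for i in rng.permutation(len(yb)):
            x = Xb[i]
            # Closed-form coordinate step for the squared loss:
            #   delta = (y_i - x_i.w - alpha_i) / (1 + ||x_i||^2 / (lam * n))
            delta = (yb[i] - x @ wl - a[i]) / (1.0 + x @ x / (lam * n))
            a[i] += delta
            d_alpha[i] += delta
            wl += delta * x / (lam * n)  # maintain w = X^T alpha / (lam * n)
    return d_alpha, wl - w

def distributed_dca(X, y, lam, workers=2, rounds=100, local_epochs=5, seed=0):
    """CoCoA-style outer loop: each worker updates its own dual block,
    then a coordinator averages the proposals (gamma = 1/K, the safe choice)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    parts = np.array_split(np.arange(n), workers)
    alpha, w = np.zeros(n), np.zeros(d)
    for _ in range(rounds):
        proposals = [(idx, *local_sdca(X[idx], y[idx], alpha[idx], w,
                                       lam, n, local_epochs, rng))
                     for idx in parts]
        for idx, d_alpha, d_w in proposals:
            alpha[idx] += d_alpha / workers  # conservative averaging
            w += d_w / workers
    return w
```

With balanced partitions this converges to the ridge solution; the imbalanced-data setting the paper targets corresponds to the blocks `parts` having very different sizes or statistics, which slows the averaging step above.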


Related research

- Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization (09/10/2012)
- Proximal Stochastic Dual Coordinate Ascent (11/12/2012)
- Accelerated Mini-Batch Stochastic Dual Coordinate Ascent (05/12/2013)
- A General Distributed Dual Coordinate Optimization Framework for Regularized Loss Minimization (04/13/2016)
- Optimization for Large-Scale Machine Learning with Distributed Features and Observations (10/31/2016)
- Distributed Convex Optimization With Limited Communications (10/29/2018)
- Decentralized Feature-Distributed Optimization for Generalized Linear Models (10/28/2021)
