SCOPE: Scalable Composite Optimization for Learning on Spark

01/30/2016
by Shen-Yi Zhao, et al.

Many machine learning models, such as logistic regression (LR) and support vector machines (SVM), can be formulated as composite optimization problems. Recently, many distributed stochastic optimization (DSO) methods have been proposed to solve large-scale composite optimization problems, and they have shown better performance than traditional batch methods. However, most of these DSO methods are not scalable enough. In this paper, we propose a novel DSO method, called scalable composite optimization for learning (SCOPE), and implement it on the fault-tolerant distributed platform Spark. SCOPE is both computation-efficient and communication-efficient. Theoretical analysis shows that SCOPE converges at a linear rate when the objective function is convex. Furthermore, empirical results on real datasets show that SCOPE outperforms other state-of-the-art distributed learning methods on Spark, including both batch learning methods and DSO methods.
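To make the broadcast/local-update/aggregate pattern described in the abstract concrete, below is a minimal PySpark sketch for an L2-regularized logistic-regression objective. It is an illustrative assumption, not the paper's exact SCOPE update rule: the step size eta, local iteration count m, regularization lam, and the plain averaging of local iterates are all placeholder choices. The driver broadcasts the current parameter vector, each partition runs local stochastic gradient steps on its shard, and the driver averages the returned vectors.

# Sketch of a SCOPE-style round on Spark (hypothetical parameters, not the
# authors' exact algorithm): broadcast w, do local stochastic updates per
# partition, average the local iterates on the driver.
import numpy as np
from pyspark import SparkContext

def local_updates(partition, w0, eta, lam, m, seed=0):
    data = list(partition)                 # [(x, y), ...] with y in {-1, +1}
    if not data:
        return iter([])
    rng = np.random.default_rng(seed)
    w = w0.copy()
    for _ in range(m):                     # local stochastic gradient steps
        x, y = data[rng.integers(len(data))]
        # gradient of log(1 + exp(-y * w.x)) + (lam/2) * ||w||^2
        grad = -y * x / (1.0 + np.exp(y * x.dot(w))) + lam * w
        w -= eta * grad
    return iter([w])

sc = SparkContext(appName="scope-style-sketch")
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 10))            # synthetic data for illustration
y = np.sign(X.dot(rng.normal(size=10)))
points = sc.parallelize(list(zip(X, y)), numSlices=4).cache()

w = np.zeros(10)
for epoch in range(20):                    # outer rounds
    w_bc = sc.broadcast(w)
    locals_ = points.mapPartitionsWithIndex(
        lambda i, part: local_updates(part, w_bc.value, 0.1, 1e-3, 50, seed=i)
    ).collect()
    w = np.mean(locals_, axis=0)           # driver averages workers' iterates
sc.stop()

Note the communication pattern this sketch exhibits: each outer round ships one parameter vector to the workers and one vector back per partition, independent of the number of local updates m, which is the kind of communication efficiency the abstract claims for SCOPE.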


Related research

research · 03/15/2018
Proximal SCOPE for Distributed Sparse Learning: Better Data Partition Implies Faster Convergence Rate
Distributed sparse learning with a cluster of multiple machines has attr...

research · 03/17/2022
Learning Distributionally Robust Models at Scale via Composite Optimization
To train machine learning models that are robust to distribution shifts ...

research · 10/22/2019
Parallel Stochastic Optimization Framework for Large-Scale Non-Convex Stochastic Problems
In this paper, we consider the problem of stochastic optimization, where...

research · 02/17/2019
Quantized Frank-Wolfe: Communication-Efficient Distributed Optimization
How can we efficiently mitigate the overhead of gradient communications ...

research · 09/05/2023
PROMISE: Preconditioned Stochastic Optimization Methods by Incorporating Scalable Curvature Estimates
This paper introduces PROMISE (Preconditioned Stochastic Optimization Me...

research · 01/29/2019
Stochastic Conditional Gradient Method for Composite Convex Minimization
In this paper, we propose the first practical algorithm to minimize stoc...

research · 01/13/2022
Consistent Approximations in Composite Optimization
Approximations of optimization problems arise in computational procedure...
