Parallel Stochastic Optimization Framework for Large-Scale Non-Convex Stochastic Problems

10/22/2019
by Naeimeh Omidvar, et al.

In this paper, we consider the problem of stochastic optimization, where the objective function is the expectation of a (possibly non-convex) cost function parameterized by a random variable. Although convergence speed is critical for many emerging applications, most existing stochastic optimization methods suffer from slow convergence. Furthermore, the rise of parallel computing has created a growing demand for stochastic optimization schemes that support parallel updates and can be implemented in distributed systems. We propose a fast parallel stochastic optimization framework that can solve a large class of possibly non-convex stochastic optimization problems arising in multi-agent applications. In the proposed method, the agents update their control variables in parallel, with each agent independently solving a convex quadratic subproblem. We establish convergence of the proposed method to the optimal solution for convex problems and to a stationary point for general non-convex problems. The proposed algorithm applies to a large class of optimization problems from various fields, such as machine learning and wireless networks. As a representative application, we focus on large-scale support vector machines and demonstrate how our algorithm can efficiently solve this problem, especially for modern applications with huge datasets. Using popular real-world datasets, we present experimental results comparing the performance of our framework to the state of the art. The numerical results show that the proposed method can significantly outperform state-of-the-art methods in terms of convergence speed while having the same or lower complexity and storage requirements.
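The abstract does not give the update rules, but the kind of per-agent step it describes can be sketched as follows. The code below is a minimal, hedged illustration, assuming a proximal quadratic surrogate built from a recursively averaged stochastic gradient, a diminishing step size, and a block partition of the variable across agents; the function names (parallel_stochastic_opt, solve_block), the squared-hinge SVM objective used as a demo, and all parameter choices are illustrative assumptions rather than the authors' exact formulation.

# Sketch of a parallel stochastic quadratic-surrogate update, in the spirit of
# the framework described in the abstract. Illustrative assumptions (not from
# the paper): the variable is split into blocks, one per "agent"; each block
# minimizes a proximal quadratic surrogate of the sampled objective (which has
# a closed-form solution); the demo objective is a squared-hinge-loss SVM.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def stoch_grad(w, X, y, lam, batch):
    """Stochastic gradient of the L2-regularized squared-hinge SVM loss on a mini-batch."""
    Xb, yb = X[batch], y[batch]
    margin = 1.0 - yb * (Xb @ w)
    active = margin > 0
    g = -2.0 * (Xb[active].T @ (yb[active] * margin[active])) / len(batch)
    return g + lam * w

def solve_block(args):
    """Closed-form minimizer of one agent's quadratic surrogate:
       min_x  g_i^T (x - w_i) + (tau/2) ||x - w_i||^2   =>   x = w_i - g_i / tau."""
    w_i, g_i, tau = args
    return w_i - g_i / tau

def parallel_stochastic_opt(X, y, n_blocks=4, lam=1e-3, tau=1.0,
                            rho=0.7, batch_size=64, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    g_bar = np.zeros(d)                       # recursively averaged stochastic gradient
    blocks = np.array_split(np.arange(d), n_blocks)
    # Threads only illustrate the parallel per-agent structure of the updates.
    with ThreadPoolExecutor(max_workers=n_blocks) as pool:
        for t in range(1, iters + 1):
            batch = rng.choice(n, size=batch_size, replace=False)
            g_bar = (1 - rho) * g_bar + rho * stoch_grad(w, X, y, lam, batch)
            # Each agent solves its convex quadratic subproblem independently.
            jobs = [(w[b], g_bar[b], tau) for b in blocks]
            w_hat = np.empty_like(w)
            for b, x_hat in zip(blocks, pool.map(solve_block, jobs)):
                w_hat[b] = x_hat
            gamma = 2.0 / (t + 2)             # diminishing step size
            w = (1 - gamma) * w + gamma * w_hat
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 20))
    y = np.sign(X @ rng.normal(size=20) + 0.1 * rng.normal(size=500))
    w = parallel_stochastic_opt(X, y)
    print(f"training accuracy: {np.mean(np.sign(X @ w) == y):.3f}")

Because each surrogate here is a proximal quadratic, every agent's subproblem has a closed-form solution, which keeps the per-iteration cost low; in the general framework the surrogate could be any strongly convex quadratic that each agent solves independently.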
