Practical Precoding via Asynchronous Stochastic Successive Convex Approximation

10/03/2020
by Basil M. Idrees, et al.

We consider stochastic optimization of a smooth non-convex loss function with a convex non-smooth regularizer. In the online setting, where a single sample of the stochastic gradient of the loss is available at every iteration, the problem can be solved using the proximal stochastic gradient descent (SGD) algorithm and its variants. However, in many problems, especially those arising in communications and signal processing, information beyond the stochastic gradient may be available thanks to the structure of the loss function. Such extra-gradient information is not used by SGD, but has been shown to be useful, for instance, in the context of stochastic expectation-maximization, stochastic majorization-minimization, and stochastic successive convex approximation (SCA) approaches. By constructing a strongly convex stochastic surrogate of the loss function at every iteration, stochastic SCA algorithms can exploit the structural properties of the loss function and achieve superior empirical performance compared to SGD. In this work, we take a closer look at the stochastic SCA algorithm and develop an asynchronous variant that can be used for resource allocation in wireless networks. While the stochastic SCA algorithm is known to converge asymptotically, its iteration complexity has not been well studied and is the focus of the current work. The insights obtained from the non-asymptotic analysis allow us to develop a more practical asynchronous variant of the stochastic SCA algorithm that allows the use of surrogates calculated in earlier iterations. We characterize a precise bound on the maximum delay the algorithm can tolerate while still achieving the same convergence rate. Finally, we apply the algorithm to the problem of linear precoding in wireless sensor networks, where it can be implemented at low complexity and is shown to perform well in practice.
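To make the surrogate-based update concrete, below is a minimal sketch (not the authors' implementation) of a stochastic SCA iteration that uses a quadratic strongly convex surrogate, a recursive (gradient-tracking) average of past stochastic gradients, and an l1 regularizer handled through its proximal soft-thresholding operator. The loss model passed in via sample_fn and grad_fn, the schedules rho_t and gamma_t, the surrogate parameter tau, and the way staleness is modeled through a delay buffer are all illustrative assumptions.

```python
# Minimal sketch of a (possibly delayed) stochastic SCA iteration for
# minimizing E[f(x; xi)] + lam * ||x||_1. All parameter choices are illustrative.
import numpy as np

def soft_threshold(z, thresh):
    """Proximal operator of thresh * ||.||_1 (closed form for the l1 regularizer)."""
    return np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)

def stochastic_sca(sample_fn, grad_fn, dim, lam=0.1, tau=1.0, max_iter=500, delay=0, seed=0):
    """Stochastic SCA with a quadratic strongly convex surrogate.

    sample_fn(rng) -> xi         draws one stochastic sample per iteration
    grad_fn(x, xi) -> ndarray    stochastic gradient of the smooth loss at x
    delay                        solve the subproblem with surrogate information
                                 that is up to `delay` iterations stale
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(dim)
    d = np.zeros(dim)               # tracked (averaged) stochastic gradient
    history = [d.copy()]            # buffer of past surrogate gradients

    for t in range(1, max_iter + 1):
        rho = 2.0 / (t + 1)         # averaging weight for the surrogate
        gamma = 2.0 / (t + 2)       # step size for the convex-combination update

        xi = sample_fn(rng)
        d = (1.0 - rho) * d + rho * grad_fn(x, xi)
        history.append(d.copy())

        # Asynchronous variant: reuse a surrogate computed `delay` iterations
        # earlier; delay = 0 recovers the synchronous algorithm.
        d_used = history[max(0, len(history) - 1 - delay)]

        # Minimize <d_used, u> + (tau/2)||u - x||^2 + lam * ||u||_1,
        # whose closed-form solution is a proximal (soft-thresholding) step.
        x_hat = soft_threshold(x - d_used / tau, lam / tau)

        x = (1.0 - gamma) * x + gamma * x_hat
    return x
```

Setting delay=0 recovers the synchronous update, while a positive delay mimics the asynchronous variant in which the strongly convex subproblem is solved with surrogate information computed a few iterations earlier.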


Related research

05/24/2018 · Taming Convergence for Asynchronous Stochastic Gradient Descent with Unbounded Delay in Non-Convex Learning
Understanding the convergence performance of asynchronous stochastic gra...

06/10/2013 · Non-strongly-convex smooth stochastic approximation with convergence rate O(1/n)
We consider the stochastic approximation problem where a convex function...

10/21/2019 · Sparsification as a Remedy for Staleness in Distributed Asynchronous SGD
Large scale machine learning is increasingly relying on distributed opti...

11/04/2020 · Gradient-Based Empirical Risk Minimization using Local Polynomial Regression
In this paper, we consider the problem of empirical risk minimization (E...

02/26/2020 · Non-Asymptotic Bounds for Zeroth-Order Stochastic Optimization
We consider the problem of optimizing an objective function with and wit...

10/11/2020 · Three-Dimensional Swarming Using Cyclic Stochastic Optimization
In this paper we simulate an ensemble of cooperating, mobile sensing age...

02/17/2022 · Training neural networks using monotone variational inequality
Despite the vast empirical success of neural networks, theoretical under...
