Distributed Zeroth-Order Stochastic Optimization in Time-varying Networks

05/26/2021
by Wenjie Li et al.

We consider a distributed convex optimization problem over a network that is time-varying and not always strongly connected. The local cost function of each node is affected by a stochastic process, and all nodes of the network collaborate to minimize the average of their local cost functions. The major challenge of our work is that gradients of the cost functions are assumed to be unavailable and must be estimated solely from numerical observations of the cost functions. Such a problem is known as zeroth-order stochastic convex optimization (ZOSCO). In this paper we take a first step towards the distributed optimization problem in the ZOSCO setting. The proposed algorithm performs two basic steps at each iteration: i) each node updates its local variable using a random-perturbation-based single-point gradient estimator of its own local cost function; ii) each node exchanges its local variable with its direct neighbors and then performs a weighted average. When the cost functions are smooth and strongly convex, the attainable optimization error is O(T^(-1/2)) after T iterations. This result is interesting because O(T^(-1/2)) is the optimal convergence rate for the ZOSCO problem. We also investigate the optimization error for general Lipschitz convex functions; in that case the result is O(T^(-1/4)).
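
To make the two-step iteration concrete, here is a minimal sketch of one round of such a scheme. This is an illustration under assumptions made here, not the paper's exact algorithm: the function names (single_point_grad_estimate, distributed_zo_step), the step-size schedules eta and delta, the mixing matrix W, and the toy quadratic costs are all hypothetical choices introduced for this example.

```python
import numpy as np

def single_point_grad_estimate(f_noisy, x, delta, rng):
    """Single-point zeroth-order gradient estimator (illustrative).

    One noisy function evaluation at a randomly perturbed point:
        g = (d / delta) * f(x + delta * u) * u,
    with u drawn uniformly from the unit sphere; g estimates the
    gradient of a smoothed version of f.
    """
    d = x.shape[0]
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)  # uniform random direction on the unit sphere
    return (d / delta) * f_noisy(x + delta * u) * u

def distributed_zo_step(xs, fs, W, eta, delta, rng):
    """One iteration of the two-step scheme described in the abstract.

    Step i):  each node takes a descent step along its single-point
              zeroth-order gradient estimate of its own noisy cost.
    Step ii): nodes exchange variables with direct neighbors and form
              a weighted average via the mixing matrix W (which may
              change over time as the network topology changes).
    """
    n = len(xs)
    half = [xs[i] - eta * single_point_grad_estimate(fs[i], xs[i], delta, rng)
            for i in range(n)]
    return [sum(W[i, j] * half[j] for j in range(n)) for i in range(n)]

# Toy usage: three nodes with noisy quadratic local costs on a line
# graph 1-2-3, using Metropolis weights (doubly stochastic).
rng = np.random.default_rng(0)
targets = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
fs = [lambda x, a=a: np.sum((x - a) ** 2) + 0.01 * rng.standard_normal()
      for a in targets]
W = np.array([[2/3, 1/3, 0.0],
              [1/3, 1/3, 1/3],
              [0.0, 1/3, 2/3]])
xs = [np.zeros(2) for _ in range(3)]
for t in range(1, 2001):
    xs = distributed_zo_step(xs, fs, W, eta=0.5 / t, delta=t ** -0.25, rng=rng)
```

The decaying step size eta and shrinking smoothing radius delta are the usual knobs in single-point zeroth-order analyses; the particular schedules above are placeholders for whatever rates the paper prescribes.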
