Distributed Stochastic Optimization in Networks with Low Informational Exchange

07/30/2018
by Wenjie Li, et al.

We consider a distributed stochastic optimization problem in networks with a finite number of nodes. Each node adjusts its action to optimize the global utility of the network, which is defined as the sum of the local utilities of all nodes. The gradient descent method is a common technique for solving such optimization problems, but computing the gradient may require substantial information exchange. In this paper, we assume that each node can obtain only a noisy numerical observation of its local utility, whose closed-form expression is not available. This assumption is quite realistic, especially when the system is too complicated or is constantly changing. Nodes may exchange the observations of their local utilities to estimate the global utility at each timeslot. We propose stochastic-perturbation-based distributed algorithms under two assumptions: that each node has collected the local utilities of all other nodes, or of only a subset of them. We use tools from stochastic approximation to prove that both algorithms converge to the optimum, and we derive their convergence rates. Although the proposed algorithms can be applied to general optimization problems, we perform simulations on power control in wireless networks and present numerical results that corroborate our claims.
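To make the idea concrete, below is a minimal sketch of a stochastic-perturbation update in the style described by the abstract: each node perturbs its action, observes only a noisy numerical value of its local utility, and the summed observations drive a gradient estimate. The quadratic utilities, the noise level, and the step-size and perturbation schedules are illustrative assumptions for this sketch, not the paper's exact algorithm or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5                          # number of nodes in the network
targets = rng.normal(size=N)   # hypothetical optima of the local utilities

def noisy_local_utility(i, actions):
    """Noisy numerical observation of node i's local utility.

    The algorithm never uses a closed form; this quadratic is only a stand-in
    so the sketch can be run and checked.
    """
    return -(actions[i] - targets[i]) ** 2 + 0.01 * rng.normal()

actions = np.zeros(N)
for t in range(1, 5001):
    a_t = 0.5 / t              # diminishing step size (assumed schedule)
    c_t = 0.5 / t ** 0.25      # perturbation magnitude (assumed schedule)
    delta = rng.choice([-1.0, 1.0], size=N)  # one random perturbation per node

    # Each node perturbs its action and reports its noisy local utility;
    # summing the exchanged observations gives a global-utility estimate.
    u_plus = sum(noisy_local_utility(i, actions + c_t * delta) for i in range(N))
    u_minus = sum(noisy_local_utility(i, actions - c_t * delta) for i in range(N))

    # Two-sided stochastic-perturbation gradient estimate; since each delta
    # entry is +/-1, multiplying by delta is the same as dividing by it.
    grad_est = (u_plus - u_minus) / (2.0 * c_t) * delta

    actions += a_t * grad_est  # gradient ascent on the estimated global utility

print("final actions:", np.round(actions, 3))
print("targets:      ", np.round(targets, 3))
```

With the diminishing schedules above, the actions drift toward the targets despite each node seeing only noisy scalar utility values, which is the behavior the paper's convergence analysis formalizes.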
