Distributed Online Optimization with Long-Term Constraints

12/20/2019 · by Deming Yuan, et al.

We consider distributed online convex optimization problems, where the distributed system consists of various computing units connected through a time-varying communication graph. In each time step, each computing unit selects a constrained vector, experiences a loss equal to an arbitrary convex function evaluated at this vector, and may communicate with its neighbors in the graph. The objective is to minimize the system-wide loss accumulated over time. We propose a decentralized algorithm with regret and cumulative constraint violation in O(T^{max{c, 1-c}}) and O(T^{1-c/2}), respectively, for any c ∈ (0,1), where T is the time horizon. When the loss functions are strongly convex, we establish improved regret and constraint-violation upper bounds of O(log T) and O(√(T log T)). These regret scalings match those obtained by state-of-the-art algorithms and fundamental limits in the corresponding centralized online optimization problem (for both convex and strongly convex loss functions). In the case of bandit feedback, the proposed algorithms achieve regret and constraint violation in O(T^{max{c, 1-c/3}}) and O(T^{1-c/2}) for any c ∈ (0,1). We numerically illustrate the performance of our algorithms for the particular case of distributed online regularized linear regression problems.
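To make the setting concrete, the following is a minimal sketch of a decentralized online primal-dual scheme on the abstract's example problem (online regularized linear regression). It is not the paper's algorithm: the graph is fixed rather than time-varying, the step sizes, the mixing matrix, and the long-term constraint g(x) = ||x||_1 - r ≤ 0 are all assumptions made for illustration. Each agent mixes its iterate with its neighbors', takes a gradient step on its local loss plus a dual-weighted constraint subgradient, and performs dual ascent on the observed violation.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, T = 4, 3, 200          # agents, dimension, time horizon
eta, mu = 0.05, 0.05         # primal and dual step sizes (assumed)
r = 1.0                      # budget in the assumed constraint ||x||_1 - r <= 0

# Doubly stochastic mixing matrix for a ring graph (fixed here for
# simplicity; the paper allows time-varying communication graphs).
W = np.zeros((N, N))
for i in range(N):
    W[i, i] = 0.5
    W[i, (i + 1) % N] = 0.25
    W[i, (i - 1) % N] = 0.25

x = np.zeros((N, d))         # each agent's decision vector
lam = np.zeros(N)            # each agent's dual variable for the constraint
total_loss = 0.0
total_violation = 0.0

for t in range(T):
    # Synthetic regularized linear-regression data (hypothetical setup).
    A = rng.normal(size=(N, d))
    b = A @ np.ones(d) + 0.1 * rng.normal(size=N)

    # Consensus step: each agent averages its neighbors' iterates.
    x = W @ x

    for i in range(N):
        loss = (A[i] @ x[i] - b[i]) ** 2 + 0.01 * x[i] @ x[i]
        g = np.abs(x[i]).sum() - r          # long-term constraint value
        total_loss += loss
        total_violation += max(g, 0.0)

        # Primal step: gradient of the local loss plus the dual-weighted
        # subgradient of the constraint; then dual ascent on the violation.
        grad = 2 * (A[i] @ x[i] - b[i]) * A[i] + 0.02 * x[i]
        x[i] -= eta * (grad + lam[i] * np.sign(x[i]))
        lam[i] = max(0.0, lam[i] + mu * g)
```

Tracking `total_loss` against the best fixed decision in hindsight would give the empirical regret, and `total_violation` the cumulative constraint violation that the paper's bounds control.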





