Communication-Efficient Decentralized Online Continuous DR-Submodular Maximization

08/18/2022
by   Qixin Zhang, et al.

Maximizing a monotone submodular function is a fundamental task in machine learning, economics, and statistics. In this paper, we present two communication-efficient decentralized online algorithms for the monotone continuous DR-submodular maximization problem, both of which reduce the number of per-function gradient evaluations and the per-round communication complexity from T^{3/2} to 1. The first, One-shot Decentralized Meta-Frank-Wolfe (Mono-DMFW), achieves a (1-1/e)-regret bound of O(T^{4/5}). As far as we know, this is the first one-shot and projection-free decentralized online algorithm for monotone continuous DR-submodular maximization. Next, inspired by the non-oblivious boosting function technique, we propose the Decentralized Online Boosting Gradient Ascent (DOBGA) algorithm, which attains a (1-1/e)-regret of O(√T). To the best of our knowledge, this is the first result to achieve the optimal O(√T) regret against a (1-1/e)-approximation with only one gradient query per local objective function per step. Finally, various experimental results confirm the effectiveness of the proposed methods.
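The abstract only names the ingredients of DOBGA (one boosted gradient query per node per round, plus neighbor communication), so the following is a minimal sketch of how such a round could look, not the paper's exact algorithm. The mixing matrix W, the projection helper, and the specific boosting surrogate F(x) = ∫_0^1 (e^{z-1}/z) f(zx) dz with its one-sample gradient estimator are assumptions drawn from the standard non-oblivious boosting construction for monotone DR-submodular functions.

```python
# Hedged sketch of one DOBGA-style round; details beyond the abstract
# (W, project, the boosting distribution) are assumptions, not the paper's.
import numpy as np

def sample_boosting_weight(rng):
    """Draw z in (0, 1] with density p(z) = e^(z-1) / (1 - 1/e).

    With this z, (1 - 1/e) * grad_f(z * x) is an unbiased estimate of the
    gradient of the non-oblivious surrogate F(x) = int_0^1 e^(z-1)/z f(zx) dz
    (the boosting construction the abstract refers to).
    """
    u = rng.uniform()
    return 1.0 + np.log(u * (1.0 - np.exp(-1.0)) + np.exp(-1.0))

def dobga_round(X, grad_fs, W, eta, project, rng):
    """One round at all n nodes.

    X       : (n, d) array, row i is node i's current decision x_i
    grad_fs : list of n gradient oracles; each is queried exactly once
    W       : (n, n) doubly stochastic mixing matrix of the network
    eta     : step size (e.g. on the order of 1/sqrt(T))
    project : projection of a point onto the convex constraint set K
    """
    n, d = X.shape
    G = np.zeros((n, d))
    for i in range(n):
        z = sample_boosting_weight(rng)
        # single boosted gradient query per node per round
        G[i] = (1.0 - np.exp(-1.0)) * grad_fs[i](z * X[i])
    X_mixed = W @ X            # one round of neighbor communication
    X_new = X_mixed + eta * G  # gradient ascent step on the surrogate
    return np.apply_along_axis(project, 1, X_new)
```

Under this reading, each node communicates once and queries its local gradient once per round, matching the O(1) per-round cost claimed above; a step size of order 1/√T is the usual choice in the O(√T)-regret regime the abstract describes.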
