Online Continuous Submodular Maximization

02/16/2018
by Lin Chen, et al.

In this paper, we consider an online optimization process where the objective functions are neither convex nor concave but instead belong to a broad class of continuous submodular functions. We first propose a variant of the Frank-Wolfe algorithm that has access to the full gradient of the objective functions. We show that it achieves a regret bound of O(√T) (where T is the horizon of the online optimization problem) against a (1-1/e)-approximation to the best feasible solution in hindsight. In many scenarios, however, only an unbiased estimate of the gradients is available. For such settings, we then propose an online stochastic gradient ascent algorithm that also achieves an O(√T) regret bound, albeit against a weaker 1/2-approximation to the best feasible solution in hindsight. We also generalize our results to γ-weakly submodular functions and prove the same sublinear regret bounds. Finally, we demonstrate the efficiency of our algorithms on a few problem instances, including non-convex/non-concave quadratic programs, multilinear extensions of submodular set functions, and D-optimal design.
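For intuition, the following is a minimal Python sketch of the online stochastic gradient ascent template referred to above: at each round the learner commits to a point, takes an ascent step along a (possibly stochastic) gradient estimate with an O(1/√t) step size, and projects back onto the constraint set. The box constraint, the project_box helper, the eta0 parameter, and the grad_oracles interface are illustrative assumptions, not the paper's exact construction, which handles general convex constraint sets.

```python
import numpy as np

def project_box(x, lo=0.0, hi=1.0):
    # Euclidean projection onto the box [lo, hi]^n; an illustrative
    # stand-in for projection onto a general convex constraint set K.
    return np.clip(x, lo, hi)

def online_gradient_ascent(grad_oracles, dim, eta0=0.1):
    """Sketch of online (stochastic) gradient ascent.

    grad_oracles: one callable per round t; each takes the current point
    and returns an (unbiased estimate of the) gradient of f_t there.
    Returns the list of iterates played over the horizon.
    """
    x = np.zeros(dim)                         # a feasible starting point
    played = []
    for t, grad in enumerate(grad_oracles, start=1):
        played.append(x.copy())               # commit to x_t before observing f_t
        eta = eta0 / np.sqrt(t)               # O(1/sqrt(t)) step size
        x = project_box(x + eta * grad(x))    # ascent step, then project onto K
    return played

# Example usage with random linear reward gradients (hypothetical data):
rng = np.random.default_rng(0)
oracles = [(lambda x, a=rng.random(5): a) for _ in range(100)]
iterates = online_gradient_ascent(oracles, dim=5)
```

The diminishing step size is what drives the O(√T) regret rate quoted above; the Frank-Wolfe variant in the paper instead replaces the projection with linear optimization steps over the constraint set.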
