Online Continuous Submodular Maximization

02/16/2018
by Lin Chen, et al.

In this paper, we consider an online optimization process where the objective functions are neither convex nor concave but instead belong to a broad class of continuous submodular functions. We first propose a variant of the Frank-Wolfe algorithm that has access to the full gradient of the objective functions. We show that it achieves a regret bound of O(√(T)) (where T is the horizon of the online optimization problem) against a (1-1/e)-approximation to the best feasible solution in hindsight. However, in many scenarios only an unbiased estimate of the gradients is available. For such settings, we then propose an online stochastic gradient ascent algorithm that also achieves a regret bound of O(√(T)), albeit against a weaker 1/2-approximation to the best feasible solution in hindsight. We also generalize our results to γ-weakly submodular functions and prove the same sublinear regret bounds. Finally, we demonstrate the efficiency of our algorithms on several problem instances, including non-convex/non-concave quadratic programs, multilinear extensions of submodular set functions, and D-optimal design.
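As a concrete illustration of the stochastic setting described in the abstract, the following is a minimal sketch of online (stochastic) gradient ascent: at each round the learner commits to a point, receives an unbiased gradient estimate of that round's objective, takes an ascent step, and projects back onto the feasible set. It assumes a box-shaped feasible region and uses illustrative names (project_onto_box, grad_oracles, and toy DR-submodular quadratics); it is not the paper's exact algorithm, only the basic play/observe/ascend/project loop with an O(1/√T) step size.

```python
import numpy as np

def project_onto_box(x, lower=0.0, upper=1.0):
    """Euclidean projection onto the box [lower, upper]^d, used here as a
    simple stand-in for a general convex feasible set (illustrative choice)."""
    return np.clip(x, lower, upper)

def online_gradient_ascent(grad_oracles, dim, step_size=None):
    """Sketch of online (stochastic) gradient ascent over T rounds.

    grad_oracles: list of T callables; grad_oracles[t](x) returns a
        (possibly noisy but unbiased) estimate of the gradient of f_t at x.
    Returns the sequence of points x_1, ..., x_T played by the learner.
    """
    T = len(grad_oracles)
    if step_size is None:
        step_size = 1.0 / np.sqrt(T)   # O(1/sqrt(T)) step size
    x = np.zeros(dim)                  # any feasible starting point
    played = []
    for t in range(T):
        played.append(x.copy())        # commit to x_t before f_t is revealed
        g = grad_oracles[t](x)         # unbiased gradient estimate of f_t at x_t
        x = project_onto_box(x + step_size * g)  # ascent step, then project
    return played

# Toy usage with DR-submodular quadratics f_t(x) = h_t^T x + 0.5 x^T H_t x,
# where H_t has non-positive entries, so f_t is DR-submodular on the box.
rng = np.random.default_rng(0)
dim, T = 5, 100
oracles = []
for _ in range(T):
    H = -rng.random((dim, dim)); H = 0.5 * (H + H.T)
    h = rng.random(dim)
    # Add small noise so the oracle returns a stochastic gradient estimate.
    oracles.append(lambda x, H=H, h=h: h + H @ x + 0.01 * rng.standard_normal(dim))
trajectory = online_gradient_ascent(oracles, dim)
```

The projection step keeps every iterate feasible, and the diminishing step size is what yields the sublinear regret guarantee quoted above; swapping the box projection for a projection onto any other convex constraint set leaves the loop unchanged.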


Related research

02/22/2018 · Projection-Free Online Optimization with Stochastic Gradient: From Convexity to Submodularity
Online optimization has been a successful framework for solving large-sc...

09/08/2023 · Online Submodular Maximization via Online Convex Optimization
We study monotone submodular maximization under general matroid constrai...

06/19/2023 · Online Dynamic Submodular Optimization
We propose new algorithms with provable performance for online binary op...

09/12/2021 · Concave Utility Reinforcement Learning with Zero-Constraint Violations
We consider the problem of tabular infinite horizon concave utility rein...

01/03/2022 · Continuous Submodular Maximization: Boosting via Non-oblivious Function
In this paper, we revisit the constrained and stochastic continuous subm...

12/18/2012 · Variational Optimization
We discuss a general technique that can be used to form a differentiable...

11/05/2017 · Stochastic Submodular Maximization: The Case of Coverage Functions
Stochastic optimization of continuous objectives is at the heart of mode...
