Online Submodular Maximization via Online Convex Optimization

09/08/2023
by Tareq Si Salem, et al.

We study monotone submodular maximization under general matroid constraints in the online setting. We prove that online optimization of a large class of submodular functions, namely, weighted threshold potential functions, reduces to online convex optimization (OCO). This is precisely because functions in this class admit a concave relaxation; as a result, OCO policies, coupled with an appropriate rounding scheme, can be used to achieve sublinear regret in the combinatorial setting. We show that our reduction extends to many different versions of the online learning problem, including the dynamic regret, bandit, and optimistic-learning settings.
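
To make the reduction concrete, here is a minimal sketch of the idea (not the paper's implementation): a weighted threshold potential f(S) = Σ_j w_j · min(b_j, Σ_{i∈S} u_{ji}) has the concave relaxation F(x) = Σ_j w_j · min(b_j, ⟨u_j, x⟩) on [0,1]^n, which agrees with f at integral points; so one can run online projected gradient ascent on F over the matroid polytope and round each fractional iterate to a feasible set. The sketch below assumes, purely for illustration, a uniform (cardinality-k) matroid, a toy random stream of threshold potentials, and a naive top-k rounding in place of the paper's rounding scheme; all names and data are illustrative.

import numpy as np

def wtp_value(x, weights, caps, signals):
    # Concave relaxation of a weighted threshold potential:
    # F(x) = sum_j w_j * min(b_j, <u_j, x>), concave on [0,1]^n,
    # equal to f(S) when x is the indicator vector of S.
    return float(np.sum(weights * np.minimum(caps, signals @ x)))

def wtp_supergradient(x, weights, caps, signals):
    # A supergradient of F at x: row u_j contributes w_j * u_j
    # while its threshold b_j has not yet been reached.
    active = (signals @ x < caps).astype(float)
    return (weights * active) @ signals

def project_capped_simplex(y, k, lo=0.0, hi=1.0):
    # Euclidean projection onto {x in [0,1]^n : sum(x) = k},
    # the base polytope of a uniform matroid, via bisection on the
    # dual threshold tau (sum of clip(y - tau) is decreasing in tau).
    tau_lo, tau_hi = y.min() - hi, y.max() - lo
    for _ in range(60):
        tau = 0.5 * (tau_lo + tau_hi)
        if np.clip(y - tau, lo, hi).sum() > k:
            tau_lo = tau
        else:
            tau_hi = tau
    return np.clip(y - 0.5 * (tau_lo + tau_hi), lo, hi)

def round_topk(x, k):
    # Simplistic stand-in for the paper's rounding scheme:
    # keep the k largest fractional coordinates.
    s = np.zeros_like(x)
    s[np.argsort(-x)[:k]] = 1.0
    return s

rng = np.random.default_rng(0)
n, k, T = 20, 5, 200
x = np.full(n, k / n)        # fractional iterate inside the matroid polytope
eta = 1.0 / np.sqrt(T)       # standard OCO step size
total = 0.0
for t in range(T):
    # The adversary reveals a weighted threshold potential f_t.
    w = rng.uniform(0.5, 1.5, size=3)
    b = rng.uniform(1.0, 3.0, size=3)
    U = rng.uniform(0.0, 1.0, size=(3, n))
    S = round_topk(x, k)                  # play a rounded, integral solution
    total += wtp_value(S, w, b, U)
    g = wtp_supergradient(x, w, b, U)     # ascent step on the relaxation
    x = project_capped_simplex(x + eta * g, k)
print(f"average reward over {T} rounds: {total / T:.3f}")

The point of the sketch is the division of labor: the OCO machinery (projected gradient ascent with a 1/√T step size) supplies sublinear regret on the concave relaxation, and the rounding step transfers that guarantee back to the combinatorial problem, up to the approximation loss of the rounding scheme.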


