Online Learning for Non-monotone Submodular Maximization: From Full Information to Bandit Feedback

08/16/2022
by Qixin Zhang, et al.

In this paper, we revisit the online non-monotone continuous DR-submodular maximization problem over a down-closed convex set, which finds wide real-world applications in machine learning, economics, and operations research. First, we present the Meta-MFW algorithm, which achieves a 1/e-regret of O(√T) at the cost of T^(3/2) stochastic gradient evaluations per round. To the best of our knowledge, Meta-MFW is the first algorithm to obtain a 1/e-regret of O(√T) for the online non-monotone continuous DR-submodular maximization problem over a down-closed convex set. Moreover, in sharp contrast with the existing ODC algorithm, Meta-MFW relies only on a simple online linear oracle and requires no discretization, lifting, or rounding operations. In view of practical restrictions, we then propose the Mono-MFW algorithm, which reduces the per-function stochastic gradient evaluations from T^(3/2) to 1 and achieves a 1/e-regret bound of O(T^(4/5)). Next, we extend Mono-MFW to the bandit setting and propose the Bandit-MFW algorithm, which attains a 1/e-regret bound of O(T^(8/9)). To the best of our knowledge, Mono-MFW and Bandit-MFW are the first sublinear-regret algorithms for the one-shot and bandit settings, respectively, of online non-monotone continuous DR-submodular maximization over a down-closed convex set. Finally, we conduct numerical experiments on both synthetic and real-world datasets to verify the effectiveness of our methods.
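For context, the 1/e-regret above is the standard α-regret notion with α = 1/e: the learner's cumulative reward is compared against a 1/e fraction of the best fixed point in hindsight, matching the classical 1/e offline approximation guarantee for non-monotone DR-submodular maximization over down-closed sets. In the usual notation (the paper's exact statement may differ in minor details), with f_t the reward function revealed at round t, x_t the point played, and 𝒦 the down-closed convex set:

```latex
% 1/e-regret after T rounds (standard alpha-regret with alpha = 1/e)
\mathcal{R}_{1/e}(T) \;=\; \frac{1}{e}\,\max_{x \in \mathcal{K}} \sum_{t=1}^{T} f_t(x) \;-\; \sum_{t=1}^{T} f_t(x_t)
```

A 1/e-regret of O(√T) thus means the average per-round gap to this discounted benchmark vanishes at rate O(1/√T).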
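The MFW algorithms build on Frank-Wolfe updates of the "measured continuous greedy" type, driven by gradient estimates and a linear maximization oracle. Below is a minimal Python sketch of one such pass, assuming hypothetical helpers grad_est (a stochastic gradient estimator) and lin_oracle (linear maximization over the down-closed set); it illustrates the update style underlying these guarantees, not the authors' exact Meta-MFW, which additionally maintains online linear oracles across rounds.

```python
import numpy as np

def measured_fw_pass(grad_est, lin_oracle, dim, K):
    """One measured Frank-Wolfe pass for non-monotone DR-submodular
    maximization over a down-closed convex set (illustrative sketch).

    grad_est(x):   stochastic estimate of the reward gradient at x
    lin_oracle(g): argmax of <g, v> over v in the feasible set
    """
    x = np.zeros(dim)  # the origin is feasible because the set is down-closed
    for _ in range(K):
        g = grad_est(x)               # gradient estimate at the current iterate
        v = lin_oracle(g)             # Frank-Wolfe direction from the linear oracle
        x = x + (v * (1.0 - x)) / K   # measured update: damp v coordinatewise by (1 - x)
    return x

# Toy usage on the down-closed box [0, 1]^3 with the DR-submodular
# (here concave, non-monotone) reward f(x) = <c, x> - ||x||^2 / 2.
rng = np.random.default_rng(0)
c = rng.uniform(size=3)
x = measured_fw_pass(grad_est=lambda x: c - x,                    # exact gradient of the toy reward
                     lin_oracle=lambda g: (g > 0).astype(float),  # box oracle: 1 where gradient > 0
                     dim=3, K=50)
print(x)  # approximately c, the maximizer of the toy reward
```

The coordinatewise (1 - x) damping is what handles non-monotonicity: it keeps the iterate away from the upper boundary of the cube so the 1/e-type guarantee survives directions along which the reward decreases. Roughly speaking, Meta-MFW runs on the order of T^(3/2) such inner steps per round, one per online linear oracle, which is where its per-round gradient-evaluation cost comes from; Mono-MFW and Bandit-MFW trade this cost for the weaker O(T^(4/5)) and O(T^(8/9)) regret bounds.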


Related research

- Online Continuous Submodular Maximization: From Full-Information to Bandit Feedback (10/28/2019). In this paper, we propose three online algorithms for submodular maximis...
- Communication-Efficient Decentralized Online Continuous DR-Submodular Maximization (08/18/2022). Maximizing a monotone submodular function is a fundamental task in machi...
- Continuous Submodular Maximization: Boosting via Non-oblivious Function (01/03/2022). In this paper, we revisit the constrained and stochastic continuous subm...
- Projection-Free Online Optimization with Stochastic Gradient: From Convexity to Submodularity (02/22/2018). Online optimization has been a successful framework for solving large-sc...
- A Unified Approach for Maximizing Continuous DR-submodular Functions (05/26/2023). This paper presents a unified approach for maximizing continuous DR-subm...
- Continuous Non-monotone DR-submodular Maximization with Down-closed Convex Constraint (07/13/2023). We investigate the continuous non-monotone DR-submodular maximization pr...
- One Gradient Frank-Wolfe for Decentralized Online Convex and Submodular Optimization (10/30/2022). Decentralized learning has been studied intensively in recent years moti...
