Online optimization of piecewise Lipschitz functions in changing environments

07/22/2019
by Maria-Florina Balcan et al.

In an online optimization problem we are required to choose a sequence of points from a fixed feasible subset of R^d. After each point is chosen, a function over the set is revealed and we accumulate payoff equal to the function's value at that point. We consider the class of piecewise Lipschitz functions, the most general setting studied in the literature for this problem; it arises naturally in various combinatorial algorithm selection problems where utility functions can have sharp discontinuities. The usual performance metric of 'static' regret measures the gap between the payoff accumulated and that of the best fixed point for the entire duration, and thus fails to capture changing environments. Shifting regret is a useful alternative, which allows for up to s environment shifts. In this work we provide an O(√(sdT log T) + sT^(1-β)) regret bound for β-dispersed functions, where β roughly quantifies the rate at which discontinuities appear in the utility functions in expectation (typically β > 1/2 in problems of practical interest). We show this bound is optimal up to sub-logarithmic factors. We further show how to improve the bounds when selecting from a small pool of experts. We empirically demonstrate a key application of our algorithms to online clustering problems, with a 15-40% relative gain over static-regret-based algorithms on popular benchmarks.
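To make the shifting-regret setting concrete, here is a minimal sketch of a standard approach (not the paper's exact algorithm): the Fixed-Share update of Herbster and Warmuth run over a finite grid of candidate points. Exponential weights alone competes with the single best fixed point; the extra mixing step hedges against up to s environment shifts. The grid discretization, step size eta, and mixing rate alpha are illustrative assumptions.

```python
import numpy as np

def fixed_share_play(payoffs, eta=0.5, alpha=0.05, seed=0):
    """Play T rounds of Fixed-Share over K candidate points.

    payoffs: (T, K) array; payoffs[t, k] is the utility of candidate
             point k at round t, assumed bounded in [0, 1].
    Returns the sequence of chosen indices and the total payoff.
    """
    rng = np.random.default_rng(seed)
    T, K = payoffs.shape
    w = np.full(K, 1.0 / K)            # uniform initial distribution
    choices, total = [], 0.0
    for t in range(T):
        k = rng.choice(K, p=w)         # sample a point from the weights
        choices.append(k)
        total += payoffs[t, k]
        # exponential-weights update on the revealed payoff function
        w = w * np.exp(eta * payoffs[t])
        w /= w.sum()
        # fixed-share mixing: keeps mass on every point, so the learner
        # can recover quickly after an environment shift
        w = (1.0 - alpha) * w + alpha / K
    return choices, total
```

In the dispersed piecewise Lipschitz setting of the paper, the continuous feasible set takes the place of this finite grid, with the weights maintained as a density; the β-dispersion condition is what makes the discretization error controllable.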

Related research

04/07/2016 · Online Optimization of Smoothed Piecewise Constant Functions
We study online optimization of smoothed piecewise constant functions ov...

04/18/2019 · Semi-bandit Optimization in the Dispersed Setting
In this work, we study the problem of online optimization of piecewise L...

10/22/2020 · Regret Bounds without Lipschitz Continuity: Online Learning with Relative-Lipschitz Losses
In online convex optimization (OCO), Lipschitz continuity of the functio...

06/26/2019 · Dual Adaptivity: A Universal Algorithm for Minimizing the Adaptive Regret of Convex Functions
To deal with changing environments, a new performance measure---adaptive...

06/15/2021 · Improved Regret Bounds for Online Submodular Maximization
In this paper, we consider an online optimization problem over T rounds ...

08/24/2020 · Online Convex Optimization Perspective for Learning from Dynamically Revealed Preferences
We study the problem of online learning (OL) from revealed preferences: ...

06/06/2022 · Efficient Minimax Optimal Global Optimization of Lipschitz Continuous Multivariate Functions
In this work, we propose an efficient minimax optimal global optimizatio...
