Smoothed Online Convex Optimization in High Dimensions via Online Balanced Descent
We study Smoothed Online Convex Optimization, a version of online convex optimization where the learner incurs a penalty for changing her actions between rounds. Given a known Ω(√d) lower bound on the competitive ratio of any online algorithm, where d is the dimension of the action space, we ask under what conditions this bound can be beaten. We introduce a novel algorithmic framework for this problem, Online Balanced Descent (OBD), which works by iteratively projecting the previous point onto a carefully chosen level set of the current cost function so as to balance the switching costs and hitting costs. We demonstrate the generality of the OBD framework by showing how, with different choices of "balance," OBD can improve upon state-of-the-art performance guarantees for both competitive ratio and regret; in particular, OBD is the first algorithm to achieve a dimension-free competitive ratio, 3 + O(1/α), for locally polyhedral costs, where α measures the "steepness" of the costs. We also prove bounds on the dynamic regret of OBD when the balance is performed in the dual space that are dimension-free and imply that OBD has sublinear static regret.
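To make the level-set idea concrete, here is a minimal Python sketch of a single OBD step under a Euclidean switching cost. The helper names (obd_step, project_onto_level_set) and the specific balance rule used here (pick the level so that the movement is roughly beta times the hitting cost) are illustrative assumptions rather than the paper's exact pseudocode, and the projection onto the sublevel set is solved with a generic SciPy constrained optimizer instead of a specialized convex solver.

```python
import numpy as np
from scipy.optimize import minimize


def project_onto_level_set(f, x_prev, level, x0):
    """Euclidean projection of x_prev onto the sublevel set {x : f(x) <= level}.

    Solved with a generic constrained optimizer for illustration only.
    """
    res = minimize(
        lambda x: 0.5 * np.sum((x - x_prev) ** 2),
        x0,
        constraints=[{"type": "ineq", "fun": lambda x: level - f(x)}],
    )
    return res.x


def obd_step(f, x_prev, beta=1.0, tol=1e-4, max_iter=50):
    """One (primal) Online Balanced Descent step, sketched.

    Binary-search the level l between the minimum of f and f(x_prev) so that
    the switching cost ||x_t - x_{t-1}|| is roughly beta times the hitting
    cost f(x_t) = l (the projection lands on the level-set boundary).
    """
    v = minimize(f, x_prev).x          # unconstrained minimizer of the current cost
    lo, hi = f(v), f(x_prev)
    if hi - lo < tol:                  # previous point already (near-)optimal
        return x_prev
    x_t = x_prev
    for _ in range(max_iter):
        level = 0.5 * (lo + hi)
        x_t = project_onto_level_set(f, x_prev, level, x0=v)
        switching = np.linalg.norm(x_t - x_prev)
        if switching > beta * level:   # moved too far: back off to a higher level
            lo = level
        else:                          # move was cheap: a lower level is affordable
            hi = level
        if hi - lo < tol:
            break
    return x_t


# Example usage with a hypothetical sequence of quadratic hitting costs.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.zeros(5)
    for _ in range(3):
        target = rng.normal(size=5)
        f_t = lambda z, c=target: float(np.sum((z - c) ** 2))
        x = obd_step(f_t, x, beta=1.0)
```

The binary search relies on the fact that, as the level decreases, the projection moves farther from the previous point while the hitting cost falls, so the balance point is found monotonically; the dual-space variant mentioned in the abstract balances in a mirror-map geometry instead of the Euclidean one used in this sketch.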