Improved Dynamic Regret for Online Frank-Wolfe

02/11/2023
by Yuanyu Wan, et al.

To deal with non-stationary online problems with complex constraints, we investigate the dynamic regret of online Frank-Wolfe (OFW), an efficient projection-free algorithm for online convex optimization. It is well known that in offline optimization, the smoothness of functions, and the strong convexity of functions combined with specific properties of the constraint set, can be exploited to achieve fast convergence rates for the Frank-Wolfe (FW) algorithm. For OFW, however, previous studies only establish a dynamic regret bound of O(√(T)(1+V_T+√(D_T))) by utilizing the convexity of problems, where T is the number of rounds, V_T is the function variation, and D_T is the gradient variation. In this paper, we derive improved dynamic regret bounds for OFW by extending the fast convergence rates of FW from offline optimization to online optimization. The key technique for this extension is to set the step size of OFW with a line search rule. With this rule, we first show that the dynamic regret bound of OFW can be improved to O(√(T(1+V_T))) for smooth functions. Second, we achieve a better dynamic regret bound of O((1+V_T)^(2/3)T^(1/3)) when the functions are smooth and strongly convex and the constraint set is strongly convex. Finally, for smooth and strongly convex functions whose minimizers lie in the interior of the constraint set, we demonstrate that the dynamic regret of OFW reduces to O(1+V_T), and can be further strengthened to O(min{P_T^∗, S_T^∗, V_T}+1) by performing a constant number of FW iterations per round, where P_T^∗ and S_T^∗ denote the path length and the squared path length of the minimizers, respectively.
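For reference, the non-stationarity measures in the abstract are commonly defined as follows in the dynamic-regret literature; this is an assumed standard convention, and the paper's exact indexing and constants may differ:

```latex
V_T   = \sum_{t=2}^{T} \sup_{\mathbf{x}\in\mathcal{K}} \left| f_t(\mathbf{x}) - f_{t-1}(\mathbf{x}) \right|,
\qquad
P_T^{*} = \sum_{t=2}^{T} \left\| \mathbf{x}_t^{*} - \mathbf{x}_{t-1}^{*} \right\|,
\qquad
S_T^{*} = \sum_{t=2}^{T} \left\| \mathbf{x}_t^{*} - \mathbf{x}_{t-1}^{*} \right\|^{2},
```

where \mathcal{K} is the constraint set and \mathbf{x}_t^* is a minimizer of f_t over \mathcal{K}.

The key technique highlighted above is replacing the usual fixed or decaying step size of OFW with a line search rule. The following Python sketch shows what a single OFW round with such a rule could look like; the L2-ball linear minimization oracle, the grid-based line search, and the toy quadratic losses are illustrative assumptions chosen for self-containment, not the paper's exact algorithm or analysis.

```python
import numpy as np

def lmo_l2_ball(grad, radius=1.0):
    """Linear minimization oracle for an L2 ball of the given radius:
    returns argmin_{||v|| <= radius} <grad, v>."""
    norm = np.linalg.norm(grad)
    if norm == 0.0:
        return np.zeros_like(grad)
    return -radius * grad / norm

def line_search_step(f, x, v, num_grid=100):
    """Approximate line search over sigma in [0, 1] for
    sigma -> f(x + sigma * (v - x)), using a simple grid
    (an exact or bisection search could be used for smooth f)."""
    sigmas = np.linspace(0.0, 1.0, num_grid + 1)
    values = [f(x + s * (v - x)) for s in sigmas]
    return sigmas[int(np.argmin(values))]

def ofw_round(x_t, f_t, grad_f_t, lmo=lmo_l2_ball):
    """One round of OFW with a line-search step size:
    1) query the linear minimization oracle at the current gradient,
    2) choose sigma_t by (approximate) line search on the observed loss f_t,
    3) move along the FW direction, which keeps the iterate feasible."""
    g_t = grad_f_t(x_t)
    v_t = lmo(g_t)
    sigma_t = line_search_step(f_t, x_t, v_t)
    return x_t + sigma_t * (v_t - x_t)

# Toy usage: time-varying quadratics f_t(x) = ||x - c_t||^2 over the unit ball,
# mimicking a non-stationary (drifting) sequence of losses.
rng = np.random.default_rng(0)
x = np.zeros(5)
for t in range(10):
    c_t = 0.5 * rng.standard_normal(5)
    f_t = lambda x, c=c_t: float(np.sum((x - c) ** 2))
    grad_f_t = lambda x, c=c_t: 2.0 * (x - c)
    x = ofw_round(x, f_t, grad_f_t)  # projection-free update
```

Each round needs only a linear minimization oracle over the constraint set rather than a projection, which is what makes OFW attractive for complex constraints; the line search then adapts the step size to the observed loss instead of following a preset schedule.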


Related research:

- Unconstrained Online Optimization: Dynamic Regret Analysis of Strongly Convex and Smooth Problems (06/06/2020)
- Optimistic Online Mirror Descent for Bridging Stochastic and Adversarial Online Convex Optimization (02/09/2023)
- Trading-Off Static and Dynamic Regret in Online Least-Squares and Beyond (09/06/2019)
- Improved Analysis for Dynamic Regret of Strongly Convex and Smooth Functions (06/10/2020)
- Dynamic Regret of Online Mirror Descent for Relatively Smooth Convex Cost Functions (02/25/2022)
- Dynamic Regret for Strongly Adaptive Methods and Optimality of Online KRR (11/22/2021)
- On the Online Frank-Wolfe Algorithms for Convex and Non-convex Optimizations (10/05/2015)
