A Unified Analysis Method for Online Optimization in Normed Vector Space

12/22/2021
by Qingxin Meng, et al.

We present a unified analysis method that relies on the generalized cosine rule and ϕ-convexity for online optimization in normed vector space, using dynamic regret as the performance metric. In combination with the update rules, we start from strategy S (a two-parameter variant strategy covering Optimistic-FTRL with surrogate linearized losses) and obtain S-I (the type-I relaxation variant of S) and S-II (the type-II relaxation variant of S, which is Optimistic-MD) by relaxation. The regret bounds for S-I and S-II are the tightest possible. As instantiations, the regret bounds of normalized exponentiated subgradient and greedy/lazy projection improve on the currently known optimal results. By replacing the losses of the online game with monotone operators and extending the definition of regret, namely regret^n, we extend online convex optimization to online monotone optimization, which broadens the application scope of S-I and S-II.
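To make the abstract concrete, here is a minimal sketch of one of the instantiations it names, the normalized exponentiated subgradient method (entropic mirror descent on the probability simplex). This is a standard textbook form of the algorithm, not the paper's specific two-parameter strategy S; the function name, step size, and loss sequence below are illustrative assumptions.

```python
import numpy as np

def exponentiated_subgradient(subgradients, eta=0.1):
    """Normalized exponentiated subgradient (entropic mirror descent)
    on the probability simplex: a standard textbook sketch, not the
    paper's strategy S. `subgradients` is the online sequence of
    subgradient vectors g_t; returns the list of iterates x_t."""
    d = len(subgradients[0])
    x = np.full(d, 1.0 / d)  # start at the uniform distribution
    iterates = [x.copy()]
    for g in subgradients:
        # multiplicative update, then normalize back onto the simplex
        w = x * np.exp(-eta * np.asarray(g, dtype=float))
        x = w / w.sum()
        iterates.append(x.copy())
    return iterates

# Illustrative run: linear losses f_t(x) = <g_t, x> with a fixed
# subgradient that favors coordinate 0.
gs = [np.array([0.0, 1.0, 1.0])] * 50
xs = exponentiated_subgradient(gs, eta=0.5)
```

Over the 50 rounds, the iterate's probability mass concentrates on the low-loss coordinate, which is the behavior whose regret the paper bounds.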


