Adaptive Online Learning for Gradient-Based Optimizers

06/01/2019
by Saeed Masoudian, et al.

As application demands for online convex optimization grow, so does the need for methods that simultaneously cover a large class of convex functions and guarantee the lowest possible regret. Known online optimization methods usually perform well only in specific settings, and their performance depends heavily on the geometry of the decision space and the cost functions. In practice, however, this geometric information is often unavailable, making it unclear which algorithm to use. To address this issue, adaptive methods have been proposed that learn parameters such as the step size, Lipschitz constant, and strong-convexity coefficient online, or that compete within specific parametric families such as quadratic regularizers. In this work, we generalize these methods and propose a framework that competes with the best algorithm in a family of expert algorithms. Our framework recovers many well-known adaptive methods, including MetaGrad, MetaGrad+C, and Ader. We also introduce a second algorithm that is computationally more efficient than the first, at the cost of at most a constant-factor increase in regret. Finally, as a representative application of our framework, we study the problem of learning the best regularizer from a family of regularizers for Online Mirror Descent. Empirically, we support our theoretical findings by learning the best regularizer on the simplex and the l_2-ball in a multiclass learning problem.
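To make the expert-aggregation idea concrete, below is a minimal Python sketch of the kind of construction the abstract describes: each expert is an Online Mirror Descent instance with its own regularizer (entropic and Euclidean, both on the simplex), and a Hedge-style meta-learner maintains weights over the experts and plays a convex combination of their iterates. The class names, step sizes, and the choice of Hedge as the meta-algorithm are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

class OMDEntropic:
    """OMD on the probability simplex with the entropic regularizer
    (exponentiated gradient / multiplicative weights)."""
    def __init__(self, dim, eta=0.1):
        self.x = np.full(dim, 1.0 / dim)
        self.eta = eta

    def predict(self):
        return self.x

    def update(self, grad):
        w = self.x * np.exp(-self.eta * grad)
        self.x = w / w.sum()

class OMDEuclidean:
    """OMD on the simplex with the quadratic regularizer
    (projected online gradient descent)."""
    def __init__(self, dim, eta=0.1):
        self.x = np.full(dim, 1.0 / dim)
        self.eta = eta

    def predict(self):
        return self.x

    def update(self, grad):
        y = self.x - self.eta * grad
        # Euclidean projection onto the simplex (sorting-based).
        u = np.sort(y)[::-1]
        css = np.cumsum(u) - 1.0
        rho = np.nonzero(u * np.arange(1, len(y) + 1) > css)[0][-1]
        theta = css[rho] / (rho + 1.0)
        self.x = np.maximum(y - theta, 0.0)

class MetaLearner:
    """Hedge over expert OMD instances, fed the linearized loss
    <grad, expert_iterate> of each expert every round."""
    def __init__(self, experts, eta=0.5):
        self.experts = experts
        self.eta = eta
        self.w = np.ones(len(experts)) / len(experts)

    def predict(self):
        xs = np.array([e.predict() for e in self.experts])
        return self.w @ xs  # convex combination of expert iterates

    def update(self, grad):
        xs = np.array([e.predict() for e in self.experts])
        losses = xs @ grad                 # linearized expert losses
        self.w *= np.exp(-self.eta * losses)
        self.w /= self.w.sum()
        for e in self.experts:
            e.update(grad)

# Usage: online linear costs on a 5-dimensional simplex.
rng = np.random.default_rng(0)
dim = 5
meta = MetaLearner([OMDEntropic(dim), OMDEuclidean(dim)])
for t in range(1000):
    x = meta.predict()
    grad = rng.normal(size=dim)  # gradient of the round-t cost at x
    meta.update(grad)
print("final expert weights:", meta.w)
```

Under this sketch, whichever regularizer better matches the geometry of the losses should accumulate the meta-learner's weight over time, and the standard Hedge guarantee bounds the meta-learner's extra regret relative to the best single expert.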
