Black-Box Reductions for Parameter-free Online Learning in Banach Spaces

02/17/2018
by Ashok Cutkosky, et al.

We introduce several new black-box reductions that significantly improve the design of adaptive and parameter-free online learning algorithms by simplifying analysis, improving regret guarantees, and sometimes even improving runtime. We reduce parameter-free online learning to online exp-concave optimization, we reduce optimization in a Banach space to one-dimensional optimization, and we reduce optimization over a constrained domain to unconstrained optimization. All of our reductions run as fast as online gradient descent. We use our new techniques to improve upon the previous best regret bounds for parameter-free learning, and do so for arbitrary norms.
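To make the flavor of these reductions concrete, below is a minimal Python sketch of the magnitude/direction decomposition suggested by the abstract: a one-dimensional parameter-free learner (here a simple coin-betting scheme) picks the scale of the prediction, while online gradient descent restricted to the unit ball picks the direction, and the combined iterate is their product. The class names and the particular coin-betting rule are illustrative assumptions, not the paper's exact pseudocode, and the sketch assumes gradients with norm at most 1.

# Minimal sketch of a "dimension-free" reduction: magnitude from a 1D
# parameter-free learner, direction from projected online gradient descent.
# Illustrative only; assumes ||g_t|| <= 1 for every gradient.
import numpy as np


class CoinBetting1D:
    """One-dimensional parameter-free learner in the coin-betting style."""

    def __init__(self, eps=1.0):
        self.wealth = eps      # initial wealth
        self.grad_sum = 0.0    # running sum of negative gradients
        self.t = 0

    def predict(self):
        # Bet a fraction of current wealth proportional to past gradient sum.
        if self.t == 0:
            return 0.0
        return (self.grad_sum / self.t) * self.wealth

    def update(self, g):
        z = self.predict()
        self.wealth -= g * z   # wealth grows when the bet opposes the gradient
        self.grad_sum -= g
        self.t += 1


class UnitBallOGD:
    """Online gradient descent projected onto the (L2) unit ball."""

    def __init__(self, dim, lr=0.1):
        self.x = np.zeros(dim)
        self.lr = lr

    def predict(self):
        return self.x

    def update(self, g):
        self.x -= self.lr * g
        norm = np.linalg.norm(self.x)
        if norm > 1.0:         # project back onto the unit ball
            self.x /= norm


def parameter_free_learner(dim, grads):
    """Play w_t = z_t * x_t, combining the magnitude and direction learners."""
    mag, direction = CoinBetting1D(), UnitBallOGD(dim)
    for g in grads:
        z, x = mag.predict(), direction.predict()
        yield z * x                  # combined prediction
        mag.update(np.dot(g, x))     # scalar surrogate gradient for the 1D learner
        direction.update(g)          # full gradient for the direction learner

Run on any stream of (sub)gradients, the combined prediction adapts its scale to the comparator's norm without a hand-tuned learning rate, which is the sense in which such reductions yield "parameter-free" algorithms.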


Related research

Combining Online Learning Guarantees (02/24/2019)
We show how to take any two parameter-free online learning algorithms wi...

Online Learning Algorithms for Quaternion ARMA Model (04/26/2019)
In this paper, we address the problem of adaptive learning for autoregre...

Optimal Black-Box Reductions Between Optimization Objectives (03/17/2016)
The diverse world of machine learning applications has given rise to a p...

Parameter-free Mirror Descent (02/26/2022)
We develop a modified online mirror descent framework that is suitable f...

Lipschitz and Comparator-Norm Adaptivity in Online Learning (02/27/2020)
We study Online Convex Optimization in the unbounded setting where neith...

The Many Faces of Exponential Weights in Online Learning (02/21/2018)
A standard introduction to online learning might place Online Gradient D...

Hierarchical Partitioning Forecaster (05/22/2023)
In this work we consider a new family of algorithms for sequential predi...
