Fully Implicit Online Learning

09/25/2018
by   Chaobing Song, et al.

Regularized online learning is widely used in machine learning. In this paper we analyze a class of regularized online algorithms with both non-linearized losses and non-linearized regularizers, which we call fully implicit online learning (FIOL). We show that, by avoiding the error introduced by linearization, FIOL obtains an extra additive gain in the regret bound. We then show that, by exploiting the structure of the loss and regularizer, each iteration of FIOL can be solved exactly in time comparable to its linearized counterpart, even when no closed-form solution exists. Experiments validate the proposed approaches.
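
As a rough illustration of the idea, the sketch below contrasts a standard linearized proximal update with a fully implicit update for a single online round. The hinge loss, L1 regularizer, step size, and the generic numerical solver are assumptions chosen only for demonstration; the paper's structure-exploiting exact solver is not reproduced here.

# Illustrative sketch (not the paper's algorithm): one online round,
# comparing a linearized proximal update with a fully implicit update.
import numpy as np
from scipy.optimize import minimize

def hinge_loss(w, x, y):
    # Exact (non-linearized) hinge loss on one example.
    return max(0.0, 1.0 - y * np.dot(w, x))

def l1(w, lam=0.1):
    # Exact (non-linearized) L1 regularizer.
    return lam * np.sum(np.abs(w))

def linearized_step(w, x, y, eta=0.5, lam=0.1):
    # Linearized version: (sub)gradient step on the loss, then the
    # soft-thresholding proximal operator of the L1 regularizer.
    g = -y * x if hinge_loss(w, x, y) > 0 else np.zeros_like(w)
    z = w - eta * g
    return np.sign(z) * np.maximum(np.abs(z) - eta * lam, 0.0)

def implicit_step(w, x, y, eta=0.5, lam=0.1):
    # Fully implicit version: minimize the exact loss plus regularizer
    # plus a proximity term; a generic numerical solver stands in for
    # the exact structure-exploiting solution described in the paper.
    obj = lambda v: (hinge_loss(v, x, y) + l1(v, lam)
                     + np.sum((v - w) ** 2) / (2 * eta))
    return minimize(obj, w, method="Nelder-Mead").x

w = np.zeros(3)
x, y = np.array([1.0, -2.0, 0.5]), 1.0
print("linearized:", linearized_step(w, x, y))
print("implicit:  ", implicit_step(w, x, y))

The implicit step keeps the loss and regularizer exact inside the per-round subproblem, which is what avoids the linearization error discussed in the abstract.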

