AdaTask: Adaptive Multitask Online Learning

05/31/2022
by Pierre Laforgue, et al.

We introduce and analyze AdaTask, a multitask online learning algorithm that adapts to the unknown structure of the tasks. When the N tasks are stochastically activated, we show that the regret of AdaTask is better, by a factor that can be as large as √(N), than the regret achieved by running N independent algorithms, one for each task. AdaTask can be seen as a comparator-adaptive version of Follow-the-Regularized-Leader with a Mahalanobis norm potential. Through a variational formulation of this potential, our analysis reveals how AdaTask jointly learns the tasks and their structure. Experiments supporting our findings are presented.
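To make the setting concrete, below is a minimal sketch (not the AdaTask algorithm itself) of Follow-the-Regularized-Leader with a fixed Mahalanobis potential over the stacked task vectors, where only the activated task receives a gradient at each round. The complete-graph Laplacian coupling, the `coupling` strength, and the step size `eta` are illustrative assumptions; AdaTask instead adapts the interaction structure and the comparator-dependent scaling online.

```python
import numpy as np


def multitask_ftrl(gradients, active_tasks, N, d, eta=0.1, coupling=1.0):
    """Quadratic-potential FTRL over N stacked d-dimensional task vectors.

    gradients: one (d,) gradient per round, for that round's active task.
    active_tasks: the index of the task activated at each round.
    """
    # Hypothetical fixed coupling: Laplacian of the complete graph on N tasks.
    laplacian = np.eye(N) - np.ones((N, N)) / N
    A = np.eye(N) + coupling * laplacian        # positive definite interaction matrix
    M = np.kron(A, np.eye(d))                   # Mahalanobis potential 0.5 * w^T M w
    M_inv = np.linalg.inv(M)

    cum_grad = np.zeros(N * d)                  # running sum of linearized gradients
    iterates = []
    for g, i in zip(gradients, active_tasks):
        g_full = np.zeros(N * d)
        g_full[i * d:(i + 1) * d] = g           # only the active task gets feedback
        cum_grad += g_full
        # Closed-form FTRL step for a quadratic potential and linear losses:
        # w_{t+1} = argmin_w <cum_grad, w> + (1/(2*eta)) * w^T M w
        w = -eta * M_inv @ cum_grad
        iterates.append(w.reshape(N, d))
    return iterates


# Toy usage with random gradients and uniformly sampled active tasks.
rng = np.random.default_rng(0)
T, N, d = 50, 5, 3
tasks = rng.integers(0, N, size=T)
grads = [rng.normal(size=d) for _ in range(T)]
ws = multitask_ftrl(grads, tasks, N, d)
print(ws[-1].shape)  # (5, 3): one parameter vector per task
```

With coupling set to zero the update decouples into N independent FTRL runs; a positive coupling ties the task iterates together, which is roughly the regime where the √(N) improvement over independent learners described above can materialize.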
