Online Parameter-Free Learning of Multiple Low Variance Tasks

07/11/2020
by Giulia Denevi, et al.

We propose a method to learn a common bias vector for a growing sequence of low-variance tasks. Unlike state-of-the-art approaches, our method does not require tuning any hyper-parameter. Our approach is presented in the non-statistical setting and comes in two variants: the "aggressive" one updates the bias after each datapoint, while the "lazy" one updates the bias only at the end of each task. We derive an across-tasks regret bound for the method. Compared to state-of-the-art approaches, the aggressive variant achieves faster rates, while the lazy one recovers standard rates without requiring hyper-parameter tuning. We then adapt the methods to the statistical setting: the aggressive variant becomes a multi-task learning method, while the lazy one becomes a meta-learning method. Experiments confirm the effectiveness of our methods in practice.
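At a high level, the "lazy" variant alternates between two loops: an inner online algorithm that solves each task starting from the current bias, and an outer update that moves the bias once the task is finished. Below is a minimal Python sketch of this structure only; the inner learner (online gradient descent with a 1/sqrt(t) step size), the squared loss, and the plain averaging used for the outer bias update are all illustrative simplifications, not the paper's parameter-free updates.

```python
import numpy as np

def run_task_ogd(X, y, bias):
    """Inner loop: online gradient descent on one regression task,
    started at (and thereby biased toward) the current bias vector.
    The 1/sqrt(t) step size and squared loss are assumed choices,
    not the paper's parameter-free update."""
    w = bias.copy()
    for t, (x, y_t) in enumerate(zip(X, y), start=1):
        grad = 2.0 * (w @ x - y_t) * x      # gradient of the squared loss
        w = w - grad / np.sqrt(t)           # decaying step size (assumed)
    return w

def lazy_meta_learner(tasks, d):
    """Outer loop, 'lazy' variant: the bias is updated only once each
    task is finished. The running average of per-task solutions used
    here is a hyper-parameter-free stand-in for the parameter-free
    online update developed in the paper."""
    bias = np.zeros(d)
    solutions = []
    for X, y in tasks:                      # tasks arrive sequentially
        w = run_task_ogd(X, y, bias)
        solutions.append(w)
        bias = np.mean(solutions, axis=0)   # move bias toward the tasks' mean
    return bias
```

For example, lazy_meta_learner([(X1, y1), (X2, y2)], d) would process the two tasks in sequence and return the resulting bias; the "aggressive" variant would instead update the bias inside the per-datapoint loop.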
