Multitask Learning with No Regret: from Improved Confidence Bounds to Active Learning

08/03/2023
by   Pier Giuseppe Sessa, et al.

Multitask learning is a powerful framework that enables one to simultaneously learn multiple related tasks by sharing information between them. Quantifying uncertainty in the estimated tasks is of pivotal importance for many downstream applications, such as online or active learning. In this work, we provide novel multitask confidence intervals in the challenging agnostic setting, i.e., when neither the similarity between tasks nor the tasks' features are available to the learner. The obtained intervals do not require i.i.d. data and can be directly applied to bound the regret in online learning. Through a refined analysis of the multitask information gain, we obtain new regret guarantees that, depending on a task similarity parameter, can significantly improve over treating tasks independently. We further propose a novel online learning algorithm that achieves such improved regret without knowing this parameter in advance, i.e., automatically adapting to task similarity. As a second key application of our results, we introduce a novel multitask active learning setup where several tasks must be simultaneously optimized, but only one of them can be queried for feedback by the learner at each round. For this problem, we design a no-regret algorithm that uses our confidence intervals to decide which task should be queried. Finally, we empirically validate our bounds and algorithms on synthetic and real-world (drug discovery) data.
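The paper's algorithms are not reproduced in this abstract, but the core active-learning idea it describes (use per-task confidence intervals to decide which task to query each round) can be sketched informally. The snippet below is a hypothetical illustration, not the authors' method: it builds a simple Gaussian-process-style confidence bound per task and queries the task whose estimate is most uncertain. All names (`TaskModel`, `select_task`, the `beta` scaling) are illustrative assumptions.

```python
import numpy as np

def rbf(x, y, ell=0.5):
    # Squared-exponential kernel between scalar inputs (illustrative choice).
    return np.exp(-((x - y) ** 2) / (2 * ell ** 2))

class TaskModel:
    """Per-task kernel regression with a UCB-style confidence width."""
    def __init__(self, noise=0.1):
        self.X, self.y, self.noise = [], [], noise

    def update(self, x, y):
        # Record one observed (query point, feedback) pair for this task.
        self.X.append(x)
        self.y.append(y)

    def confidence(self, x, beta=2.0):
        # Return (posterior mean, confidence half-width) at point x.
        if not self.X:
            return 0.0, beta  # prior: mean 0, unit standard deviation
        X = np.array(self.X)
        K = rbf(X[:, None], X[None, :]) + self.noise ** 2 * np.eye(len(X))
        k = rbf(X, x)
        w = np.linalg.solve(K, k)
        mu = w @ np.array(self.y)
        var = max(rbf(x, x) - k @ w, 1e-12)
        return mu, beta * np.sqrt(var)

def select_task(models, x):
    # Query the task whose prediction at x has the widest confidence interval.
    widths = [m.confidence(x)[1] for m in models]
    return int(np.argmax(widths))
```

For example, with two tasks where only task 0 has been observed near the query point, `select_task` directs feedback to the still-uncertain task 1. The actual paper treats the harder agnostic setting (unknown task similarity, non-i.i.d. data), which this independent-tasks sketch does not capture.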


Related research

05/31/2022 · AdaTask: Adaptive Multitask Online Learning
We introduce and analyze AdaTask, a multitask online learning algorithm ...

03/24/2021 · Active Multitask Learning with Committees
The cost of annotating training data has traditionally been a bottleneck...

10/28/2021 · Open Problem: Tight Online Confidence Intervals for RKHS Elements
Confidence intervals are a crucial building block in the analysis of var...

06/04/2021 · Multitask Online Mirror Descent
We introduce and analyze MT-OMD, a multitask generalization of Online Mi...

04/12/2016 · Confidence Decision Trees via Online and Active Learning for Streaming (BIG) Data
Decision tree classifiers are a widely used tool in data stream mining. ...

11/13/2018 · Community Exploration: From Offline Optimization to Online Learning
We introduce the community exploration problem that has many real-world ...

03/21/2019 · A Principled Approach for Learning Task Similarity in Multitask Learning
Multitask learning aims at solving a set of related tasks simultaneously...
