Adaptive and Robust Multi-task Learning

02/10/2022 · by Yaqi Duan, et al.

We study the multi-task learning problem that aims to simultaneously analyze multiple datasets collected from different sources and learn one model for each of them. We propose a family of adaptive methods that automatically utilize possible similarities among those tasks while carefully handling their differences. We derive sharp statistical guarantees for the methods and prove their robustness against outlier tasks. Numerical experiments on synthetic and real datasets demonstrate the efficacy of our new methods.
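The abstract's core idea (borrow strength across similar tasks while staying robust to outlier tasks) can be illustrated with a hedged toy sketch, not the paper's actual estimator: each task's naive estimate is shrunk toward a robust cross-task consensus by soft-thresholding, so tasks close to the consensus pool their data while outlier tasks keep their own estimates. The setup, the tuning rule for `lam`, and all variable names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 10 tasks each estimate a scalar mean from 50 samples;
# 8 tasks share the same signal, 2 are outlier tasks with different signals.
n, m = 50, 10
true_means = np.array([1.0] * 8 + [5.0, -3.0])
data = true_means[:, None] + rng.normal(size=(m, n))

task_avgs = data.mean(axis=1)      # independent single-task estimates
center = np.median(task_avgs)      # robust consensus across tasks

# Adaptive step: soft-threshold each task's deviation from the consensus.
# Small deviations (likely noise) shrink to zero, so the task borrows
# strength from the others; large deviations (outlier tasks) survive the
# threshold, which gives robustness against those tasks.
lam = 2.0 * np.sqrt(np.log(m) / n)  # illustrative tuning choice
dev = task_avgs - center
shrunk = np.sign(dev) * np.maximum(np.abs(dev) - lam, 0.0)
estimates = center + shrunk
```

In this sketch the 8 similar tasks end up near the pooled consensus (lower variance than their single-task averages), while the two outlier tasks retain estimates near their own signals rather than being dragged toward the majority.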


Related research

05/16/2023 · When is an SHM problem a Multi-Task-Learning problem?
Multi-task neural networks learn tasks simultaneously to improve individ...

10/22/2022 · Adaptive Data Fusion for Multi-task Non-smooth Optimization
We study the problem of multi-task non-smooth optimization that arises u...

10/09/2021 · Multi-task learning on the edge: cost-efficiency and theoretical optimality
This article proposes a distributed multi-task learning (MTL) algorithm ...

02/21/2016 · Multi-Task Learning with Labeled and Unlabeled Tasks
In multi-task learning, a learner is given a collection of prediction ta...

08/24/2023 · Label Budget Allocation in Multi-Task Learning
The cost of labeling data often limits the performance of machine learni...

01/24/2019 · Communication-Efficient and Decentralized Multi-Task Boosting while Learning the Collaboration Graph
We study the decentralized machine learning scenario where many users co...

03/31/2023 · Learning from Similar Linear Representations: Adaptivity, Minimaxity, and Robustness
Representation multi-task learning (MTL) and transfer learning (TL) have...
