PCA-based Multi Task Learning: a Random Matrix Approach

11/01/2021
by Malik Tiomoko et al.

The article proposes and theoretically analyses a computationally efficient multi-task learning (MTL) extension of popular principal component analysis (PCA)-based supervised learning schemes <cit.>. The analysis reveals that (i) by default, learning may dramatically fail by suffering from negative transfer, but that (ii) simple counter-measures on the data labels avert negative transfer and necessarily result in improved performance. Supporting experiments on synthetic and real-data benchmarks show that the proposed method achieves performance comparable to state-of-the-art MTL methods, but at a significantly reduced computational cost.
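The paper itself derives its label counter-measures through a random matrix analysis, which the abstract only alludes to. As a rough illustration of the flavour of PCA-based MTL only, the minimal sketch below (the toy Gaussian data, the function make_task, and the one-component scoring rule are all our assumptions, not the authors' actual algorithm) pools two synthetic binary tasks and classifies with the leading eigenvector of the label-weighted second-moment matrix, a simple "supervised PCA" direction.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(mean_shift, n=200, p=50):
    """Toy binary task: two Gaussian classes separated along mean_shift."""
    y = rng.choice([-1.0, 1.0], size=n)
    X = rng.standard_normal((n, p)) + np.outer(y, mean_shift)
    return X, y

# Two related tasks whose class-mean directions are close but not identical.
p = 50
mu1 = np.zeros(p); mu1[0] = 1.0
mu2 = np.zeros(p); mu2[0] = 0.8; mu2[1] = 0.4
X1, y1 = make_task(mu1, p=p)
X2, y2 = make_task(mu2, p=p)

# Naive MTL-by-pooling: stack both tasks and take the leading eigenvector
# of the label-weighted second-moment matrix (a one-component
# "supervised PCA" direction). This is an illustrative baseline, not the
# paper's optimized scheme.
X = np.vstack([X1, X2])
y = np.concatenate([y1, y2])
M = X.T @ (y[:, None] * X) / len(y)   # p x p, symmetric: X^T diag(y) X
evals, evecs = np.linalg.eigh(M)
w = evecs[:, -1]
w *= np.sign(y1 @ (X1 @ w))           # resolve the eigenvector sign ambiguity

# Fresh test data from task 1.
Xt, yt = make_task(mu1, p=p)
print("task-1 test accuracy:", np.mean(np.sign(Xt @ w) == yt))
```

In this toy setting, pooling helps because the two tasks' class-mean directions are well aligned; when they are not, naive pooling can hurt the target task, which is exactly the negative-transfer regime the abstract warns about and that the paper's label counter-measures are designed to avert.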


