PCA-based Multi Task Learning: a Random Matrix Approach

11/01/2021
by Malik Tiomoko, et al.

The article proposes and theoretically analyses a computationally efficient multi-task learning (MTL) extension of popular principal component analysis (PCA)-based supervised learning schemes <cit.>. The analysis reveals that (i) by default, learning may dramatically fail by suffering from negative transfer, but that (ii) simple counter-measures on the data labels avert negative transfer and necessarily result in improved performance. Supporting experiments on synthetic and real data benchmarks show that the proposed method achieves performance comparable to state-of-the-art MTL methods at a significantly reduced computational cost.
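To make the setting concrete, below is a minimal sketch of a PCA-based supervised baseline extended to two tasks, assuming scikit-learn. The pooled-data PCA subspace, the per-task logistic-regression readout, and the synthetic data generator are illustrative choices only; they are not the exact scheme analysed in the paper, and the label counter-measures the abstract mentions are not implemented here.

# Sketch: two related binary tasks sharing one PCA subspace.
# Illustrative only; not the paper's exact method.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_task(shift, n=200, p=100):
    # Binary labels; class means at +/- shift in a p-dim feature space.
    y = rng.integers(0, 2, size=n)
    X = rng.normal(size=(n, p)) + np.outer(2 * y - 1, shift)
    return X, y

mean_dir = rng.normal(size=100) / 10.0
(X1, y1), (X2, y2) = make_task(mean_dir), make_task(0.9 * mean_dir)

# Fit a single PCA on the pooled data so both tasks share a
# low-dimensional subspace (the cross-task transfer mechanism here).
pca = PCA(n_components=5).fit(np.vstack([X1, X2]))

# Train a per-task classifier on the shared PCA scores.
clf1 = LogisticRegression().fit(pca.transform(X1), y1)
clf2 = LogisticRegression().fit(pca.transform(X2), y2)

print("task 1 train acc:", clf1.score(pca.transform(X1), y1))
print("task 2 train acc:", clf2.score(pca.transform(X2), y2))

When the tasks are related (similar class-mean directions, as above), the shared subspace concentrates the useful signal for both tasks; when they are not, the pooled PCA can capture directions harmful to one task, which is the negative-transfer failure mode the paper's label corrections are designed to avert.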


Related research

10/09/2021
Multi-task learning on the edge: cost-efficiency and theoretical optimality
This article proposes a distributed multi-task learning (MTL) algorithm ...

09/03/2020
Large Dimensional Analysis and Improvement of Multi Task Learning
Multi Task Learning (MTL) efficiently leverages useful information conta...

11/26/2011
Learning a Factor Model via Regularized PCA
We consider the problem of learning a linear factor model. We propose a ...

07/07/2023
Mitigating Negative Transfer with Task Awareness for Sexism, Hate Speech, and Toxic Language Detection
This paper proposes a novel approach to mitigate the negative transfer...

10/23/2022
Principal Component Classification
We propose to directly compute classification estimates by learning feat...

02/21/2016
Multi-Task Learning with Labeled and Unlabeled Tasks
In multi-task learning, a learner is given a collection of prediction ta...
