Large Dimensional Analysis and Improvement of Multi Task Learning

09/03/2020
by Malik Tiomoko, et al.

Multi-task learning (MTL) efficiently leverages useful information contained in multiple related tasks to help improve the generalization performance of all tasks. This article conducts a large dimensional analysis of a simple but, as we shall see, extremely powerful (when carefully tuned) least squares support vector machine (LSSVM) version of MTL, in the regime where the dimension p of the data and their number n grow large at the same rate. Under mild assumptions on the input data, the theoretical analysis of the MTL-LSSVM algorithm first reveals the "sufficient statistics" exploited by the algorithm and their interaction at work. These results demonstrate, as a striking consequence, that the standard approach to MTL-LSSVM is largely suboptimal and can lead to severe negative transfer effects, but that these impairments are easily corrected. These corrections are turned into an improved MTL-LSSVM algorithm which can only benefit from additional data, and whose theoretical performance is also analyzed. As evidenced and theoretically sustained in numerous recent works, these large dimensional results are robust to broad ranges of data distributions, which our present experiments corroborate. Specifically, the article reports a systematically close match between theoretical and empirical performances on popular datasets, which strongly suggests the applicability of the proposed carefully tuned MTL-LSSVM method to real data. This fine-tuning is fully based on the theoretical analysis and, in particular, does not require any cross-validation procedure. Besides, the reported performances on real datasets almost systematically outperform much more elaborate and less intuitive state-of-the-art multi-task and transfer learning methods.
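For context, the LSSVM baseline that the article builds on admits a closed-form training step: unlike the standard SVM, the dual variables solve a single linear system (the classical Suykens–Vandewalle formulation). The sketch below is a minimal generic single-task linear LSSVM classifier for illustration only; it is not the paper's MTL variant nor its theoretically tuned version, and the regularization parameter `gamma` and helper names are assumptions.

```python
import numpy as np

def lssvm_train(X, y, gamma=1.0):
    """Train a linear LSSVM classifier (generic sketch, not the paper's MTL version).

    Solves the dual KKT linear system
        [[0,  y^T          ], [b    ]   [0]
         [y,  Omega + I/gamma]] [alpha] = [1]
    with Omega_ij = y_i * y_j * <x_i, x_j> (linear kernel).
    """
    n = X.shape[0]
    K = X @ X.T                           # linear kernel Gram matrix
    Omega = np.outer(y, y) * K
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]                # alpha, bias b

def lssvm_predict(X_train, y, alpha, b, X_test):
    """Classify test points via the dual decision function."""
    scores = (alpha * y) @ (X_train @ X_test.T) + b
    return np.sign(scores)
```

One point the article exploits is visible here: the whole training step is a single linear solve, which makes the estimator analytically tractable in the large n, p regime.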
