
Trace norm regularization for multi-task learning with scarce data

02/14/2022 · by Etienne Boursier, et al. (EPFL)

Multi-task learning leverages structural similarities between multiple tasks to learn despite very few samples. Motivated by the recent success of neural networks applied to data-scarce tasks, we consider a linear low-dimensional shared representation model. Despite an extensive literature, existing theoretical results either guarantee weak estimation rates or require a large number of samples per task. This work provides the first estimation error bound for the trace norm regularized estimator when the number of samples per task is small. The advantages of trace norm regularization for learning data-scarce tasks extend to meta-learning and are confirmed empirically on synthetic datasets.
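To make the estimator concrete, below is a minimal NumPy sketch of trace norm (nuclear norm) regularized multi-task linear regression in the setting the abstract describes. The solver (proximal gradient descent with singular value thresholding), the step size lr, the penalty lam, and the synthetic data sizes are illustrative assumptions, not the paper's implementation.

import numpy as np

def svt(W, tau):
    # Singular value thresholding: the proximal operator of tau * ||.||_* (trace norm).
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def trace_norm_mtl(Xs, ys, lam, lr=0.01, n_iters=2000):
    # Proximal gradient descent on
    #   (1/2) * sum_t ||X_t w_t - y_t||^2 + lam * ||W||_*,
    # where column t of W (d x T) is the regression vector of task t.
    # lam, lr, and n_iters are illustrative choices (assumption).
    d, T = Xs[0].shape[1], len(Xs)
    W = np.zeros((d, T))
    for _ in range(n_iters):
        grad = np.zeros_like(W)
        for t in range(T):
            # Per-task least-squares gradient.
            grad[:, t] = Xs[t].T @ (Xs[t] @ W[:, t] - ys[t])
        # Gradient step, then the trace norm proximal step.
        W = svt(W - lr * grad, lr * lam)
    return W

# Synthetic data-scarce setting: T tasks sharing a k-dimensional representation,
# with only n < d samples per task.
rng = np.random.default_rng(0)
d, T, k, n = 20, 50, 3, 5
B = rng.normal(size=(d, k))                       # shared low-dimensional representation
W_star = B @ rng.normal(size=(k, T))              # rank-k matrix of task parameters
Xs = [rng.normal(size=(n, d)) for _ in range(T)]
ys = [X @ W_star[:, t] + 0.1 * rng.normal(size=n) for t, X in enumerate(Xs)]
W_hat = trace_norm_mtl(Xs, ys, lam=1.0)

The trace norm is the standard convex surrogate for rank, so the penalty pulls the estimated task vectors toward a common low-dimensional subspace even though each task alone is underdetermined (n < d); that shared structure is what the abstract's estimation bound exploits.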

Related research

05/18/2021 · Sample Efficient Linear Meta-Learning by Alternating Minimization
Meta-learning synthesizes and leverages the knowledge from a given set o...

02/21/2022 · Multi-task Representation Learning with Stochastic Linear Bandits
We study the problem of transfer-learning in the setting of stochastic l...

03/07/2016 · Distributed Multi-Task Learning with Shared Representation
We study the problem of distributed multi-task learning with shared repr...

11/23/2021 · Multi-task manifold learning for small sample size datasets
In this study, we develop a method for multi-task manifold learning. The...

10/07/2022 · Private and Efficient Meta-Learning with Low Rank and Sparse Decomposition
Meta-learning is critical for a variety of practical ML systems – like p...

05/19/2015 · Multi-task additive models with shared transfer functions based on dictionary learning
Additive models form a widely popular class of regression models which r...

09/24/2022 · Trace-based cryptoanalysis of cyclotomic PLWE for the non-split case
We provide an attack against the decision version of PLWE over the cyclo...