
Trace norm regularization for multi-task learning with scarce data

by Etienne Boursier, et al.

Multi-task learning leverages structural similarities between multiple tasks to learn despite very few samples. Motivated by the recent success of neural networks applied to data-scarce tasks, we consider a linear low-dimensional shared representation model. Despite an extensive literature, existing theoretical results either guarantee weak estimation rates or require a large number of samples per task. This work provides the first estimation error bound for the trace norm regularized estimator when the number of samples per task is small. The advantages of trace norm regularization for learning data-scarce tasks extend to meta-learning and are confirmed empirically on synthetic datasets.
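The estimator studied here can be sketched concretely. Below is a minimal illustration (not the authors' code) of trace norm regularized multi-task linear regression: with per-task data (X_t, y_t) and a coefficient matrix W whose t-th column is task t's regressor, one minimizes the summed squared losses plus λ·‖W‖_* via proximal gradient descent, whose proximal step is singular value thresholding. All names (`trace_norm_mtl`, `svt`) and the synthetic setup are illustrative assumptions.

```python
import numpy as np

def svt(W, tau):
    # Singular value thresholding: proximal operator of tau * ||.||_* (nuclear norm).
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def trace_norm_mtl(Xs, ys, lam, n_iter=500):
    """Proximal gradient for min_W 0.5 * sum_t ||X_t W[:, t] - y_t||^2 + lam * ||W||_*.

    Xs, ys: lists of per-task design matrices (n_t x d) and responses (n_t,).
    """
    d, T = Xs[0].shape[1], len(Xs)
    W = np.zeros((d, T))
    # Step size 1/L, where L bounds the largest per-task Hessian spectral norm.
    step = 1.0 / max(np.linalg.norm(X, 2) ** 2 for X in Xs)
    for _ in range(n_iter):
        # Gradient of the smooth part: column t is X_t^T (X_t w_t - y_t).
        G = np.column_stack([X.T @ (X @ W[:, t] - y)
                             for t, (X, y) in enumerate(zip(Xs, ys))])
        W = svt(W - step * G, step * lam)
    return W
```

In the data-scarce regime of the paper, each task has fewer samples than features (n_t < d), so per-task least squares is underdetermined; the nuclear norm penalty couples the tasks through a shared low-rank structure.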

