Sample Efficient Linear Meta-Learning by Alternating Minimization

Meta-learning synthesizes and leverages the knowledge from a given set of tasks to rapidly learn new tasks using very little data. Meta-learning of linear regression tasks, where the regressors lie in a low-dimensional subspace, is an extensively studied fundamental problem in this domain. However, existing results either guarantee highly suboptimal estimation errors, or require Ω(d) samples per task (where d is the data dimensionality), thus providing little gain over separately learning each task. In this work, we study a simple alternating minimization method (MLLAM), which alternately learns the low-dimensional subspace and the regressors. We show that, for a constant subspace dimension, MLLAM obtains nearly-optimal estimation error, despite requiring only Ω(log d) samples per task. However, the number of samples required per task grows logarithmically with the number of tasks. To remedy this in the low-noise regime, we propose a novel task subset selection scheme that ensures the same strong statistical guarantee as MLLAM, even with a bounded number of samples per task for an arbitrarily large number of tasks.
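The alternating scheme described above can be illustrated with a minimal NumPy sketch: per-task data y_t = X_t U* w_t* + noise, where U* is a shared d×k subspace basis and each task has n < d samples. The sketch alternates between (1) fixing U and solving each task's k-dimensional least squares for w_t, and (2) fixing the w_t and solving a joint least squares for U via vec(U). This is an illustrative toy version with random initialization, not the paper's exact MLLAM algorithm (details such as initialization and the task subset selection scheme are not reproduced); all dimensions and the noise level are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, T, n = 20, 2, 50, 10  # ambient dim, subspace dim, tasks, samples per task (n < d)

# Ground truth: shared orthonormal subspace U_star, per-task regressors W_star
U_star, _ = np.linalg.qr(rng.standard_normal((d, k)))
W_star = rng.standard_normal((T, k))
X = rng.standard_normal((T, n, d))
y = np.einsum('tnd,dk,tk->tn', X, U_star, W_star) + 0.01 * rng.standard_normal((T, n))

# Alternating minimization from a random orthonormal initialization
U, _ = np.linalg.qr(rng.standard_normal((d, k)))
for _ in range(50):
    # Step 1: fix U, solve each task's k-dimensional least squares for w_t
    W = np.stack([np.linalg.lstsq(X[t] @ U, y[t], rcond=None)[0] for t in range(T)])
    # Step 2: fix {w_t}, solve for U jointly: X_t U w_t = (w_t^T kron X_t) vec(U)
    A = np.vstack([np.kron(W[t][None, :], X[t]) for t in range(T)])  # (T*n, k*d)
    u = np.linalg.lstsq(A, y.reshape(-1), rcond=None)[0]
    U, _ = np.linalg.qr(u.reshape(d, k, order='F'))  # re-orthonormalize

# Subspace estimation error via projection-matrix distance
err = np.linalg.norm(U @ U.T - U_star @ U_star.T)
print(f"subspace error: {err:.4f}")
```

Note that each task alone is underdetermined (n = 10 samples in d = 20 dimensions), yet pooling T tasks recovers the shared subspace, which is the sample-efficiency gain the abstract refers to.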
