MAML and ANIL Provably Learn Representations

02/07/2022
by Liam Collins et al.

Recent empirical evidence has driven conventional wisdom to believe that gradient-based meta-learning (GBML) methods perform well at few-shot learning because they learn an expressive data representation that is shared across tasks. However, the mechanics of GBML have remained largely mysterious from a theoretical perspective. In this paper, we prove that two well-known GBML methods, MAML and ANIL, as well as their first-order approximations, are capable of learning a common representation among a given set of tasks. Specifically, in the well-known multi-task linear representation learning setting, they are able to recover the ground-truth representation at an exponentially fast rate. Moreover, our analysis illuminates that the driving force causing MAML and ANIL to recover the underlying representation is that they adapt the final layer of their model, which harnesses the underlying task diversity to improve the representation in all directions of interest. To the best of our knowledge, these are the first results to show that MAML and/or ANIL learn expressive representations and to rigorously explain why they do so.
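To make the mechanism in the abstract concrete, here is a minimal NumPy sketch of first-order ANIL in a toy multi-task linear representation setting: the inner loop adapts only the final-layer head, and the outer loop updates the representation. All dimensions, step sizes, the noiseless data model, and the single inner gradient step are illustrative assumptions for this sketch, not the paper's exact setup or rates.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n, T = 10, 2, 100, 20  # ambient dim, representation dim, samples/task, tasks/iteration

# Ground-truth low-dimensional representation (orthonormal columns).
B_star, _ = np.linalg.qr(rng.standard_normal((d, k)))

def subspace_dist(B):
    """Frobenius distance between span(B_star) and its projection onto span(B)."""
    Q, _ = np.linalg.qr(B)
    return np.linalg.norm(B_star - Q @ Q.T @ B_star)

def anil_step(B, alpha=0.4, beta=0.2):
    """One outer iteration of first-order ANIL on the squared loss:
    the inner loop adapts only the head w; the outer loop updates B."""
    grad_B = np.zeros_like(B)
    for _ in range(T):
        w_star = rng.standard_normal(k)        # task-specific ground-truth head
        X = rng.standard_normal((n, d))
        y = X @ (B_star @ w_star)              # noiseless labels (toy assumption)
        # Inner loop: one gradient step on the head only, starting from w = 0.
        w = np.zeros(k)
        w -= alpha * B.T @ X.T @ (X @ (B @ w) - y) / n
        # Outer gradient w.r.t. B, evaluated at the adapted head.
        grad_B += np.outer(X.T @ (X @ (B @ w) - y) / n, w)
    return B - beta * grad_B / T

B = rng.standard_normal((d, k))                # random initial representation
d0 = subspace_dist(B)
for _ in range(500):
    B = anil_step(B)
d1 = subspace_dist(B)
print(f"subspace distance: {d0:.3f} -> {d1:.3f}")
```

Running the sketch shows the column space of B drifting toward that of B_star, mirroring the abstract's point: adapting only the final layer is enough, across diverse tasks, to drive the representation toward the ground truth in all relevant directions.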

Related research

- Provable Meta-Learning of Linear Representations (02/26/2020): Meta-learning, or learning-to-learn, seeks to design algorithms that can...
- Few-Shot Learning via Learning the Representation, Provably (02/21/2020): This paper studies few-shot learning via representation learning, where ...
- Model agnostic methods meta-learn despite misspecifications (03/02/2023): Due to its empirical success on few shot classification and reinforcemen...
- Conditional Meta-Learning of Linear Representations (03/30/2021): Standard meta-learning for representation learning aims to find a common...
- Improving Password Guessing via Representation Learning (10/09/2019): Learning useful representations from unstructured data is one of the cor...
- Meta Representation Learning with Contextual Linear Bandits (05/30/2022): Meta-learning seeks to build algorithms that rapidly learn how to solve ...
- Representation Learning Beyond Linear Prediction Functions (05/31/2021): Recent papers on the theory of representation learning have shown the imp...
