How Fine-Tuning Allows for Effective Meta-Learning

05/05/2021
by Kurtland Chua, et al.

Representation learning has been widely studied in the context of meta-learning, enabling rapid learning of new tasks through shared representations. Recent works such as MAML have explored using fine-tuning-based metrics, which measure the ease with which fine-tuning can achieve good performance, as proxies for obtaining representations. We present a theoretical framework for analyzing representations derived from a MAML-like algorithm, assuming the available tasks use approximately the same underlying representation. We then provide risk bounds on the best predictor found by fine-tuning via gradient descent, demonstrating that the algorithm can provably leverage the shared structure. The upper bound applies to general function classes, which we demonstrate by instantiating the guarantees of our framework in the logistic regression and neural network settings. In contrast, we establish the existence of settings where any algorithm that uses a representation trained with no consideration for task-specific fine-tuning performs, in the worst case, no better than a learner with no access to source tasks. This separation result underscores the benefit of fine-tuning-based methods, such as MAML, over methods with "frozen representation" objectives in few-shot learning.
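The contrast the abstract draws, between adapting only a task-specific head on top of a frozen representation and also fine-tuning the representation itself via gradient descent, can be illustrated with a minimal NumPy sketch. All names, dimensions, and the linear-representation setup below are illustrative assumptions, not the paper's actual construction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: tasks share a low-dimensional linear representation B
# (d ambient dims -> k features); each task has its own head w.
d, k, n_target = 20, 3, 15
B_true = np.linalg.qr(rng.normal(size=(d, k)))[0]    # ground-truth shared representation
B_learned = B_true + 0.1 * rng.normal(size=(d, k))   # meta-learned, only approximately correct

w_task = rng.normal(size=k)                          # target task's head
X = rng.normal(size=(n_target, d))                   # few-shot target data
y = X @ B_true @ w_task + 0.01 * rng.normal(size=n_target)

def fine_tune(B, X, y, adapt_repr, lr=0.05, steps=1000):
    """Gradient descent on squared loss; optionally also adapt the representation B."""
    B = B.copy()
    w = np.zeros(k)
    n = len(y)
    for _ in range(steps):
        resid = X @ B @ w - y
        grad_w = (X @ B).T @ resid / n               # gradient w.r.t. the head
        if adapt_repr:
            grad_B = np.outer(X.T @ resid, w) / n    # gradient w.r.t. the representation
            B -= lr * grad_B
        w -= lr * grad_w
    return np.mean((X @ B @ w - y) ** 2)

loss_frozen = fine_tune(B_learned, X, y, adapt_repr=False)
loss_tuned = fine_tune(B_learned, X, y, adapt_repr=True)
print(f"frozen representation loss: {loss_frozen:.4f}")
print(f"fine-tuned representation loss: {loss_tuned:.4f}")
```

When the meta-learned representation is only approximately correct, as the framework assumes, the frozen-representation learner is bottlenecked by the representation error, while fine-tuning the representation on the target task's few samples can correct for it.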


Related research

10/11/2021
A Closer Look at Prototype Classifier for Few-shot Image Classification
The prototypical network is a prototype classifier based on meta-learnin...

05/25/2022
Know Where You're Going: Meta-Learning for Parameter-Efficient Fine-tuning
A recent family of techniques, dubbed as lightweight fine-tuning methods...

03/20/2020
Weighted Meta-Learning
Meta-learning leverages related source tasks to learn an initialization ...

02/10/2021
Transfer Reinforcement Learning across Homotopy Classes
The ability for robots to transfer their learned knowledge to new tasks ...

11/03/2020
Meta-learning Transferable Representations with a Single Target Domain
Recent works found that fine-tuning and joint training—two popular appro...

05/21/2018
Meta-learning with differentiable closed-form solvers
Adapting deep networks to new concepts from few examples is extremely ch...

08/13/2023
Dual Meta-Learning with Longitudinally Generalized Regularization for One-Shot Brain Tissue Segmentation Across the Human Lifespan
Brain tissue segmentation is essential for neuroscience and clinical stu...
