Towards Understanding Generalization in Gradient-Based Meta-Learning

07/16/2019
by Simon Guiroy et al.

In this work we study generalization of neural networks in gradient-based meta-learning by analyzing various properties of the objective landscapes. We experimentally demonstrate that as meta-training progresses, the meta-test solutions, obtained by adapting the meta-train solution to new tasks via a few steps of gradient-based fine-tuning, become flatter, lower in loss, and farther from the meta-train solution. We also show that these meta-test solutions keep becoming flatter even as generalization starts to degrade, providing experimental evidence against a correlation between generalization and flat minima in the paradigm of gradient-based meta-learning. Furthermore, we provide empirical evidence that generalization to new tasks is correlated with the coherence of their adaptation trajectories in parameter space, measured by the average cosine similarity between the task-specific trajectory directions, all starting from the same meta-train solution. We also show that the coherence of meta-test gradients, measured by the average inner product between the task-specific gradient vectors evaluated at the meta-train solution, correlates with generalization. Based on these observations, we propose a novel regularizer for MAML and provide experimental evidence for its effectiveness.
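The abstract defines its two coherence metrics concretely enough to sketch in code. Below is a minimal NumPy sketch (not the authors' released implementation) of both: the average pairwise cosine similarity between task-specific adaptation-trajectory directions, and the average pairwise inner product between task-specific gradients at the meta-train solution. The function names and the flattened-vector representation are illustrative assumptions.

```python
import numpy as np

def trajectory_coherence(trajectories):
    """Average pairwise cosine similarity between task-specific
    adaptation-trajectory directions. Each entry of `trajectories` is a
    flattened displacement vector (theta_task - theta_meta) obtained after
    a few fine-tuning steps from the same meta-train solution."""
    dirs = [t / np.linalg.norm(t) for t in trajectories]
    sims = [np.dot(dirs[i], dirs[j])
            for i in range(len(dirs)) for j in range(i + 1, len(dirs))]
    return float(np.mean(sims))

def gradient_coherence(gradients):
    """Average pairwise inner product between task-specific gradient
    vectors, each evaluated at the meta-train solution and flattened."""
    prods = [np.dot(gradients[i], gradients[j])
             for i in range(len(gradients)) for j in range(i + 1, len(gradients))]
    return float(np.mean(prods))

# Illustrative usage with random stand-ins for real trajectories/gradients:
tasks = [np.random.randn(1000) for _ in range(8)]
print(trajectory_coherence(tasks), gradient_coherence(tasks))
```

Per the abstract, higher values of either metric across meta-test tasks are correlated with better generalization.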
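The abstract does not spell out the form of the proposed MAML regularizer, only that it follows from the gradient-coherence observation. Purely as an illustration of that idea (not the paper's exact formulation), one could penalize low average inner product between per-task gradients during meta-training; `coherence_penalty` below is a hypothetical name and signature.

```python
import torch

def coherence_penalty(task_grads):
    """Hypothetical coherence-style penalty: the negative mean pairwise
    inner product of per-task gradient vectors. Adding this term (scaled
    by a coefficient) to the meta-training objective rewards task
    gradients that point in similar directions. `task_grads` is a list
    where each element is the list of parameter gradients for one task.
    Illustration only; the paper's regularizer may differ."""
    flat = [torch.cat([g.reshape(-1) for g in grads]) for grads in task_grads]
    total, count = 0.0, 0
    for i in range(len(flat)):
        for j in range(i + 1, len(flat)):
            total = total + torch.dot(flat[i], flat[j])
            count += 1
    return -total / max(count, 1)
```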

Related research

10/18/2018 · Gradient Agreement as an Optimization Objective for Meta-Learning
This paper presents a novel optimization method for maximizing generaliz...

04/13/2020 · Regularizing Meta-Learning via Gradient Dropout
With the growing attention on learning-to-learn new tasks using only a f...

10/19/2020 · Meta-learning the Learning Trends Shared Across Tasks
Meta-learning stands for 'learning to learn' such that generalization to...

03/12/2023 · RotoGBML: Towards Out-of-Distribution Generalization for Gradient-Based Meta-Learning
Gradient-based meta-learning (GBML) algorithms are able to fast adapt to...

03/02/2022 · Continuous-Time Meta-Learning with Forward Mode Differentiation
Drawing inspiration from gradient-based meta-learning methods with infin...

06/03/2022 · Dynamic Kernel Selection for Improved Generalization and Memory Efficiency in Meta-learning
Gradient based meta-learning methods are prone to overfit on the meta-tr...

10/27/2020 · Why Does MAML Outperform ERM? An Optimization Perspective
Model-Agnostic Meta-Learning (MAML) has demonstrated widespread success ...