On the Convergence Theory of Gradient-Based Model-Agnostic Meta-Learning Algorithms

08/27/2019
by Alireza Fallah, et al.

In this paper, we study the convergence theory of a class of gradient-based Model-Agnostic Meta-Learning (MAML) methods and characterize their overall computational complexity, as well as the best accuracy, measured by the gradient norm, that they can achieve for nonconvex loss functions. We begin with the MAML algorithm and its first-order approximation (FO-MAML) and highlight the challenges that arise in their analysis. By overcoming these challenges, we not only provide the first theoretical guarantees for MAML and FO-MAML in nonconvex settings, but also answer open questions about the implementation of these algorithms, including how to choose the learning rate (stepsize) and the batch sizes for both tasks and the datasets corresponding to each task. Specifically, we show that for any ϵ > 0, MAML finds an ϵ-first-order stationary point after at most O(1/ϵ^2) iterations, at a cost of O(d^2) per iteration, where d is the problem dimension. We further show that FO-MAML reduces the per-iteration cost of MAML to O(d), but, unlike MAML, its solution cannot reach an arbitrarily small level of accuracy. Finally, we propose a new variant of the MAML algorithm, called Hessian-free MAML (HF-MAML), which preserves all the theoretical guarantees of MAML while reducing its computational cost per iteration from O(d^2) to O(d).
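To make the complexity contrast concrete, the following is a minimal sketch of a single meta-gradient computation for the three variants on a hypothetical quadratic task loss (this toy loss, and all variable names, are illustrative assumptions, not from the paper). MAML's meta-gradient requires a Hessian, FO-MAML drops the Hessian term entirely, and HF-MAML replaces the Hessian-vector product with a first-order difference of two gradient evaluations, which costs only O(d):

```python
import numpy as np

# Hypothetical quadratic task loss f(w) = 0.5 w^T A w + b^T w, so that
# grad f(w) = A w + b and the Hessian is A (a toy stand-in for a task loss).
rng = np.random.default_rng(0)
d = 5
A = rng.standard_normal((d, d))
A = A @ A.T + np.eye(d)  # symmetric positive definite Hessian
b = rng.standard_normal(d)

def grad(w):
    return A @ w + b

alpha, delta = 0.05, 1e-5  # inner stepsize and finite-difference parameter
w = rng.standard_normal(d)

# Inner (adaptation) step, shared by all three variants.
w_adapted = w - alpha * grad(w)
g_adapted = grad(w_adapted)

# MAML meta-gradient: (I - alpha * Hessian) @ grad at the adapted point.
# Materializing the d x d Hessian makes each iteration cost O(d^2).
maml_grad = (np.eye(d) - alpha * A) @ g_adapted

# FO-MAML: drop the Hessian term entirely -- O(d) per iteration, but biased,
# so the solution cannot reach an arbitrarily small accuracy level.
fo_maml_grad = g_adapted

# HF-MAML: approximate the Hessian-vector product A @ g_adapted with a
# symmetric difference of gradients -- two extra gradient calls, still O(d).
hvp = (grad(w + delta * g_adapted) - grad(w - delta * g_adapted)) / (2 * delta)
hf_maml_grad = g_adapted - alpha * hvp

# For a quadratic loss the difference approximation is exact (up to rounding).
print(np.allclose(maml_grad, hf_maml_grad, atol=1e-6))
```

For a quadratic loss the gradient-difference approximation of the Hessian-vector product is exact, so HF-MAML recovers the MAML meta-gradient here; for general nonconvex losses it incurs an O(delta) error, which is the trade-off the paper's analysis controls.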




Related research

02/18/2020  Multi-Step Model-Agnostic Meta-Learning: Convergence and Improved Algorithms
02/12/2020  Distribution-Agnostic Model-Agnostic Meta-Learning
03/07/2019  Convergence of Multi-Agent Learning with a Finite Step Size in General-Sum Games
06/25/2020  Global Convergence and Induced Kernels of Gradient-Based Meta-Learning with Neural Nets
06/23/2020  On the Global Optimality of Model-Agnostic Meta-Learning
06/23/2019  Efficient Implementation of Second-Order Stochastic Approximation Algorithms in High-Dimensional Problems
12/10/2020  Stochastic Damped L-BFGS with Controlled Norm of the Hessian Approximation