Memory-based Optimization Methods for Model-Agnostic Meta-Learning

06/09/2021
by Bokun Wang, et al.

Recently, model-agnostic meta-learning (MAML) has garnered tremendous attention. However, the stochastic optimization of MAML is still immature. Existing algorithms for MAML are based on the "episode" idea: at each iteration, they sample a number of tasks and a number of data points for each sampled task to update the meta-model. However, these algorithms either do not guarantee convergence with a constant mini-batch size or require processing a large number of tasks at every iteration, which is not viable for continual learning or cross-device federated learning, where only a small number of tasks is available per iteration or per round. This paper addresses these issues by (i) proposing efficient memory-based stochastic algorithms for MAML with a diminishing convergence error, which require sampling only a constant number of tasks and a constant number of examples per task per iteration; and (ii) proposing communication-efficient distributed memory-based MAML algorithms for personalized federated learning in both the cross-device (with client sampling) and cross-silo (without client sampling) settings. The key novelty of the proposed algorithms is to maintain an individual personalized model (a.k.a. memory) for each task, in addition to the meta-model, and to update the memories only for the sampled tasks via a momentum method that incorporates historical updates at each iteration. The theoretical results significantly improve the optimization theory for MAML, and the empirical results corroborate the theory.
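To make the key idea concrete, here is a minimal NumPy sketch of a memory-based update loop in the spirit described above. This is an illustration, not the paper's algorithm: the QuadraticTask class, its stoch_grad method, and the proximal coupling term lam * (u[i] - w) that ties each personalized model to the meta-model are all assumptions made for the sake of a runnable example.

```python
import numpy as np

class QuadraticTask:
    """Toy task: mini-batch least-squares gradients on a private dataset
    (a hypothetical stand-in for a real task's stochastic gradient oracle)."""
    def __init__(self, A, b):
        self.A, self.b = A, b

    def stoch_grad(self, u, batch_size, rng):
        idx = rng.choice(len(self.b), size=batch_size)
        A, b = self.A[idx], self.b[idx]
        return A.T @ (A @ u - b) / batch_size

def memory_based_meta_train(tasks, dim, n_iters=500, tasks_per_iter=2,
                            batch_size=8, beta=0.9, eta_u=0.05, eta_w=0.01,
                            lam=1.0, seed=0):
    rng = np.random.default_rng(seed)
    w = np.zeros(dim)                # meta-model
    u = np.zeros((len(tasks), dim))  # one "memory" (personalized model) per task

    for _ in range(n_iters):
        # Sample a constant number of tasks and a constant mini-batch per task.
        sampled = rng.choice(len(tasks), size=tasks_per_iter, replace=False)
        meta_grad = np.zeros(dim)
        for i in sampled:
            g = tasks[i].stoch_grad(u[i], batch_size, rng)
            # Fresh personalized step: task gradient plus a pull toward w
            # (an assumed proximal-style coupling, not necessarily the paper's).
            fresh = u[i] - eta_u * (g + lam * (u[i] - w))
            # Momentum-style memory update: blend the historical memory with
            # the fresh personalized model; unsampled memories persist as-is.
            u[i] = beta * u[i] + (1 - beta) * fresh
            # Meta-gradient contribution through the coupling term.
            meta_grad += lam * (w - u[i])
        w -= eta_w * meta_grad / tasks_per_iter
    return w, u

# Usage on synthetic tasks:
rng = np.random.default_rng(1)
tasks = [QuadraticTask(rng.normal(size=(100, 5)), rng.normal(size=100))
         for _ in range(20)]
w, u = memory_based_meta_train(tasks, dim=5)
```

Only the memories of the sampled tasks change per iteration, so the per-iteration cost is constant in the total number of tasks; this is the property that makes the approach plausible for cross-device federated settings, where only a few clients participate per round.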
