Dif-MAML: Decentralized Multi-Agent Meta-Learning

10/06/2020
by Mert Kayaalp, et al.

The objective of meta-learning is to exploit the knowledge obtained from observed tasks to improve adaptation to unseen tasks. Meta-learners therefore generalize better when trained on a larger number of observed tasks and a larger amount of data per task. Given the resources this requires, it is generally difficult to expect the tasks, their respective data, and the necessary computational capacity to be available at a single central location. It is more natural to encounter situations where these resources are spread across several agents connected by some graph topology. The formalism of meta-learning is well-suited to this decentralized setting, where the learner can benefit from information and computational power spread across the agents. Motivated by this observation, in this work we propose a cooperative, fully decentralized multi-agent meta-learning algorithm, referred to as Diffusion-based MAML or Dif-MAML. Decentralized optimization algorithms offer advantages over centralized implementations in scalability, avoidance of communication bottlenecks, and privacy guarantees. The work provides a detailed theoretical analysis showing that the proposed strategy allows a collection of agents to attain agreement at a linear rate and to converge to a stationary point of the aggregate MAML objective even in non-convex environments. Simulation results illustrate the theoretical findings and the superior performance relative to the traditional non-cooperative setting.
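To make the diffusion mechanism concrete, below is a minimal sketch of a Dif-MAML-style update on synthetic quadratic tasks. It is illustrative only: the task model, the ring topology, the combination weights, the step sizes, and the use of a first-order MAML approximation (dropping the Hessian term) are all assumptions chosen for brevity, not the paper's exact construction. Each agent first takes a local meta-gradient step (adapt) and then averages its intermediate iterate with its neighbors (combine).

```python
import numpy as np

# Sketch of a diffusion-based (adapt-then-combine) MAML update.
# Assumed setup: each agent k holds tasks with optima theta, and the
# per-task loss is 0.5 * ||w - theta||^2. A first-order MAML
# approximation is used, so the Hessian term is omitted.

rng = np.random.default_rng(0)
K, D = 5, 3            # number of agents, parameter dimension
ALPHA, MU = 0.1, 0.05  # inner (adaptation) and outer (meta) step sizes

# Doubly stochastic combination matrix for a ring topology
# (a hypothetical choice; any connected topology would do).
A = np.zeros((K, K))
for k in range(K):
    A[k, k] = 0.5
    A[k, (k - 1) % K] = 0.25
    A[k, (k + 1) % K] = 0.25

# Each agent observes its own batch of task optima.
task_optima = rng.normal(size=(K, 10, D))

def meta_grad(w, thetas):
    """First-order MAML meta-gradient averaged over an agent's tasks."""
    grads = []
    for theta in thetas:
        adapted = w - ALPHA * (w - theta)  # one inner gradient step
        grads.append(adapted - theta)      # outer gradient at adapted point
    return np.mean(grads, axis=0)

w = rng.normal(size=(K, D))  # one iterate per agent
for _ in range(200):
    # Adapt step: each agent descends along its local meta-gradient.
    psi = np.array([w[k] - MU * meta_grad(w[k], task_optima[k])
                    for k in range(K)])
    # Combine (diffusion) step: average with neighbors.
    w = A @ psi

print("disagreement:", np.linalg.norm(w - w.mean(axis=0)))
```

With a doubly stochastic combination matrix such as the one above, the agents' iterates contract toward their network average while descending the aggregate objective, which is the agreement behavior the paper's analysis quantifies.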

Related research

Accelerating Distributed Online Meta-Learning via Multi-Agent Collaboration under Limited Communication (12/15/2020)
Online meta-learning is emerging as an enabling technique for achieving ...

Intrinsically-Motivated Goal-Conditioned Reinforcement Learning in Multi-Agent Environments (11/11/2022)
How can a population of reinforcement learning agents autonomously learn...

Been There, Done That: Meta-Learning with Episodic Recall (05/24/2018)
Meta-learning agents excel at rapidly learning new tasks from open-ended...

Linear Speedup in Saddle-Point Escape for Decentralized Non-Convex Optimization (10/30/2019)
Under appropriate cooperation protocols and parameter choices, fully dec...

Meta-trained agents implement Bayes-optimal agents (10/21/2020)
Memory-based meta-learning is a powerful technique to build agents that ...

BADGER: Learning to (Learn [Learning Algorithms] through Multi-Agent Communication) (12/03/2019)
In this work, we propose a novel memory-based multi-agent meta-learning ...

Meta Learning in Decentralized Neural Networks: Towards More General AI (02/02/2023)
Meta-learning usually refers to a learning algorithm that learns from ot...
