Accelerating Distributed Online Meta-Learning via Multi-Agent Collaboration under Limited Communication

12/15/2020
by Sen Lin, et al.

Online meta-learning is emerging as an enabling technique for achieving edge intelligence in the IoT ecosystem. Nevertheless, to learn a good meta-model for fast within-task adaptation, a single agent has to learn over many tasks, which is known as the 'cold-start' problem. Observing that the learning tasks across different agents in a multi-agent network often share some model similarity, we ask the following fundamental question: "Is it possible to accelerate online meta-learning across agents via limited communication, and if so, how much benefit can be achieved?" To answer this question, we propose a multi-agent online meta-learning framework and cast it as an equivalent two-level nested online convex optimization (OCO) problem. By characterizing an upper bound on the agent-task-averaged regret, we show that the performance of multi-agent online meta-learning depends heavily on how much an agent can benefit from the distributed network-level OCO for meta-model updates via limited communication, which is not yet well understood. To tackle this challenge, we devise a distributed online gradient descent algorithm with gradient tracking, in which each agent tracks the global gradient using only one communication step with its neighbors per iteration. This algorithm achieves an average regret of O(√(T/N)) per agent, i.e., a speedup by a factor of √(1/N) over the optimal single-agent regret O(√(T)) after T iterations, where N is the number of agents. Building on this sharp performance speedup, we develop a multi-agent online meta-learning algorithm and show that it achieves the optimal task-average regret at the faster rate of O(1/√(NT)) via limited communication, compared to single-agent online meta-learning. Extensive experiments corroborate the theoretical results.
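To make the network-level building block concrete, the following is a minimal sketch of distributed online gradient descent with gradient tracking, in which each agent mixes its neighbors' models and trackers in a single communication round and then corrects its tracker with the local gradient innovation. This is a generic gradient-tracking recursion under stated assumptions, not the paper's actual implementation: the function name `distributed_ogd_gradient_tracking`, the mixing matrix `W` (assumed doubly stochastic and matched to the communication graph), and the `grads` callables are all illustrative.

```python
import numpy as np

def distributed_ogd_gradient_tracking(grads, W, x0, eta, T):
    """Sketch of distributed online gradient descent with gradient tracking.

    grads : list of callables; grads[i](t, x) returns agent i's online
            gradient at round t (illustrative interface).
    W     : (N, N) doubly stochastic mixing matrix over the agent graph.
    x0    : (N, d) initial models, one row per agent.
    """
    N, d = x0.shape
    x = x0.copy()
    # Initialize each tracker with the agent's first local gradient.
    g_prev = np.stack([grads[i](0, x[i]) for i in range(N)])
    y = g_prev.copy()
    iterates = [x.copy()]
    for t in range(1, T):
        # One communication step per iteration: each agent averages its
        # neighbors' models (rows of W encode the neighborhoods) and
        # descends along its tracker.
        x = W @ x - eta * y
        g = np.stack([grads[i](t, x[i]) for i in range(N)])
        # Gradient-tracking recursion: mix the trackers, then add the local
        # gradient innovation so each y_i follows the global average gradient.
        y = W @ y + g - g_prev
        g_prev = g
        iterates.append(x.copy())
    return iterates

# Toy usage: 8 agents on a ring, time-varying quadratic losses
# f_i^t(x) = 0.5 * ||x - target_i^t||^2 (all values hypothetical).
rng = np.random.default_rng(0)
N, d, T = 8, 5, 200
W = np.eye(N) / 2 + (np.roll(np.eye(N), 1, axis=1)
                     + np.roll(np.eye(N), -1, axis=1)) / 4
targets = rng.normal(size=(T, N, d))
grads = [lambda t, x, i=i: x - targets[t, i] for i in range(N)]
xs = distributed_ogd_gradient_tracking(grads, W, np.zeros((N, d)), eta=0.1, T=T)
```

As a back-of-the-envelope check of the rates quoted above: if the per-agent cumulative regret over T rounds scales as O(√(T/N)), then dividing by the T rounds gives a per-round average of O(√(T/N))/T = O(1/√(NT)), which matches the stated task-average rate and its 1/√N improvement over the single-agent O(1/√T) baseline.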

