FedFormer: Contextual Federation with Attention in Reinforcement Learning

05/27/2022
by   Liam Hebert, et al.

A core issue in federated reinforcement learning is how to aggregate the insights of multiple agents into a single model. This is commonly done by averaging the participating agents' model weights into one shared model (FedAvg). We instead propose FedFormer, a novel federation strategy that uses Transformer attention to contextually aggregate embeddings from models originating at different learner agents. In doing so, we attentively weigh the contributions of other agents with respect to the current agent's environment and learned relationships, yielding more effective and efficient federation. We evaluate our methods on the Meta-World environment and find that our approach yields significant improvements over both FedAvg and non-federated single-agent Soft Actor-Critic. Compared to Soft Actor-Critic, FedFormer performs better while still abiding by the privacy constraints of federated learning. In addition, on certain tasks we observe nearly linear gains in effectiveness as the agent pool grows; this contrasts with FedAvg, which fails to improve noticeably when scaled.
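To make the contrast concrete, here is a minimal sketch of the two aggregation styles the abstract describes: FedAvg's element-wise averaging of model weights versus an attention-weighted mixture of per-agent embeddings. The function names and the scaled dot-product scoring are illustrative assumptions, not the paper's exact FedFormer architecture.

```python
import numpy as np

def fedavg(agent_weights):
    """FedAvg baseline: element-wise mean of each agent's model weights.

    `agent_weights` is a list of dicts mapping parameter names to arrays.
    """
    return {name: np.mean([w[name] for w in agent_weights], axis=0)
            for name in agent_weights[0]}

def attention_aggregate(query, peer_embeddings):
    """Attention-style federation (hypothetical sketch): weigh each peer
    agent's embedding by its scaled dot-product similarity to the current
    agent's embedding, so aggregation is contextual rather than uniform.
    """
    d = query.shape[-1]
    scores = peer_embeddings @ query / np.sqrt(d)   # one score per peer
    scores = np.exp(scores - scores.max())          # numerically stable softmax
    attn = scores / scores.sum()
    return attn @ peer_embeddings                   # contextual mixture
```

Unlike FedAvg, where every agent contributes equally regardless of relevance, the attention weights depend on the current agent's own embedding, so peers whose learned representations are more similar to the current context contribute more.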


