Periodic Intra-Ensemble Knowledge Distillation for Reinforcement Learning

02/01/2020
by   Zhang-Wei Hong, et al.

Off-policy ensemble reinforcement learning (RL) methods have demonstrated impressive results across a range of RL benchmark tasks. Recent work suggests that directly imitating an expert's policy in a supervised manner, before or during RL training, enables faster policy improvement. Motivated by these insights, we propose Periodic Intra-Ensemble Knowledge Distillation (PIEKD), a learning framework that uses an ensemble of policies to act in the environment while periodically sharing knowledge across the ensemble through knowledge distillation. Our experiments demonstrate that PIEKD improves the sample efficiency of a state-of-the-art RL method on several challenging MuJoCo benchmark tasks. We also perform ablation studies to better understand PIEKD.
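The abstract describes a simple control loop: ensemble members take turns collecting experience and training off-policy, and every fixed number of steps the best-performing member's policy is distilled into the others via supervised imitation. Below is a minimal PyTorch sketch of that loop. The names (PolicyNet, distill, DISTILL_PERIOD), the MSE imitation loss on mean actions, the round-robin acting schedule, and the running-return member statistics are all illustrative assumptions, not the paper's exact design; the paper builds on an off-policy actor-critic method, and its distillation objective and member-selection criterion may differ. The RL update, replay states, and return statistics are stubbed with placeholders.

```python
import random
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    """Small deterministic policy head; stands in for each ensemble member's actor."""
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, obs):
        return self.body(obs)  # mean action for a batch of observations

def distill(teacher, student, states, lr=1e-3, epochs=5):
    """Supervised imitation: regress the student's actions onto the teacher's
    actions over states drawn from a shared replay buffer (assumed objective)."""
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    with torch.no_grad():
        targets = teacher(states)  # teacher is frozen during distillation
    for _ in range(epochs):
        loss = nn.functional.mse_loss(student(states), targets)
        opt.zero_grad()
        loss.backward()
        opt.step()

# --- periodic intra-ensemble distillation loop (schematic) ---
obs_dim, act_dim, ensemble_size = 8, 2, 3
ensemble = [PolicyNet(obs_dim, act_dim) for _ in range(ensemble_size)]
recent_return = [0.0] * ensemble_size      # running return estimate per member (dummy)
replay_states = torch.randn(256, obs_dim)  # placeholder for replay-buffer states
DISTILL_PERIOD = 1000                      # environment steps between distillations

for step in range(1, 5001):
    i = step % ensemble_size  # members take turns acting in the environment
    # ... act with ensemble[i], store transitions, run the off-policy RL update ...
    recent_return[i] = 0.99 * recent_return[i] + 0.01 * random.random()  # dummy stat

    if step % DISTILL_PERIOD == 0:
        # Periodically pick the best member and distill it into the rest.
        best = max(range(ensemble_size), key=lambda k: recent_return[k])
        for k in range(ensemble_size):
            if k != best:
                distill(ensemble[best], ensemble[k], replay_states)
```

One plausible reading of the design: distilling only from the best member onto shared replay states propagates the strongest behavior through the ensemble, while letting members continue their own off-policy updates between distillations preserves some behavioral diversity for exploration.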

