Distilling Policy Distillation

The transfer of knowledge from one policy to another is an important tool in Deep Reinforcement Learning. This process, referred to as distillation, has been used to great effect, for example to speed up the optimisation of agents and to reach stronger performance on harder domains [26, 32, 5, 8]. Despite the widespread use and conceptual simplicity of distillation, many different formulations are used in practice, and the subtle variations between them can often drastically change both the performance and the objective that is actually being optimised. In this work, we rigorously explore the entire landscape of policy distillation, comparing the motivations and strengths of each variant through theoretical and empirical analysis. Our results point to three distillation techniques, each preferred depending on the specifics of the task. In particular, a newly proposed expected entropy regularised distillation allows for quicker learning in a wide range of situations while still guaranteeing convergence.
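To make the objective concrete, the sketch below is a minimal, illustrative implementation (in PyTorch; the names distillation_loss, teacher_logits, student_logits and entropy_coef are ours, not the paper's) of a generic entropy-regularised distillation loss: the student minimises the KL divergence from the teacher's action distribution while receiving a small bonus for keeping its own policy entropy high. It is a sketch of the general idea only, under these assumptions, and not the paper's exact expected entropy regularised objective.

import torch
import torch.nn.functional as F

def distillation_loss(teacher_logits: torch.Tensor,
                      student_logits: torch.Tensor,
                      entropy_coef: float = 0.01) -> torch.Tensor:
    """Generic entropy-regularised distillation loss (illustrative only).

    Minimises KL(teacher || student) over a batch of states, with an
    entropy bonus that keeps the student policy stochastic.
    """
    teacher_probs = F.softmax(teacher_logits, dim=-1)
    teacher_logp = F.log_softmax(teacher_logits, dim=-1)
    student_logp = F.log_softmax(student_logits, dim=-1)

    # KL(pi_T || pi_S) = sum_a pi_T(a|s) * (log pi_T(a|s) - log pi_S(a|s))
    kl = (teacher_probs * (teacher_logp - student_logp)).sum(dim=-1)

    # Student entropy H(pi_S) = -sum_a pi_S(a|s) * log pi_S(a|s)
    entropy = -(student_logp.exp() * student_logp).sum(dim=-1)

    # Lower loss: match the teacher without collapsing to a deterministic policy.
    return (kl - entropy_coef * entropy).mean()

# Example usage on a dummy batch of 32 states with 4 discrete actions.
teacher_logits = torch.randn(32, 4)
student_logits = torch.randn(32, 4, requires_grad=True)
loss = distillation_loss(teacher_logits, student_logits)
loss.backward()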

Related research

- Real-time Policy Distillation in Deep Reinforcement Learning (12/29/2019)
- On Neural Consolidation for Transfer in Reinforcement Learning (10/05/2022)
- Defending Adversarial Attacks without Adversarial Attacks in Deep Reinforcement Learning (08/14/2020)
- Policy Distillation and Value Matching in Multiagent Reinforcement Learning (03/15/2019)
- Distral: Robust Multitask Reinforcement Learning (07/13/2017)
- Neural-to-Tree Policy Distillation with Policy Improvement Criterion (08/16/2021)
- Knowledge Distillation Performs Partial Variance Reduction (05/27/2023)
