Efficient Transformers in Reinforcement Learning using Actor-Learner Distillation

04/04/2021
by Emilio Parisotto, et al.

Many real-world applications, such as robotics, impose hard constraints on power and compute that limit the viable model complexity of Reinforcement Learning (RL) agents. Similarly, in many distributed RL settings, acting is done on un-accelerated hardware such as CPUs, which likewise restricts model size to keep experiment run times tractable. These "actor-latency"-constrained settings present a major obstacle to the scaling up of model complexity that has recently been so successful in supervised learning. To utilize large model capacity while still operating within the limits imposed by the system during acting, we develop an "Actor-Learner Distillation" (ALD) procedure that leverages a continual form of distillation to transfer learning progress from a large-capacity learner model to a small-capacity actor model. As a case study, we develop this procedure in the context of partially observable environments, where transformer models have recently yielded large improvements over LSTMs at the cost of significantly higher computational complexity. With a transformer as the learner model and an LSTM as the actor model, we demonstrate on several challenging memory environments that Actor-Learner Distillation recovers the clear sample-efficiency gains of the transformer learner while maintaining the fast inference and reduced total training time of the LSTM actor.
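The core mechanism described above is a continual distillation loop: a large transformer learner is trained with RL, while a small LSTM actor, cheap enough to run on un-accelerated hardware, is repeatedly pulled toward the learner's policy. The following is a minimal PyTorch sketch of that idea, not the paper's exact algorithm: the network sizes, the choice of a KL-divergence loss on per-timestep action logits, and the single-step training loop are illustrative assumptions.

# Minimal sketch of actor-learner distillation (illustrative, not the
# authors' exact method): a small LSTM actor is distilled toward a large
# transformer learner on a batch of observation sequences.
import torch
import torch.nn as nn
import torch.nn.functional as F

OBS_DIM, N_ACTIONS, SEQ_LEN, BATCH = 16, 4, 32, 8  # toy sizes (assumed)

class ActorLSTM(nn.Module):
    """Small, fast policy used for acting (cheap inference on CPUs)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(OBS_DIM, hidden, batch_first=True)
        self.pi = nn.Linear(hidden, N_ACTIONS)

    def forward(self, obs_seq):
        h, _ = self.lstm(obs_seq)          # (batch, time, hidden)
        return self.pi(h)                  # action logits per step

class LearnerTransformer(nn.Module):
    """Large-capacity policy trained with RL on accelerated hardware."""
    def __init__(self, d_model=128, n_layers=4, n_heads=4):
        super().__init__()
        self.embed = nn.Linear(OBS_DIM, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.pi = nn.Linear(d_model, N_ACTIONS)

    def forward(self, obs_seq):
        return self.pi(self.encoder(self.embed(obs_seq)))

actor, learner = ActorLSTM(), LearnerTransformer()
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)

def distill_step(obs_seq):
    """One continual-distillation update: pull the actor's policy toward
    the learner's policy on a batch of recent observation sequences."""
    with torch.no_grad():
        teacher_logits = learner(obs_seq)  # learner is frozen for this step
    student_logits = actor(obs_seq)
    # KL(teacher || student), averaged over the batch.
    loss = F.kl_div(
        F.log_softmax(student_logits, dim=-1),
        F.softmax(teacher_logits, dim=-1),
        reduction="batchmean",
    )
    actor_opt.zero_grad()
    loss.backward()
    actor_opt.step()
    return loss.item()

# In a full system the learner would also take RL gradient steps and the
# actor would generate new trajectories; here we only show the distillation
# update on a dummy batch of observations.
dummy_obs = torch.randn(BATCH, SEQ_LEN, OBS_DIM)
print("distillation loss:", distill_step(dummy_obs))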

Related research

10/26/2021  A DPDK-Based Acceleration Method for Experience Sampling of Distributed Reinforcement Learning
A computing cluster that interconnects multiple compute nodes is used to...

06/09/2019  Gossip-based Actor-Learner Architectures for Deep Reinforcement Learning
Multi-simulator training has contributed to the recent success of Deep R...

05/29/2021  On the Theory of Reinforcement Learning with Once-per-Episode Feedback
We study a theory of reinforcement learning (RL) in which the learner re...

10/13/2019  Stabilizing Transformers for Reinforcement Learning
Owing to their ability to both effectively integrate information over lo...

01/23/2019  Distillation Strategies for Proximal Policy Optimization
Vision-based deep reinforcement learning (RL), similar to deep learning,...

05/27/2022  FedFormer: Contextual Federation with Attention in Reinforcement Learning
A core issue in federated reinforcement learning is defining how to aggr...

11/28/2022  AcceRL: Policy Acceleration Framework for Deep Reinforcement Learning
Deep reinforcement learning has achieved great success in various fields...