Unsupervised Reinforcement Learning in Multiple Environments

12/16/2021
by   Mirco Mutti, et al.

Several recent works have addressed unsupervised reinforcement learning in a single environment, in which a policy is first pre-trained with unsupervised interactions and then fine-tuned towards the optimal policy for several downstream supervised tasks defined over the same environment. Along this line, we address the problem of unsupervised reinforcement learning in a class of multiple environments, in which the policy is pre-trained with interactions from the whole class and then fine-tuned for several tasks in any environment of the class. Notably, the problem is inherently multi-objective, as the pre-training objective can be traded off between environments in many ways. In this work, we foster an exploration strategy that is sensitive to the most adverse cases within the class. Hence, we cast the exploration problem as the maximization of the mean of a critical percentile of the state visitation entropy induced by the exploration strategy over the class of environments. Then, we present a policy gradient algorithm, αMEPOL, to optimize the introduced objective through mediated interactions with the class. Finally, we empirically demonstrate the algorithm's ability to learn to explore challenging classes of continuous environments, and we show that reinforcement learning greatly benefits from the pre-trained exploration strategy compared to learning from scratch.
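The percentile-sensitive objective described above can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' implementation: `knn_entropy` is a simplified nonparametric (k-nearest-neighbor) estimator of the state visitation entropy, and `cvar_alpha_objective` is a hypothetical helper computing the mean of the lowest α-fraction (the critical percentile) of per-environment entropy estimates, i.e. the quantity a CVaR-style exploration objective would ascend.

```python
import numpy as np

def knn_entropy(states, k=4):
    """Simplified k-NN (Kozachenko-Leonenko style) entropy estimate of a
    batch of visited states, given as an (n x d) array. Up to additive
    constants, the estimate grows with the log of the k-NN radii."""
    n, d = states.shape
    # Pairwise Euclidean distances between all visited states.
    dists = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)
    dists.sort(axis=1)
    rk = np.maximum(dists[:, k], 1e-12)  # column 0 is the self-distance
    return d * np.mean(np.log(rk)) + np.log(n)

def cvar_alpha_objective(entropies, alpha=0.2):
    """Mean of the lowest alpha-fraction of per-environment entropies:
    a percentile-sensitive surrogate of worst-case exploration quality."""
    entropies = np.sort(np.asarray(entropies, dtype=float))
    m = max(1, int(np.ceil(alpha * len(entropies))))
    return entropies[:m].mean()
```

In a pre-training loop one would sample a batch of environments from the class, roll out the exploration policy in each, estimate the visitation entropy per environment with `knn_entropy`, and take a gradient step on `cvar_alpha_objective`, so that improvement is driven by the most adverse environments rather than the average one.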


Related research

12/13/2018
Revisiting Exploration-Conscious Reinforcement Learning
The objective of Reinforcement Learning is to learn an optimal policy by...

03/25/2022
Reinforcement Learning with Action-Free Pre-Training from Videos
Recent unsupervised pre-training methods have shown to be effective on l...

06/07/2023
Rewarded soups: towards Pareto-optimal alignment by interpolating weights fine-tuned on diverse rewards
Foundation models are first pre-trained on vast unsupervised datasets an...

05/17/2022
Efficient Unsupervised Sentence Compression by Fine-tuning Transformers with Reinforcement Learning
Sentence compression reduces the length of text by removing non-essentia...

02/07/2022
The Importance of Non-Markovianity in Maximum State Entropy Exploration
In the maximum state entropy exploration framework, an agent interacts w...

06/01/2022
On Reinforcement Learning and Distribution Matching for Fine-Tuning Language Models with no Catastrophic Forgetting
The availability of large pre-trained models is changing the landscape o...

03/08/2021
Behavior From the Void: Unsupervised Active Pre-Training
We introduce a new unsupervised pre-training method for reinforcement le...
