
A note on reinforcement learning with Wasserstein distance regularisation, with applications to multipolicy learning

by Mohammed Amin Abdullah, et al.
HUAWEI Technologies Co., Ltd.
berkeley college

In this note we describe an application of the Wasserstein distance to reinforcement learning. The Wasserstein distance in question is between the distribution of mappings of trajectories of a policy into some metric space and some other fixed distribution (which may, for example, come from another policy). Different policies induce different distributions, so, given an underlying metric, the Wasserstein distance quantifies how different policies are. This can be used to learn multiple policies which are distinct in terms of such Wasserstein distances, by adding a Wasserstein regulariser to the objective. By changing the sign of the regularisation parameter, one can instead learn a policy whose trajectory-mapping distribution is attracted to a given fixed distribution.
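To make the idea concrete, here is a minimal sketch (not the paper's implementation) of a Wasserstein-regularised objective. It assumes each trajectory is mapped to a scalar feature (here, a hypothetical per-rollout return), so the empirical 1-Wasserstein distance between two policies' trajectory distributions can be computed exactly by sorting the samples. The sign of the regularisation weight `lam` then controls whether the learned policy's distribution is repelled from or attracted to the reference distribution, as described in the abstract.

```python
import random

def wasserstein_1d(xs, ys):
    """Empirical 1-Wasserstein distance between two equal-size 1-D samples.

    In one dimension, the optimal transport plan simply matches the
    sorted samples, so W1 is the mean absolute difference after sorting.
    """
    assert len(xs) == len(ys)
    n = len(xs)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / n

# Hypothetical scalar trajectory mappings (e.g. total reward per rollout)
# sampled under the current policy and under a fixed reference policy.
random.seed(0)
current_policy = [random.gauss(1.0, 0.5) for _ in range(1000)]
reference      = [random.gauss(3.0, 0.5) for _ in range(1000)]

w = wasserstein_1d(current_policy, reference)
expected_return = sum(current_policy) / len(current_policy)

# Wasserstein-regularised objective: with lam > 0 the policy is rewarded
# for being far (in W1) from the reference distribution; flipping the
# sign of lam instead attracts it to the reference.
lam = 0.1
objective = expected_return + lam * w
print(f"W1 = {w:.3f}, regularised objective = {objective:.3f}")
```

For trajectory mappings into a general metric space (rather than scalars), the same construction applies, but the Wasserstein distance must be estimated with a general optimal-transport solver instead of the sorting trick above.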




Minimax Distribution Estimation in Wasserstein Distance

The Wasserstein metric is an important measure of distance between proba...

Visual Transfer for Reinforcement Learning via Wasserstein Domain Confusion

We introduce Wasserstein Adversarial Proximal Policy Optimization (WAPPO...

On Wasserstein Reinforcement Learning and the Fokker-Planck equation

Policy gradients methods often achieve better performance when the chang...

Wasserstein Neural Processes

Neural Processes (NPs) are a class of models that learn a mapping from a...

The Spectral-Domain 𝒲_2 Wasserstein Distance for Elliptical Processes and the Spectral-Domain Gelbrich Bound

In this short note, we introduce the spectral-domain 𝒲_2 Wasserstein dis...

Wasserstein t-SNE

Scientific datasets often have hierarchical structure: for example, in s...

Multiagent Reinforcement Learning for Autonomous Routing and Pickup Problem with Adaptation to Variable Demand

We derive a learning framework to generate routing/pickup policies for a...