Visual Transfer for Reinforcement Learning via Wasserstein Domain Confusion

06/04/2020
by Josh Roy, et al.

We introduce Wasserstein Adversarial Proximal Policy Optimization (WAPPO), a novel algorithm for visual transfer in Reinforcement Learning that explicitly learns to align the distributions of extracted features between a source and target task. WAPPO approximates and minimizes the Wasserstein-1 distance between the distributions of features from source and target domains via a novel Wasserstein Confusion objective. WAPPO outperforms the prior state-of-the-art in visual transfer and successfully transfers policies across Visual Cartpole and two instantiations of 16 OpenAI Procgen environments.
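To make the Wasserstein Confusion idea concrete, the sketch below shows one way such an objective could be implemented, following the Kantorovich-Rubinstein dual form of the Wasserstein-1 distance with a WGAN-GP-style critic. This is a minimal illustration, not the paper's reference implementation: the module names, network sizes, gradient-penalty weight, and the exact way the confusion term is combined with the PPO loss are all assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical shared feature extractor used by the policy
# (architecture and names are illustrative, not from the WAPPO code).
class FeatureExtractor(nn.Module):
    def __init__(self, obs_channels=3, feat_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(obs_channels, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.LazyLinear(feat_dim), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

# Critic whose expected score gap between domains approximates the
# Wasserstein-1 distance (Kantorovich-Rubinstein duality).
class Critic(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, f):
        return self.net(f)

def gradient_penalty(critic, f_src, f_tgt):
    """Softly enforce the critic's 1-Lipschitz constraint (WGAN-GP style)."""
    alpha = torch.rand(f_src.size(0), 1, device=f_src.device)
    interp = (alpha * f_src + (1 - alpha) * f_tgt).requires_grad_(True)
    grads = torch.autograd.grad(critic(interp).sum(), interp, create_graph=True)[0]
    return ((grads.norm(2, dim=1) - 1) ** 2).mean()

def wasserstein_confusion_step(extractor, critic, opt_critic,
                               obs_src, obs_tgt, gp_weight=10.0):
    """One adversarial step: update the critic to estimate the W1 distance
    between source and target features, then return the confusion loss the
    feature extractor minimizes (added, scaled, to the usual PPO objective)."""
    f_src, f_tgt = extractor(obs_src), extractor(obs_tgt)

    # Critic update: maximize E[critic(src)] - E[critic(tgt)]
    # (written as a minimization of the negative) under the Lipschitz penalty.
    critic_loss = (critic(f_tgt.detach()).mean()
                   - critic(f_src.detach()).mean()
                   + gp_weight * gradient_penalty(critic,
                                                  f_src.detach(),
                                                  f_tgt.detach()))
    opt_critic.zero_grad()
    critic_loss.backward()
    opt_critic.step()

    # Confusion loss for the extractor: shrink the estimated W1 distance so
    # source and target features become indistinguishable to the critic.
    confusion_loss = critic(f_src).mean() - critic(f_tgt).mean()
    return confusion_loss  # caller adds lambda * confusion_loss to the PPO loss
```

In this sketch the critic plays the role of the dual potential: its score gap estimates the Wasserstein-1 distance, and the extractor is trained against that estimate so the aligned features transfer from the source to the visually shifted target domain.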
