FORK: A Forward-Looking Actor For Model-Free Reinforcement Learning

10/04/2020
by Honghao Wei, et al.

In this paper, we propose a new type of Actor, named forward-looking Actor, or FORK for short, for Actor-Critic algorithms. FORK can be easily integrated into a model-free Actor-Critic algorithm. Our experiments on six Box2D and MuJoCo environments with continuous state and action spaces demonstrate the significant performance improvement FORK brings to state-of-the-art algorithms. A variation of FORK can further solve BipedalWalkerHardcore in as few as four hours using a single GPU.
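
The abstract does not spell out the actor update, so the following is a minimal sketch of one way a "forward-looking" term could be added to a standard Actor-Critic actor loss: alongside Q(s, pi(s)), the actor is also scored on the Q-value at a next state forecast by a learned model. The network sizes, the tanh squashing, and the `fork_weight` coefficient are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' exact code) of a forward-looking actor
# term in a DDPG/TD3-style setup. All names and hyperparameters here are
# hypothetical, chosen only to illustrate the idea in the abstract.
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=256):
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, out_dim),
    )

state_dim, action_dim = 8, 2
actor = mlp(state_dim, action_dim)               # pi(s) -> a
critic = mlp(state_dim + action_dim, 1)          # Q(s, a)
system = mlp(state_dim + action_dim, state_dim)  # learned dynamics: f(s, a) -> s'
# The system network would be fit separately by regression on observed
# transitions (s, a, s'); that training step is omitted here.

def forward_looking_actor_loss(state, fork_weight=0.5):
    a = torch.tanh(actor(state))
    q = critic(torch.cat([state, a], dim=-1))
    # Forward-looking term: forecast the next state with the learned model
    # and also evaluate the policy's Q-value there.
    next_state = system(torch.cat([state, a], dim=-1))
    next_a = torch.tanh(actor(next_state))
    q_next = critic(torch.cat([next_state, next_a], dim=-1))
    return -(q + fork_weight * q_next).mean()

loss = forward_looking_actor_loss(torch.randn(32, state_dim))
loss.backward()  # gradients reach the actor through both Q terms
```

In this sketch the learned model appears only inside the actor's objective and is never used to generate simulated rollouts, which is one way to reconcile a forward-looking term with the abstract's claim that FORK integrates into a model-free Actor-Critic algorithm.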

Related research

03/11/2019
Sample-Efficient Model-Free Reinforcement Learning with Off-Policy Critics
Value-based reinforcement-learning algorithms are currently state-of-the...

04/04/2020
Model-based actor-critic: GAN + DRL (actor-critic) => AGI
Our effort is toward unifying GAN and DRL algorithms into a unifying AI ...

06/11/2023
PACER: A Fully Push-forward-based Distributional Reinforcement Learning Algorithm
In this paper, we propose the first fully push-forward-based Distributio...

03/11/2021
A Quadratic Actor Network for Model-Free Reinforcement Learning
In this work we discuss the incorporation of quadratic neurons into poli...

05/08/2021
Generative Actor-Critic: An Off-policy Algorithm Using the Push-forward Model
Model-free deep reinforcement learning has achieved great success in man...

04/25/2023
Fulfilling Formal Specifications ASAP by Model-free Reinforcement Learning
We propose a model-free reinforcement learning solution, namely the ASAP...

11/26/2019
The problem with DDPG: understanding failures in deterministic environments with sparse rewards
In environments with continuous state and action spaces, state-of-the-ar...
