High-Efficiency RL Agent

08/30/2019
by Jingbin Liu, et al.

Nowadays, model-free algorithms achieve state-of-the-art performance on many RL problems, but their low sample efficiency limits their use. We combine model-based RL, the soft actor-critic (SAC) framework, and curiosity, and propose an agent called RMC, offering a promising way to achieve good performance while maintaining data efficiency. Our agent surpasses SAC and achieves state-of-the-art performance in both efficiency and stability. It can also solve POMDP problems and generalizes well from MDP to POMDP.
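The abstract's core idea, adding a curiosity signal on top of an entropy-regularized actor-critic, can be illustrated with a minimal sketch. This is not the paper's RMC agent: the forward-model error bonus (ICM-style), the coefficient names `alpha` and `beta`, and the function signatures are all illustrative assumptions.

```python
import numpy as np

def intrinsic_reward(pred_next_state, true_next_state, beta=0.1):
    # Curiosity bonus: squared prediction error of a learned forward model.
    # A surprising transition (large error) yields a larger bonus.
    # ICM-style bonus; `beta` is an assumed scaling coefficient.
    return beta * np.sum((pred_next_state - true_next_state) ** 2)

def augmented_reward(r_ext, pred_next_state, true_next_state,
                     log_pi, alpha=0.2, beta=0.1):
    # Reward actually used for the critic update:
    #   extrinsic reward
    # + curiosity bonus (exploration)
    # - alpha * log_pi  (SAC-style entropy regularization)
    return (r_ext
            + intrinsic_reward(pred_next_state, true_next_state, beta)
            - alpha * log_pi)
```

With a perfect forward model and a deterministic action (`log_pi = 0`), the augmented reward reduces to the extrinsic reward; prediction error or policy entropy each add to it.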

Related research

12/11/2020
OPAC: Opportunistic Actor-Critic
Actor-critic methods, a type of model-free reinforcement learning (RL), ...

11/26/2022
RL-Based Guidance in Outpatient Hysteroscopy Training: A Feasibility Study
This work presents an RL-based agent for outpatient hysteroscopy trainin...

10/02/2019
Improving Sample Efficiency in Model-Free Reinforcement Learning from Images
Training an agent to solve control tasks directly from high-dimensional ...

06/16/2021
Towards Automatic Actor-Critic Solutions to Continuous Control
Model-free off-policy actor-critic methods are an efficient solution to ...

04/28/2020
Sample-Efficient Model-based Actor-Critic for an Interactive Dialogue Task
Human-computer interactive systems that rely on machine learning are bec...

10/07/2019
Reinforcement Learning with Structured Hierarchical Grammar Representations of Actions
From a young age humans learn to use grammatical principles to hierarchi...

08/14/2020
Model-Free Optimal Control of Linear Multi-Agent Systems via Decomposition and Hierarchical Approximation
Designing the optimal linear quadratic regulator (LQR) for a large-scale...
