Related research:
- Sample-Efficient Reinforcement Learning with Stochastic Ensemble Value Expansion
- Local Search for Policy Iteration in Continuous Control
- On the model-based stochastic value gradient for continuous reinforcement learning
- Efficient and Robust Reinforcement Learning with Uncertainty-based Value Expansion
- Asynchronous Methods for Model-Based Reinforcement Learning
- Combining Q-Learning and Search with Amortized Value Estimates
- Model-based controlled learning of MDP policies with an application to lost-sales inventory control
Randomized Ensembled Double Q-Learning: Learning Fast Without a Model
Using a high Update-To-Data (UTD) ratio, model-based methods have recently achieved much higher sample efficiency than previous model-free methods for continuous-action DRL benchmarks. In this paper, we introduce a simple model-free algorithm, Randomized Ensembled Double Q-Learning (REDQ), and show that its performance is just as good as, if not better than, a state-of-the-art model-based algorithm for the MuJoCo benchmark. Moreover, REDQ can achieve this performance using fewer parameters than the model-based method, and with less wall-clock run time. REDQ has three carefully integrated ingredients which allow it to achieve its high performance: (i) a UTD ratio >> 1; (ii) an ensemble of Q functions; (iii) in-target minimization across a random subset of Q functions from the ensemble. Through carefully designed experiments, we provide a detailed analysis of REDQ and related model-free algorithms. To our knowledge, REDQ is the first successful model-free DRL algorithm for continuous-action spaces using a UTD ratio >> 1.
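Ingredient (iii), in-target minimization, can be sketched in a few lines: the Bellman target takes the minimum over a randomly sampled subset of the ensemble's next-state Q estimates rather than over the whole ensemble. The sketch below is illustrative only, not the paper's implementation; the function name and signature are invented, and the entropy term used by SAC-style agents is omitted for brevity.

```python
import numpy as np

def redq_target(q_values, reward, gamma=0.99, subset_size=2, rng=None):
    """REDQ-style Bellman target with in-target minimization.

    q_values: next-state estimates Q_i(s', a') from each of the N
    ensemble members. A random subset of size `subset_size` is drawn
    without replacement, and the minimum over that subset is used,
    which controls overestimation bias without needing N pessimistic
    critics.
    """
    rng = np.random.default_rng(rng)
    q_values = np.asarray(q_values, dtype=float)
    idx = rng.choice(len(q_values), size=subset_size, replace=False)
    return reward + gamma * np.min(q_values[idx])

# Example: ensemble of 3 critics, subset of 2.
target = redq_target([1.0, 2.0, 3.0], reward=0.5, gamma=0.9, subset_size=2, rng=0)
```

In the full algorithm this target is shared by every critic in the ensemble, and the critic update is repeated many times per environment step (the UTD ratio ≫ 1 from ingredient (i)).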