Model-Based Policy Search Using Monte Carlo Gradient Estimation with Real Systems Application

by Fabio Amadio, et al.

In this paper, we present a Model-Based Reinforcement Learning (MBRL) algorithm named Monte Carlo Probabilistic Inference for Learning COntrol (MC-PILCO). The algorithm relies on Gaussian Processes (GPs) to model the system dynamics and on a Monte Carlo approach to estimate the policy gradient. This defines a framework in which we ablate the choice of the following components: (i) the selection of the cost function, (ii) the optimization of policies using dropout, (iii) improved data efficiency through the use of structured kernels in the GP models. The combination of the aforementioned aspects dramatically affects the performance of MC-PILCO. Numerical comparisons in a simulated cart-pole environment show that MC-PILCO exhibits better data efficiency and control performance than state-of-the-art GP-based MBRL algorithms. Finally, we apply MC-PILCO to real systems, considering in particular systems with partially measurable states. We discuss the importance of modeling both the measurement system and the state estimators during policy optimization. The effectiveness of the proposed solutions is tested in simulation and on two real systems, a Furuta pendulum and a ball-and-plate rig.
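To illustrate the core idea of estimating a policy gradient by Monte Carlo rollouts through a learned stochastic dynamics model, here is a minimal, self-contained sketch. It is not the authors' implementation: the one-step model `sample_next_state` is a hypothetical stand-in for a GP posterior (assumed mean and predictive standard deviation), the policy is a toy linear state feedback, and the gradient is taken by finite differences over the Monte Carlo cost estimate, whereas MC-PILCO backpropagates through the sampled particles with the reparameterization trick.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_next_state(x, u):
    """Stand-in for a one-step GP dynamics model on a toy 1-D system:
    a mean prediction plus Gaussian noise with state-dependent scale."""
    mean = 0.9 * x + 0.5 * u           # assumed learned mean dynamics
    std = 0.05 * (1.0 + abs(x))        # assumed GP predictive std
    return mean + std * rng.standard_normal()

def policy(x, theta):
    """Toy linear state-feedback policy with a single parameter."""
    return theta * x

def rollout_cost(theta, x0=1.0, horizon=20):
    """Sample one particle trajectory and accumulate a quadratic cost."""
    x, cost = x0, 0.0
    for _ in range(horizon):
        u = policy(x, theta)
        x = sample_next_state(x, u)
        cost += x ** 2
    return cost

def mc_expected_cost(theta, n_particles=500):
    """Monte Carlo estimate of the expected cumulative cost."""
    return np.mean([rollout_cost(theta) for _ in range(n_particles)])

def mc_policy_gradient(theta, eps=0.05):
    """Central finite-difference gradient of the MC cost estimate
    (MC-PILCO instead differentiates through the particles)."""
    return (mc_expected_cost(theta + eps)
            - mc_expected_cost(theta - eps)) / (2 * eps)

theta = 0.0
for _ in range(30):                    # plain gradient descent on the policy
    theta -= 0.02 * mc_policy_gradient(theta)
```

Because the model is stochastic, the expected cost is only available as a particle average; the gradient descent loop drives `theta` toward a stabilizing gain that shrinks the closed-loop state and hence the cumulative cost.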


