Leveraging Forward Model Prediction Error for Learning Control
Learning for model-based control can be sample-efficient and generalize well; however, successfully learning models and controllers that represent the problem at hand can be challenging for complex tasks. Using inaccurate models for learning can lead to sub-optimal solutions that are unlikely to perform well in practice. In this work, we present a learning approach that iterates between model learning and data collection and leverages forward model prediction error for learning control. We show how using the controller's prediction as input to a forward model creates a differentiable connection between the controller and the model, allowing us to formulate a loss in the state space. This lets us include forward model prediction error during controller learning, and we show that this yields a loss objective that significantly improves learning on different motor control tasks. We provide empirical and theoretical results that show the benefits of our method and present evaluations in simulation for learning control on a 7-DoF manipulator and an underactuated 12-DoF quadruped. We show that our approach successfully learns controllers for challenging motor control tasks involving contact switching.
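To illustrate the differentiable connection described in the abstract, here is a minimal sketch of one controller update in PyTorch. It is not the authors' implementation: the network architectures, the goal-reaching task loss, and the weighting term lambda_fwd are assumptions made only for illustration, and the exact way the paper combines the task objective with the forward model prediction error may differ.

```python
# Minimal sketch (not the authors' code): the controller's action is passed
# through a learned forward model, so a loss formulated in state space
# backpropagates through the model into the controller's parameters.
import torch
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self, in_dim, out_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.net(x)

state_dim, action_dim = 12, 6                            # hypothetical dimensions
controller = MLP(2 * state_dim, action_dim)              # (state, goal) -> action
forward_model = MLP(state_dim + action_dim, state_dim)   # (state, action) -> next state

opt = torch.optim.Adam(controller.parameters(), lr=1e-3)
lambda_fwd = 0.5   # assumed weighting of the prediction-error term

def controller_loss(state, goal, next_state_observed):
    """State-space loss: gradients flow from the predicted next state,
    through the forward model, back into the controller."""
    action = controller(torch.cat([state, goal], dim=-1))
    next_state_pred = forward_model(torch.cat([state, action], dim=-1))
    task_loss = nn.functional.mse_loss(next_state_pred, goal)                 # reach the goal state
    fwd_error = nn.functional.mse_loss(next_state_pred, next_state_observed)  # forward model prediction error
    return task_loss + lambda_fwd * fwd_error

# One gradient step on a hypothetical batch of previously collected transitions.
state = torch.randn(32, state_dim)
goal = torch.randn(32, state_dim)
next_state_observed = torch.randn(32, state_dim)

opt.zero_grad()
loss = controller_loss(state, goal, next_state_observed)
loss.backward()
opt.step()
```

In the iterative scheme the abstract describes, a controller update of this kind would alternate with collecting new rollouts and refitting the forward model on them.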