Model-Based Inverse Reinforcement Learning from Visual Demonstrations
Scaling model-based inverse reinforcement learning (IRL) to real robotic manipulation tasks with unknown dynamics remains an open problem. The key challenges lie in learning good dynamics models, developing algorithms that scale to high-dimensional state spaces, and learning from both visual and proprioceptive demonstrations. In this work, we present a gradient-based inverse reinforcement learning framework that utilizes a pre-trained visual dynamics model to learn cost functions when given only visual human demonstrations. The learned cost functions are then used to reproduce the demonstrated behavior via visual model predictive control. We evaluate our framework on two basic object manipulation tasks on real hardware.
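The abstract describes a bi-level scheme: an inner loop plans actions through a differentiable dynamics model under the current cost, and an outer loop updates the cost parameters so that planning reproduces the demonstration. The following is a minimal PyTorch sketch of that idea, not the authors' implementation: the latent `dynamics` function stands in for the pre-trained visual dynamics model, and all names, dimensions, and the squared demonstration-matching loss are illustrative assumptions.

```python
import torch
import torch.nn as nn

LATENT_DIM, ACTION_DIM, HORIZON = 8, 2, 5  # illustrative sizes

class CostNet(nn.Module):
    """Hypothetical learned cost over latent visual states."""
    def __init__(self, latent_dim=LATENT_DIM):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 1))
    def forward(self, z):
        return self.net(z).squeeze(-1)

def dynamics(z, a):
    """Stand-in for the pre-trained visual dynamics model: any
    differentiable map (latent state, action) -> next latent state."""
    return z + torch.tanh(a @ torch.ones(ACTION_DIM, LATENT_DIM))

def rollout(z0, actions):
    """Unroll the dynamics model over an action sequence."""
    zs, z = [], z0
    for t in range(actions.shape[0]):
        z = dynamics(z, actions[t])
        zs.append(z)
    return torch.stack(zs)

def inner_plan(cost, z0, inner_steps=10, lr=0.5):
    """Inner loop: gradient-descend an action sequence on the current
    cost. Only the final step keeps the graph, so the outer loss can
    differentiate through planning into the cost parameters."""
    actions = torch.zeros(HORIZON, ACTION_DIM, requires_grad=True)
    for i in range(inner_steps):
        traj_cost = cost(rollout(z0, actions)).sum()
        (grad,) = torch.autograd.grad(traj_cost, actions,
                                      create_graph=(i == inner_steps - 1))
        actions = actions - lr * grad
    return actions

# Outer loop: update the cost so that planning matches the demonstration.
cost = CostNet()
opt = torch.optim.Adam(cost.parameters(), lr=1e-3)
z0 = torch.zeros(LATENT_DIM)
demo = torch.randn(HORIZON, LATENT_DIM)  # placeholder demo latents

for step in range(100):
    opt.zero_grad()
    planned = rollout(z0, inner_plan(cost, z0))
    loss = ((planned - demo) ** 2).mean()  # match demonstrated latents
    loss.backward()
    opt.step()
```

Differentiating only through the last inner planning step, rather than the whole optimization trace, is one common way to keep the bi-level gradient computation tractable; whether that matches the paper's exact formulation is an assumption here.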