Towards Human-Level Learning of Complex Physical Puzzles

by Kei Ota, et al.

Humans quickly solve tasks in novel systems with complex dynamics, without requiring much interaction. While deep reinforcement learning algorithms have achieved tremendous success in many complex tasks, these algorithms need a large number of samples to learn meaningful policies. In this paper, we present the task of navigating a marble to the center of a circular maze. While this system is intuitive and easy for humans to solve, standard reinforcement learning algorithms find it very difficult and sample-inefficient to learn meaningful policies for. We present a model that learns to move a marble in this complex environment within minutes of interaction with the real system. Learning consists of initializing a physics engine with parameters estimated from data collected on the real system. The error in the physics engine is then corrected using Gaussian process regression, which models the residual between real observations and physics-engine predictions. The physics engine equipped with the residual model is then used to control the marble in the maze environment via model-predictive feedback over a receding horizon. We contrast the learning behavior against the time taken by humans to solve the problem and show comparable performance. To the best of our knowledge, this is the first time that a hybrid model consisting of a full physics engine and a statistical function approximator has been used to control a complex physical system in real time using nonlinear model-predictive control (NMPC). Code for the simulation environment can be downloaded here. A video describing our method can be found here.
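To make the residual-modelling idea concrete, here is a minimal NumPy sketch of the core step described above: a Gaussian process (squared-exponential kernel, mean prediction only) is fit to the difference between the real system's observed next states and the physics engine's predicted next states, and its output is added back onto the engine's prediction. The class name `ResidualGP`, the toy one-dimensional dynamics, and all hyperparameter values are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    # Squared-exponential kernel between the row vectors of A and B.
    d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return np.exp(-0.5 * d2 / length_scale**2)

class ResidualGP:
    """GP regression on the residual between real and simulated next states.

    Illustrative sketch only: mean prediction, fixed kernel hyperparameters,
    no uncertainty propagation.
    """
    def __init__(self, length_scale=1.0, noise=1e-4):
        self.length_scale = length_scale
        self.noise = noise

    def fit(self, X, sim_next, real_next):
        # Targets are the residuals the physics engine fails to capture.
        self.X = X
        self.y = real_next - sim_next
        K = rbf_kernel(X, X, self.length_scale) + self.noise * np.eye(len(X))
        self.alpha = np.linalg.solve(K, self.y)
        return self

    def predict(self, Xq, sim_next_q):
        # Corrected prediction = physics-engine output + GP residual mean.
        Ks = rbf_kernel(Xq, self.X, self.length_scale)
        return sim_next_q + Ks @ self.alpha

# Toy example: the "real" system is x + 0.1*sin(x); the "engine" predicts x.
rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(50, 1))
sim_next = X                        # imperfect physics-engine rollout
real_next = X + 0.1 * np.sin(X)     # observations from the real system
gp = ResidualGP(length_scale=0.5).fit(X, sim_next, real_next)

Xq = np.array([[0.5]])
corrected = gp.predict(Xq, Xq)      # engine prediction plus learned residual
```

In the paper's setting, the corrected one-step predictor would then be rolled out inside the NMPC loop over the receding horizon; this sketch shows only the hybrid-model correction itself.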



