Towards Human-Level Learning of Complex Physical Puzzles

11/14/2020
by Kei Ota, et al.

Humans quickly solve tasks in novel systems with complex dynamics, without requiring much interaction. While deep reinforcement learning algorithms have achieved tremendous success in many complex tasks, these algorithms need a large number of samples to learn meaningful policies. In this paper, we present a task for navigating a marble to the center of a circular maze. While this system is very intuitive and easy for humans to solve, it is very difficult and sample-inefficient for standard reinforcement learning algorithms to learn meaningful policies for it. We present a model that learns to move the marble in this complex environment within minutes of interaction with the real system. Learning consists of initializing a physics engine with parameters estimated using data from the real system. The error in the physics engine is then corrected using Gaussian process regression, which models the residual between real observations and physics engine simulations. The physics engine equipped with the residual model is then used to control the marble in the maze environment using model-predictive feedback over a receding horizon. We compare the learning behavior of our method against the time taken by humans to solve the task and show that the two are comparable. To the best of our knowledge, this is the first time that a hybrid model consisting of a full physics engine along with a statistical function approximator has been used to control a complex physical system in real time using nonlinear model-predictive control (NMPC). Code for the simulation environment can be downloaded at https://www.merl.com/research/license/CME . A video describing our method can be found at https://youtu.be/xaxNCXBovpc .
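To make the pipeline concrete, below is a minimal Python sketch (not the authors' implementation) of the hybrid model described in the abstract: a one-step physics engine prediction corrected by a Gaussian process fitted to the residual between real observations and simulated transitions, then used inside a receding-horizon controller. The physics_step function is a hypothetical stand-in for a call into an identified physics engine, and the random-shooting planner is a simpler substitute for the nonlinear MPC used in the paper.

# Minimal sketch, assuming a toy stand-in for the physics engine and
# a random-shooting planner in place of the paper's NMPC.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel


def physics_step(state, action):
    """Hypothetical one-step prediction from an identified physics engine.
    Here: a toy damped point mass, state = [x, y, vx, vy], action = [fx, fy]."""
    dt, damping = 0.02, 0.1
    pos, vel = state[:2], state[2:]
    vel = vel + dt * (action - damping * vel)
    pos = pos + dt * vel
    return np.concatenate([pos, vel])


def fit_residual_model(states, actions, next_states):
    """Fit a GP to the residual between real transitions and the simulator."""
    X = np.hstack([states, actions])
    residuals = next_states - np.array(
        [physics_step(s, a) for s, a in zip(states, actions)]
    )
    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gp.fit(X, residuals)
    return gp


def hybrid_step(gp, state, action):
    """Physics engine prediction plus learned residual correction."""
    correction = gp.predict(np.hstack([state, action])[None, :])[0]
    return physics_step(state, action) + correction


def mpc_action(gp, state, goal, horizon=15, n_samples=256,
               rng=np.random.default_rng(0)):
    """Receding-horizon control by random shooting: sample action sequences,
    roll them out through the hybrid model, return the first action of the best."""
    best_cost, best_action = np.inf, np.zeros(2)
    for _ in range(n_samples):
        actions = rng.uniform(-1.0, 1.0, size=(horizon, 2))
        s, cost = state.copy(), 0.0
        for a in actions:
            s = hybrid_step(gp, s, a)
            cost += np.linalg.norm(s[:2] - goal)
        if cost < best_cost:
            best_cost, best_action = cost, actions[0]
    return best_action

In this sketch, transitions collected from a few minutes of interaction with the real system would be passed to fit_residual_model, after which mpc_action is called at every control step with the current marble state and the maze-center goal, replanning over the receding horizon each time.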


research
10/27/2020

Can Reinforcement Learning for Continuous Control Generalize Across Physics Engines?

Reinforcement learning (RL) algorithms should learn as much as possible ...
research
04/19/2019

Learning Manipulation under Physics Constraints with Visual Perception

Understanding physical phenomena is a key competence that enables humans...
research
06/14/2016

Model-Free Episodic Control

State of the art deep reinforcement learning algorithms take many millio...
research
03/16/2023

Residual Physics Learning and System Identification for Sim-to-real Transfer of Policies on Buoyancy Assisted Legged Robots

The light and soft characteristics of Buoyancy Assisted Lightweight Legg...
research
07/06/2019

Intrinsic Motivation Driven Intuitive Physics Learning using Deep Reinforcement Learning with Intrinsic Reward Normalization

At an early age, human infants are able to learn and build a model of th...
research
06/17/2022

A Hybrid Modelling Approach for Aerial Manipulators

Aerial manipulators (AM) exhibit particularly challenging, non-linear dy...
research
11/27/2021

Computational simulation and the search for a quantitative description of simple reinforcement schedules

We aim to discuss schedules of reinforcement in its theoretical and prac...
