Eligibility Propagation to Speed up Time Hopping for Reinforcement Learning

04/03/2009
by Petar Kormushev, et al.

A mechanism called Eligibility Propagation is proposed to speed up the Time Hopping technique used for faster Reinforcement Learning in simulations. Eligibility Propagation gives Time Hopping abilities similar to those that eligibility traces give conventional Reinforcement Learning: it propagates values from one state to all of its temporal predecessors using a state-transition graph. Experiments on a simulated biped crawling robot confirm that Eligibility Propagation accelerates the learning process by more than a factor of 3.
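The core idea described above, propagating a value update backward from a state to all of its recorded temporal predecessors, can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the class name, the geometric `decay` factor, and the `min_weight` cutoff are assumptions introduced here for clarity.

```python
from collections import defaultdict, deque

class EligibilityPropagation:
    """Hypothetical sketch of backward value propagation through a
    state-transition graph, in the spirit of eligibility traces.
    Parameter names and decay scheme are assumptions, not from the paper."""

    def __init__(self, decay=0.8, min_weight=1e-3):
        self.V = defaultdict(float)           # estimated state values
        self.predecessors = defaultdict(set)  # state -> states observed to lead to it
        self.decay = decay                    # attenuation per backward step
        self.min_weight = min_weight          # stop propagating tiny updates

    def record_transition(self, s, s_next):
        """Build the transition graph while the agent (or Time Hopping) explores."""
        self.predecessors[s_next].add(s)

    def propagate(self, state, delta):
        """Apply an update at `state`, then push a decayed share of it
        to every temporal predecessor via breadth-first traversal."""
        queue = deque([(state, delta)])
        visited = set()
        while queue:
            s, d = queue.popleft()
            if s in visited or abs(d) < self.min_weight:
                continue
            visited.add(s)
            self.V[s] += d
            for p in self.predecessors[s]:
                queue.append((p, d * self.decay))
```

For example, after recording the chain `a -> b -> c`, a unit update at `c` also credits `b` and `a` with geometrically smaller shares, so earlier states learn without waiting for future visits.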

