End-Effect Exploration Drive for Effective Motor Learning

06/29/2020
by Emmanuel Daucé et al.

End-effect drives are proposed here as an effective way to implement goal-directed motor learning in the absence of an explicit forward model. An end-effect model relies on a simple statistical recording of the effects of the current policy, used here as a substitute for more resource-demanding forward models. When combined with a reward structure, it forms the core of a lightweight variational free-energy minimization setup. The main difficulty lies in maintaining this simplified effect model while updating the policy online. When the prior target distribution is uniform, it provides a way to learn an efficient exploration policy, consistent with intrinsic-curiosity principles. When combined with an extrinsic reward, our approach is finally shown to train faster than traditional off-policy techniques.
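To make the core idea concrete, here is a minimal sketch (not the paper's implementation; a discrete effect space, tabular counts, and all names such as EndEffectDrive are illustrative assumptions): the agent maintains an empirical distribution over observed end-effects and rewards the surprise of each new effect, which pushes the effect distribution toward the uniform prior target, in the spirit of curiosity-driven exploration.

```python
import numpy as np

class EndEffectDrive:
    """Running empirical model of end-effects under the current policy.

    With a uniform prior target over effects, rewarding the surprise
    -log p_hat(effect) encourages the policy to visit rare effects,
    flattening the effect distribution (a curiosity-style bonus)."""

    def __init__(self, n_effects, pseudo_count=1.0):
        # Dirichlet-style pseudo-counts keep probabilities strictly positive.
        self.counts = np.full(n_effects, pseudo_count)

    def intrinsic_reward(self, effect):
        # Surprise of the observed end-effect under the current effect model.
        p = self.counts[effect] / self.counts.sum()
        return -np.log(p)

    def update(self, effect):
        # Online update of the simplified effect model alongside the policy.
        self.counts[effect] += 1.0


# Usage: add the intrinsic bonus to any extrinsic reward before the policy
# update; beta (here 0.1) trades off exploration against exploitation.
drive = EndEffectDrive(n_effects=10)
effect = 3  # end-effect reached by the current rollout (hypothetical)
r_total = 0.0 + 0.1 * drive.intrinsic_reward(effect)
drive.update(effect)
```

The pseudo-count initialization is one simple way to keep the sketch well-defined before any effects have been observed; the paper's actual update rule for the effect model may differ.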


Related research

Curiosity-Driven Reinforcement Learning based Low-Level Flight Control (07/28/2023)
Curiosity is one of the main motives in many of the natural creatures wi...

Directed Exploration for Reinforcement Learning (06/18/2019)
Efficient exploration is necessary to achieve good sample efficiency for...

A Policy Gradient Method for Task-Agnostic Exploration (07/09/2020)
In a reward-free environment, what is a suitable intrinsic objective for...

Perturbation-based exploration methods in deep reinforcement learning (11/10/2020)
Recent research on structured exploration placed emphasis on identifying...

Latent Exploration for Reinforcement Learning (05/31/2023)
In Reinforcement Learning, agents learn policies by exploring and intera...

Learning Navigation Subroutines by Watching Videos (05/29/2019)
Hierarchies are an effective way to boost sample efficiency in reinforce...
