Metric State Space Reinforcement Learning for a Vision-Capable Mobile Robot

03/07/2006
by Viktor Zhumatiy, et al.

We address the problem of autonomously learning controllers for vision-capable mobile robots. We extend McCallum's (1995) Nearest-Sequence Memory algorithm to allow for general metrics over state-action trajectories. We demonstrate the feasibility of our approach by successfully running our algorithm on a real mobile robot. The algorithm is novel in that it (a) explores the environment and learns directly on a mobile robot without using a hand-made computer model as an intermediate step, (b) does not require manual discretization of the sensor input space, (c) works in piecewise continuous perceptual spaces, and (d) copes with partial observability. Together, these properties allow learning from far less experience than previous methods require.
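The core idea of Nearest-Sequence Memory — estimating action values by averaging the returns of the k stored experiences whose preceding state-action trajectories are nearest under some metric — can be illustrated with a minimal sketch. This is not the paper's implementation: the history depth, the per-step cost cap, and the particular metric (action mismatch plus capped Euclidean distance between observations) are assumptions chosen for illustration only.

```python
import math

def traj_distance(hist_a, hist_b, depth=4):
    """Metric over state-action trajectories: compare the last `depth`
    (observation, action) pairs, most recent first. A mismatched action
    or a missing step each contributes the maximum per-step cost of 1."""
    d = 0.0
    for i in range(1, depth + 1):
        if i > len(hist_a) and i > len(hist_b):
            break  # both trajectories exhausted: no further difference
        if i > len(hist_a) or i > len(hist_b):
            d += 1.0  # one trajectory is shorter than the other
            continue
        (obs_a, act_a), (obs_b, act_b) = hist_a[-i], hist_b[-i]
        if act_a != act_b:
            d += 1.0
        else:
            d += min(1.0, math.dist(obs_a, obs_b))  # continuous observations
    return d

def nsm_q_value(memory, returns, history, obs, action, k=2):
    """Estimate the value of taking `action` at observation `obs` after
    `history` by averaging the returns of the k stored experiences whose
    trajectories are nearest under `traj_distance`.

    memory[t] is the (obs, action) trajectory up to and including step t;
    returns[t] is the discounted return observed from step t onward."""
    candidates = []
    for past, ret in zip(memory, returns):
        if past[-1][1] != action:  # only experiences that took this action
            continue
        d = min(1.0, math.dist(obs, past[-1][0]))  # current observations
        d += traj_distance(history, past[:-1])     # preceding trajectory
        candidates.append((d, ret))
    candidates.sort(key=lambda c: c[0])
    nearest = candidates[:k]
    if not nearest:
        return 0.0  # no stored experience with this action yet
    return sum(r for _, r in nearest) / len(nearest)

# Toy usage: three one-step experiences in a 1-D observation space.
memory = [
    [((0.0,), 'fwd')],   # moving forward near obs 0.0 paid off
    [((4.0,), 'fwd')],   # moving forward near obs 4.0 did not
    [((0.1,), 'turn')],
]
returns = [1.0, 0.0, 0.3]
q_fwd = nsm_q_value(memory, returns, [], (0.05,), 'fwd', k=1)  # nearest: 1.0
```

Because the metric operates on whole trajectories rather than single observations, histories that look identical in the current sensor reading but differ in how they were reached are kept apart, which is how this family of methods copes with partial observability.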
