
Accelerating Reinforcement Learning for Reaching using Continuous Curriculum Learning

02/07/2020
by Sha Luo et al., University of Groningen

Reinforcement learning has shown great promise for training robot behavior because of its sequential decision-making nature. However, the enormous amount of interactive and informative training data it requires remains the major stumbling block to progress. In this study, we focus on accelerating reinforcement learning (RL) training and improving performance on multi-goal reaching tasks. Specifically, we propose a precision-based continuous curriculum learning (PCCL) method in which the precision requirements are gradually adjusted during training, instead of being fixed in a static schedule. To this end, we explore various continuous curriculum strategies for controlling the training process. The approach is tested with a Universal Robots UR5e arm in both simulated and real-world multi-goal reaching experiments. The experimental results support the hypothesis that a static training schedule is suboptimal, and that an appropriate decay function for curriculum learning yields better results in less training time.
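The abstract does not spell out the decay schedule itself. As a minimal sketch of the idea, assuming the precision requirement is a Euclidean distance tolerance on the end-effector that starts loose and decays exponentially toward a tight target value, the snippet below illustrates how such a continuous curriculum could be wired into a sparse success check. The names precision_schedule, reached, eps_start, eps_end, and decay_rate, as well as all numeric values, are illustrative assumptions, not the paper's implementation.

import numpy as np

def precision_schedule(epoch, eps_start=0.20, eps_end=0.01, decay_rate=0.02):
    """Hypothetical continuous curriculum: exponentially shrink the reach
    tolerance (in metres) from eps_start toward eps_end as training proceeds.
    All parameter values are illustrative, not taken from the paper."""
    return eps_end + (eps_start - eps_end) * np.exp(-decay_rate * epoch)

def reached(achieved_goal, desired_goal, epoch):
    """Sparse success signal: the goal counts as reached when the
    end-effector lies within the current (decaying) tolerance."""
    distance = np.linalg.norm(np.asarray(achieved_goal) - np.asarray(desired_goal))
    return distance <= precision_schedule(epoch)

# The tolerance tightens smoothly instead of jumping between static stages.
for epoch in (0, 50, 100, 200):
    print(epoch, round(precision_schedule(epoch), 4))

Under this sketch, early episodes are rewarded for coarse reaches, and the requirement tightens continuously; a static schedule would instead hold the tolerance fixed or change it in discrete jumps.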


Related Research

10/31/2022 · Reinforcement Learning for Solving Robotic Reaching Tasks in the Neurorobotics Platform
In recent years, reinforcement learning (RL) has shown great potential f...

04/25/2023 · Proximal Curriculum for Reinforcement Learning Agents
We consider the problem of curriculum design for reinforcement learning ...

11/07/2021 · Automatic Goal Generation using Dynamical Distance Learning
Reinforcement Learning (RL) agents can learn to solve complex sequential...

03/09/2023 · GOATS: Goal Sampling Adaptation for Scooping with Curriculum Reinforcement Learning
In this work, we first formulate the problem of goal-conditioned robotic...

06/16/2021 · TSO: Curriculum Generation using continuous optimization
The training of deep learning models poses vast challenges of including ...