Learning to Play Table Tennis From Scratch using Muscular Robots

06/10/2020
by Dieter Büchler, et al.

Dynamic tasks like table tennis are relatively easy for humans to learn but pose significant challenges to robots. Such tasks require accurate control of fast movements and precise timing in the presence of imprecise state estimation of the flying ball and the robot. Reinforcement Learning (RL) has shown promise in learning complex control tasks from data. However, applying step-based RL to dynamic tasks on real systems is safety-critical, as RL requires exploring and failing safely for millions of time steps in high-speed regimes. In this paper, we demonstrate that safe learning of table tennis using model-free Reinforcement Learning can be achieved with robot arms driven by pneumatic artificial muscles (PAMs). The softness and back-drivability of PAMs prevent the system from leaving the safe region of its state space. In this manner, RL empowers the robot to return and smash real balls at 5 m/s and 12 m/s on average to a desired landing point. Our setup allows the agent to learn this safety-critical task (i) without safety constraints in the algorithm, (ii) while maximizing the speed of returned balls directly in the reward function, (iii) using a stochastic policy that acts directly on the low-level controls of the real system, (iv) training for thousands of trials, and (v) from scratch without any prior knowledge. Additionally, we present HYSR, a practical hybrid sim-and-real training scheme that avoids playing real balls during training by randomly replaying recorded ball trajectories in simulation while applying actions to the real robot. This work is the first to (a) learn a safety-critical dynamic task fail-safe using anthropomorphic robot arms, (b) learn a precision-demanding problem with a PAM-driven system despite the control challenges, and (c) train robots to play table tennis without real balls. Videos and datasets are available at muscularTT.embodied.ml.
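The HYSR idea from the abstract, replaying recorded real-ball trajectories in simulation while the policy's low-level actions drive the real robot, can be illustrated with a minimal sketch. Everything here (the mock robot, the class and function names, the reward weights) is an illustrative assumption, not the authors' implementation:

```python
import math
import random

class MockRobot:
    """Stand-in for the PAM-driven arm; tracks a 1-D paddle position."""
    def __init__(self):
        self.pos = 0.0

    def observe(self):
        return [self.pos]

    def apply(self, action):
        # The stochastic policy acts directly on low-level controls
        # (muscle pressures on the real system; a scalar offset here).
        self.pos += action

def reward(landing, target, return_speed):
    # Landing accuracy plus a bonus for ball speed: the abstract states
    # that return speed is maximized directly in the reward function.
    # The 0.1 weight is an assumption.
    return -math.dist(landing, target) + 0.1 * return_speed

def hysr_episode(policy, robot, recorded_trajectories, target=(2.0, 0.0)):
    """One episode against a randomly replayed recorded ball trajectory."""
    trajectory = random.choice(recorded_trajectories)  # no real ball launched
    transitions = []
    for ball_state in trajectory:
        obs = robot.observe() + list(ball_state)  # robot state + simulated ball
        action = policy(obs)
        robot.apply(action)                       # action goes to the real robot
        transitions.append((obs, action))
    # Pretend the final paddle position determines the simulated landing point.
    landing = (2.0 + robot.pos, robot.pos)
    return transitions, reward(landing, target, return_speed=5.0)
```

A usage example: with a zero-action policy and one recorded two-step trajectory, `hysr_episode(lambda obs: 0.0, MockRobot(), [[(1.0, 0.5), (0.8, 0.4)]])` returns two transitions and a reward driven entirely by the speed bonus, since the paddle never moves off the target line.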


Related research

11/17/2020
Reachability-based Trajectory Safeguard (RTS): A Safe and Fast Reinforcement Learning Safety Layer for Continuous Control
Reinforcement Learning (RL) algorithms have achieved remarkable performa...

09/16/2023
Stylized Table Tennis Robots Skill Learning with Incomplete Human Demonstrations
In recent years, Reinforcement Learning (RL) is becoming a popular techn...

07/05/2018
Optimizing Execution of Dynamic Goal-Directed Robot Movements with Learning Control
Highly dynamic tasks that require large accelerations and precise tracki...

11/06/2020
Sample-efficient Reinforcement Learning in Robotic Table Tennis
Reinforcement learning (RL) has recently shown impressive success in var...

12/06/2022
Safe Inverse Reinforcement Learning via Control Barrier Function
Learning from Demonstration (LfD) is a powerful method for enabling robo...

07/13/2021
Efficient and Reactive Planning for High Speed Robot Air Hockey
Highly dynamic robotic tasks require high-speed and reactive robots. The...

12/12/2022
Evaluating Model-free Reinforcement Learning toward Safety-critical Tasks
Safety comes first in many real-world applications involving autonomous ...
