Towards Transparency of TD-RL Robotic Systems with a Human Teacher

05/12/2020 ∙ by Marco Matarese, et al. ∙ 0

The growing demand for autonomous and flexible HRI implies the need to deploy Machine Learning (ML) mechanisms in robot control. However, the use of ML techniques such as Reinforcement Learning (RL) makes the robot's behaviour during the learning process opaque to the observing user. In this work, we propose an emotional model to improve transparency in RL tasks for human-robot collaborative scenarios. The architecture we propose supports the RL algorithm with an emotional model that can both receive human feedback and exhibit emotional responses based on the learning process. The model is entirely based on the Temporal Difference (TD) error. The architecture was tested in an isolated laboratory with a simple setup. The results highlight that displaying its internal state through an emotional response is enough to make a robot transparent to its human teacher. People also prefer to interact with a responsive robot, because they are accustomed to understanding intentions via emotions and social signals.
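The core idea can be sketched as follows. This is our own minimal illustration, not the authors' implementation: the tabular Q-values, the discount factor, and the emotion mapping thresholds are hypothetical. It shows how the sign and magnitude of the TD error could drive a coarse emotional display for the human teacher.

```python
def td_error(q, s, a, r, s_next, gamma=0.9):
    """TD error: delta = r + gamma * max_a' Q(s', a') - Q(s, a).

    `q` is a dict mapping state -> {action: value}; `r` may include
    the human teacher's feedback folded into the reward signal.
    """
    best_next = max(q[s_next].values()) if q[s_next] else 0.0
    return r + gamma * best_next - q[s][a]


def emotion_from_td(delta, threshold=0.1):
    """Map the TD error to an emotional response the robot displays.

    The three-way mapping and the threshold are illustrative choices.
    """
    if delta > threshold:
        return "happy"    # outcome better than expected
    if delta < -threshold:
        return "sad"      # outcome worse than expected
    return "neutral"      # prediction roughly matched outcome
```

A positive TD error (the outcome exceeded the agent's prediction) yields a positive display, a negative error a negative one, so the observer can read the learning progress directly from the robot's expressed state.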


