A Review on Learning Planning Action Models for Socio-Communicative HRI

10/22/2018
by Ankuj Arora, et al.

For social robots to see widespread use in companionship, caretaking, and domestic help, they must demonstrate social intelligence; to be acceptable, they must exhibit socio-communicative skills. Classic approaches that program HRI from observed human-human interactions fail to capture both the subtlety of multimodal interaction and the key structural differences between robots and humans. The former failure stems from the difficulty of quantifying and coding multimodal behaviours; the latter from the difference in degrees of freedom between a robot and a human. However, reverse-engineering the robot's underlying behavioral blueprint from multimodal HRI traces is an option worth exploring. In this spirit, an entire HRI can be seen as a goal-driven sequence of speech acts exchanged between the robot and the human, with each act treated as an action. The interaction is then a sequence of actions propelling it from an initial state to a goal state, known as a plan in the domain of AI planning; the action sequence produced by executing a plan is called a trace. AI techniques, such as machine learning, can be used to learn behavioral models (known as symbolic action models in AI) from such multimodal traces, with the intent that these models be reusable for AI planning. This article reviews recent machine learning techniques for learning planning action models that can be applied to HRI in order to render robots socio-communicative.
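To make the plan-based view concrete, here is a minimal sketch of one classic action-model learning idea the review surveys: given STRIPS-style traces of (state before, action, state after), a model for each action can be induced by intersecting what holds before every occurrence (preconditions) and what is consistently gained or lost (add/delete effects). The speech-act names, propositions, and traces below are hypothetical illustrations, not data from the article.

```python
# Hypothetical HRI traces: each step is (state_before, action, state_after),
# with states as sets of propositions (a simplified STRIPS-style view).
traces = [
    [
        ({"greeting_pending"}, "greet", {"greeted"}),
        ({"greeted"}, "ask_task", {"greeted", "task_known"}),
    ],
    [
        ({"greeting_pending", "noisy"}, "greet", {"greeted", "noisy"}),
    ],
]

def learn_action_model(traces):
    """Induce a naive STRIPS-style model per action:
    preconditions = propositions true before every occurrence,
    add effects   = propositions gained in every occurrence,
    del effects   = propositions lost in every occurrence."""
    models = {}
    for trace in traces:
        for before, action, after in trace:
            pre, add, dele = models.setdefault(
                action, (set(before), after - before, before - after)
            )
            pre &= before            # keep only always-true preconditions
            add &= (after - before)  # keep only always-gained effects
            dele &= (before - after) # keep only always-lost effects
    return models

models = learn_action_model(traces)
print(models["greet"])
# ({'greeting_pending'}, {'greeted'}, {'greeting_pending'})
```

This intersection scheme assumes noise-free, fully observable traces; the surveyed approaches relax these assumptions in various ways (e.g., handling disordered or noisy traces).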


Related research

09/04/2020 · AIR-Act2Act: Human-human interaction dataset for teaching non-verbal social behaviors to robots
    To better interact with users, a social robot should understand the user...

02/24/2017 · Robot gains Social Intelligence through Multimodal Deep Reinforcement Learning
    For robots to coexist with humans in a social world like ours, it is cru...

11/04/2014 · Learning of Agent Capability Models with Applications in Multi-agent Planning
    One important challenge for a set of agents to achieve more efficient co...

08/26/2019 · Learning Action Models from Disordered and Noisy Plan Traces
    There is increasing awareness in the planning community that the burden ...

03/14/2023 · Chat with the Environment: Interactive Multimodal Perception using Large Language Models
    Programming robot behaviour in a complex world faces challenges on multi...

11/26/2020 · AMLSI: A Novel Accurate Action Model Learning Algorithm
    This paper presents a new approach based on grammar induction called AMLSI...
