
Knowledge Representation for Robots through Human-Robot Interaction

07/28/2013
by Emanuele Bastianelli et al.

The representation of the knowledge a robot needs to perform complex tasks is restricted by the limitations of perception. One possible way of overcoming these limitations and designing "knowledgeable" robots is to rely on interaction with the user. We propose a multi-modal interaction framework that allows knowledge about the environment where the robot operates to be acquired effectively. In particular, in this paper we present a rich representation framework that can be built automatically from a metric map annotated with the indications provided by the user. Such a representation then allows the robot to ground complex referential expressions for motion commands and to devise topological navigation plans to reach the target locations.
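To illustrate the two capabilities the abstract mentions, the sketch below shows, under loose assumptions, how a user-annotated map can support grounding and topological planning. The data structure, labels, and function names here are hypothetical illustrations, not the paper's actual representation: a topological graph whose nodes carry the labels a user attached during interaction, a lookup that grounds a referential expression against those labels, and a breadth-first search that yields a topological navigation plan.

```python
from collections import deque

# Hypothetical semantic map (an assumption, not the paper's data model):
# nodes from a metric map, each annotated with user-provided labels.
semantic_map = {
    "corridor": {"neighbors": ["kitchen", "office"], "labels": {"corridor"}},
    "kitchen":  {"neighbors": ["corridor"], "labels": {"kitchen", "fridge"}},
    "office":   {"neighbors": ["corridor"], "labels": {"office", "desk"}},
}

def ground(expression, semantic_map):
    """Resolve a referential expression to the node whose labels mention it."""
    for node, info in semantic_map.items():
        if expression in info["labels"]:
            return node
    return None  # the expression could not be grounded

def topological_plan(start, goal, semantic_map):
    """Breadth-first search over the topological graph for a node sequence."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in semantic_map[path[-1]]["neighbors"]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # goal unreachable from start

# A motion command like "go to the fridge" first grounds the expression,
# then plans a route over the topological graph.
goal = ground("fridge", semantic_map)                   # -> "kitchen"
plan = topological_plan("office", goal, semantic_map)   # -> ["office", "corridor", "kitchen"]
```

The key design point the abstract implies is the separation of concerns: grounding maps language to symbols in the representation, while planning operates purely on the graph, so richer annotations improve both without changing either algorithm.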

