Learning to Understand by Evolving Theories

07/27/2013
by Martin E. Mueller, et al.

In this paper, we describe an approach that enables an autonomous system to infer the semantics of a command (i.e., a symbol sequence representing an action) from the relations between changes in its observations and the corresponding action instances. We present a method for inducing a theory (i.e., a semantic description) of the meaning of a command using only a minimal set of background knowledge. The only input is a sequence of observations, from which we extract the kinds of effects caused by performing the command. In this way, we obtain a description of the semantics of the action and, hence, a definition.
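To make the idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual induction method) of extracting a command's effects from changes in observations. It assumes observations are represented as sets of ground facts and that each execution of a command is recorded as a before/after pair; the names `Episode` and `infer_effects` are illustrative only. The add/delete sets that hold across all recorded executions stand in for the induced semantic description of the command.

```python
from dataclasses import dataclass

# Hypothetical representation: an observation is a set of ground facts
# (strings), and each execution of the command is recorded as the
# observation just before and just after the action instance.

@dataclass(frozen=True)
class Episode:
    before: frozenset  # facts observed before the command was executed
    after: frozenset   # facts observed after the command was executed

def infer_effects(episodes):
    """Return (add_effects, delete_effects) consistent across all episodes.

    Facts that appear after but not before the command in every episode are
    treated as effects it adds; facts present before but absent afterwards
    in every episode are treated as effects it deletes.
    """
    adds = None
    dels = None
    for ep in episodes:
        ep_adds = ep.after - ep.before
        ep_dels = ep.before - ep.after
        adds = ep_adds if adds is None else adds & ep_adds
        dels = ep_dels if dels is None else dels & ep_dels
    return adds or frozenset(), dels or frozenset()

if __name__ == "__main__":
    # Two observed executions of a hypothetical "push_button" command.
    episodes = [
        Episode(frozenset({"door(closed)", "light(off)"}),
                frozenset({"door(closed)", "light(on)"})),
        Episode(frozenset({"door(open)", "light(off)"}),
                frozenset({"door(open)", "light(on)"})),
    ]
    add_fx, del_fx = infer_effects(episodes)
    print("adds:", sorted(add_fx))     # -> ['light(on)']
    print("deletes:", sorted(del_fx))  # -> ['light(off)']
```

Under these assumptions, the inferred add/delete sets form a STRIPS-like definition of the command; the paper's approach generalizes this idea by inducing a relational theory rather than intersecting literal fact sets.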


