Ontological Crises in Artificial Agents' Value Systems

05/19/2011
by Peter de Blanc, et al.

Decision-theoretic agents predict and evaluate the results of their actions using a model, or ontology, of their environment. An agent's goal, or utility function, may also be specified in terms of the states of, or entities within, its ontology. If the agent may upgrade or replace its ontology, it faces a crisis: the agent's original goal may not be well-defined with respect to its new ontology. This crisis must be resolved before the agent can make plans towards achieving its goals. In this paper, we discuss which sorts of agents will undergo ontological crises and why we may want to create such agents. We present some concrete examples and argue that a well-defined procedure for resolving ontological crises is needed. We point to some possible approaches to solving this problem and evaluate these methods on our examples.
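
To make the setup concrete, the sketch below is a minimal toy example, not the procedure evaluated in the paper: a decision-theoretic agent's utility function is defined over the states of its old ontology, and when that ontology is replaced by a finer-grained one, some new states have no counterpart, so the utility function is no longer well-defined. One candidate resolution, pulling the old utility back through a mapping from new states to distributions over old states, is shown; all state names, the mapping, and the fallback value are assumptions made up for this illustration.

```python
from typing import Callable, Dict

# Old ontology: the agent's original, coarse model of its environment.
OLD_STATES = ["goal_object_present", "goal_object_absent"]

# Utility function specified directly over old-ontology states.
old_utility: Dict[str, float] = {
    "goal_object_present": 1.0,
    "goal_object_absent": 0.0,
}

# New ontology: a refined model whose states need not line up with the old ones.
NEW_STATES = ["configuration_A", "configuration_B", "configuration_unforeseen"]


def pull_back_utility(
    new_to_old: Callable[[str], Dict[str, float]],
    old_u: Dict[str, float],
    default: float = 0.0,
) -> Dict[str, float]:
    """Induce a utility over new states by mapping each new state to a
    distribution over old states and taking the expected old utility.
    New states with no old counterpart get a fallback value, which is
    exactly where the ontological crisis shows up."""
    new_u: Dict[str, float] = {}
    for state in NEW_STATES:
        dist = new_to_old(state)
        if not dist:
            new_u[state] = default  # no principled value is available here
        else:
            new_u[state] = sum(p * old_u[o] for o, p in dist.items())
    return new_u


# Hypothetical correspondence between the two ontologies (pure illustration).
def example_mapping(new_state: str) -> Dict[str, float]:
    return {
        "configuration_A": {"goal_object_present": 1.0},
        "configuration_B": {"goal_object_absent": 1.0},
        "configuration_unforeseen": {},  # nothing in the old ontology matches
    }[new_state]


if __name__ == "__main__":
    print(pull_back_utility(example_mapping, old_utility))
    # {'configuration_A': 1.0, 'configuration_B': 0.0, 'configuration_unforeseen': 0.0}
```

The fallback value makes the gap explicit: any choice of default, or of the mapping itself, is a substantive decision about the agent's values, which is why the paper argues that a well-defined procedure for resolving ontological crises is needed.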

Related research

06/14/2023 · The Ontology for Agents, Systems and Integration of Services: OASIS version 2
Semantic representation is a key enabler for several application domains...

11/22/2022 · OLGA: An Ontology and LSTM-based approach for generating Arithmetic Word Problems (AWPs) of transfer type
Machine generation of Arithmetic Word Problems (AWPs) is challenging as ...

06/30/2023 · A behaviouristic approach to representing processes and procedures in the OASIS 2 ontology
Foundational ontologies devoted to the effective representation of proce...

11/12/2015 · Software Agents with Concerns of their Own
We claim that it is possible to have artificial software agents for whic...

01/28/2017 · Practical Reasoning with Norms for Autonomous Software Agents (Full Edition)
Autonomous software agents operating in dynamic environments need to con...

04/06/2023 · Semantic Information in a model of Resource Gathering Agents
We explore the application of a new theory of Semantic Information to th...

08/05/2019 · Corrigibility with Utility Preservation
Corrigibility is a safety property for artificially intelligent agents. ...
