Toward Formalizing Teleportation of Pedagogical Artificial Agents

04/10/2018
by John Angel, et al.

Our paradigm for the use of artificial agents to teach requires among other things that they persist through time in their interaction with human students, in such a way that they "teleport" or "migrate" from an embodiment at one time t to a different embodiment at later time t'. In this short paper, we report on initial steps toward the formalization of such teleportation, in order to enable an overseeing AI system to establish, mechanically, and verifiably, that the human students in question will likely believe that the very same artificial agent has persisted across such times despite the different embodiments.

1 Introduction

Our paradigm for the use of artificial agents to teach requires among other things that they persist through time in their interaction with human students, in such a way that they “teleport” or “migrate” from an embodiment at one time t to a different embodiment at a later time t′. In this short paper, we report on initial steps toward the formalization of such teleportation, in order to enable an overseeing AI system to establish, mechanically, and verifiably, that the human students in question will likely believe that the very same artificial agent has persisted across such times despite the different embodiments.

The plan for the sequel is straightforward, and as follows. After encapsulating our paradigm for the deployment of artificial agents in service of learning, and taking note of the fact that the “teleportation”/“migration” problem has hitherto been treated only informally, we then convey the kernel of our approach to formalizing agent teleportation between different embodiments, formalize this kernel to a degree, in order to produce an initial simulation, and then wrap up with some final remarks.

2 Our Paradigm & Teleportation

A crucial part of our novel paradigm for artificial agents that teach is the engineering of a class of AIs, crucially powered by cognitive logics, able to persist through days and weeks in their interaction with the humans whose education is to be thereby enhanced. The artificial agents in our paradigm are able to seamlessly “teleport” between the heterogeneous environments in which a human learner may find herself as time unfolds; this capacity is intended to provide a continuous educational experience to the human student, and offers the possibility of human-machine friendship.

In short, our agents need to be “teleportative.” This means that the agent should be usable in multiple hardware environments by a user, such that the user has the impression of a continuous, uninterrupted interaction with the very same agent. This helps to reinforce the possibility of a persistent, trusting relationship between human and machine.
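To picture this requirement at the systems level, here is a minimal sketch in Python of suspending an agent's state on one device and resuming it on another. The names (AgentState, suspend, resume) and the choice of JSON serialization are purely illustrative assumptions on our part, not part of our formal machinery.

import json
from dataclasses import dataclass, field, asdict

@dataclass
class AgentState:
    # Hypothetical persistent state carried across embodiments.
    agent_id: str
    dialogue_history: list = field(default_factory=list)
    student_model: dict = field(default_factory=dict)

def suspend(state: AgentState) -> str:
    # Serialize the agent's state when it leaves an embodiment.
    return json.dumps(asdict(state))

def resume(blob: str) -> AgentState:
    # Restore the very same agent's state in a new embodiment.
    return AgentState(**json.loads(blob))

For example, one might suspend on a classroom robot and resume on the student's tablet; the formal question addressed below is whether the student will then believe it is the same agent.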

3 Prior Accounts of Teleportation of Artificial Agents

There is some excellent and interesting prior work on teleporting artificial agents. Some explore how the consistency of a migrating agent’s memory affects a user’s perception of a continuous identity [1]. Others shed light on visual cues useful for convincing users of an agent’s teleportation [6]. In addition, excellent progress has been made toward the design of migrating agents [5] and the testing of real-world implementations of such agents [2]. Unfortunately for our purposes, the prior art is informal. Our goal is to capture teleportation formally, and on the strength of that formalization to enable an overseeing AI system to prove, or minimally justify rigorously, that the teleportation in question is indeed believable.

4 The Kernel of the Formalization

In the longstanding quasi-technical literature on personal identity in philosophy there is a strong tradition of trying to work out a rigorous account of when a person p at time t (= p_t) is identical with a person p′ at a later time t′ (= p′_t′) on the basis of shared memories between p_t and p′_t′.

The goal of our initial formalization is to build a system that can find a proof for when it believes that a student believes two embodied agents are the same agent. The system can conclude that the student believes two embodiments to be the same if the system can find a proof that it believes that the student believes that the two embodiments have, at specific times, a belief that cannot be held by more than one agent. If the system fails to find such a proof or argument, then the system can take corrective action to make it more explicit to the human that the embodiments are the same. Note that this formalization requires the system to reason about beliefs of agents which might themselves be about beliefs of other agents (and so on).
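A minimal sketch of this nested-belief criterion, in Python, may help fix ideas; the tuple encoding and every name here (B, proof_goal, and so on) are our own illustrative assumptions, not the formal system of [4] or the ShadowProver API.

# Beliefs as nested tuples: ("B", a, t, phi) reads "agent a believes phi at time t".
def B(agent, time, phi):
    # Belief operator; beliefs nest freely, so phi may itself be a belief.
    return ("B", agent, time, phi)

# The system's proof goal: the system believes that the student believes
# that both embodiments hold, at their respective times, some belief phi
# assumed to be holdable by only one agent.
def proof_goal(student, e1, t1, e2, t2, now, phi):
    return B("system", now,
             B(student, now,
               ("and", B(e1, t1, phi), B(e2, t2, phi))))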

5 Initial Formalization and Simulation

The requirement that the system understand the student’s beliefs about other embodied agents’ beliefs implies that we need a sufficiently expressive system. BDI logics (belief/desire/intention) have a long tradition of being used to model such agents [7]. For our formalization, we use a system that is a proper superset (with respect to modal operators and inference schemata) of such logics.

We specifically use the formal system in [4]. The system is a modal extension of first-order logic, and has the following modal operators: B, for belief, and P, for perception. The syntax and inference schemata of the system are shown below. φ is a meta-variable for formulae, and γ is any first-order atomic formula. Assume that we have at hand a first-order alphabet augmented with a finite set of constant symbols for agents (a1, a2, …) and a countably infinite set of constant symbols for times (t1, t2, …). (Sometimes we simply use a for agents and t for times below.) x, y, … are first-order variables. The grammar is as follows:

φ ::= γ | ¬φ | φ ∧ φ | φ ∨ φ | φ → φ | ∀x φ | B(a, t, φ) | P(a, t, φ)

B(a, t, φ) stands for agent a at time t believing φ, and P(a, t, φ) stands for agent a at time t perceiving φ. The base inference schemata are given here:

[I_P] From P(a, t, φ), infer B(a, t, φ).

[I_B] If Γ ⊢ φ, then from B(a, t, ψ) for every ψ ∈ Γ, infer B(a, t, φ).

Now assume that there is a background set Γ of axioms we are working with. We have available the basic theory of arithmetic (so we can assert statements of the form t1 < t2). [I_P] tells us how perceptions get translated into beliefs. [I_B] is an inference schema that lets us model idealized agents that have their beliefs closed under the proof theory. While normal humans are not deductively closed, this lets us more closely model how deliberate agents such as organizations and more strategic actors reason. Reasoning is performed through a novel first-order modal logic theorem prover, ShadowProver, which uses a technique called shadowing to achieve speed without sacrificing consistency in the system [3]. (The prover is available in both Java and Common Lisp and can be obtained at: https://github.com/naveensundarg/prover. The underlying first-order prover is SNARK, available at: http://www.ai.sri.com/~stickel/snark.html.)
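As a concrete, deliberately simplified illustration of the two schemata, the following Python sketch applies [I_P] and a modus-ponens restriction of [I_B] by forward chaining. This toy is of our own devising, not how ShadowProver (which shadows modal formulae into first-order logic) actually works.

def B(a, t, phi):
    # Belief: agent a believes phi at time t.
    return ("B", a, t, phi)

def P(a, t, phi):
    # Perception: agent a perceives phi at time t.
    return ("P", a, t, phi)

def step(kb):
    # One round of [I_P], then [I_B] restricted to modus ponens inside belief.
    out = set(kb)
    for f in kb:
        if f[0] == "P":                       # [I_P]: P(a,t,phi) yields B(a,t,phi)
            out.add(B(f[1], f[2], f[3]))
    for f in list(out):
        if f[0] == "B" and isinstance(f[3], tuple) and f[3][0] == "->":
            a, t, (_, ante, cons) = f[1], f[2], f[3]
            if B(a, t, ante) in out:          # belief closed under modus ponens
                out.add(B(a, t, cons))
    return out

kb = {P("student", 1, ("->", "rains", "wet")), P("student", 1, "rains")}
assert B("student", 1, "wet") in step(kb)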

Figure 1: Simulation

The simulation is set up as a reasoning problem from a set of given assumptions to a goal (see Figure 2). In the formalization, the system believes that the student believes two embodiments to have the same identity if the embodiments at different times believe some personal object to have the same property (this is the key identity assumption in Figure 2). For instance, assume that the student’s watch is a personal object. At time t1, we have embodiment e1 believing that the watch is stopped, and at time t2 we also have embodiment e2 believing the same. From these assumptions, the system can derive that the student believes that the embodiments are the same (see Figure 1 for an overview).
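Here is how the watch example might be encoded in the same toy representation used above; the embodiment names e1 and e2, the numeric times, and the flattening of the nested beliefs are simplifying assumptions of ours, not the actual input given to ShadowProver.

def B(a, t, phi):
    # Belief: agent a believes phi at time t.
    return ("B", a, t, phi)

watch_stopped = ("stopped", "watch")  # a belief plausibly unique to one agent

# Assumptions: embodiment e1 at time 1 and embodiment e2 at time 2
# both believe that the student's watch is stopped.
kb = {B("e1", 1, watch_stopped), B("e2", 2, watch_stopped)}

# Kernel criterion (flattened): a shared agent-unique belief licenses the
# conclusion that the student will take e1 and e2 to be the same agent.
same = B("e1", 1, watch_stopped) in kb and B("e2", 2, watch_stopped) in kb
print("student takes e1 and e2 to be the same agent:", same)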

Figure 2: Simulation: ShadowProver goes through the problem in seconds.

6 Concluding Remarks; Next Steps

We readily admit to having taken only initial steps toward the formalization of teleportation for artificial agents. The simulation we have presented does suggest to us that the approach can scale, but of course only time and experimentation will tell. Finally, it’s important to note that we haven’t herein sought to address the educational efficacy of our approach, nor the specific learning value of persistent teaching agents across embodiments.

References