Our paradigm for the use of artificial agents to teach requires, among other things, that they persist through time in their interaction with human students, in such a way that they “teleport” or “migrate” from an embodiment at one time to a different embodiment at a later time. In this short paper, we report on initial steps toward the formalization of such teleportation, in order to enable an overseeing AI system to establish, mechanically and verifiably, that the human students in question will likely believe that the very same artificial agent has persisted across such times despite the different embodiments.
The plan for the sequel is straightforward. After encapsulating our paradigm for the deployment of artificial agents in service of learning, and noting that the “teleportation”/“migration” problem has hitherto been treated only informally, we convey the kernel of our approach to formalizing agent teleportation between different embodiments, formalize this kernel to a degree sufficient to produce an initial simulation, and wrap up with some final remarks.
2 Our Paradigm & Teleportation
A crucial part of our novel paradigm for artificial agents that teach is the engineering of a class of AIs, crucially powered by cognitive logics, able to persist through days and weeks in their interaction with the humans whose education is to be thereby enhanced. The artificial agents in our paradigm are able to seamlessly “teleport” between the heterogeneous environments in which a human learner may find herself as time unfolds; this capacity is intended to provide a continuous educational experience to the human student, and offers the possibility of human-machine friendship.
In short, our agents need to be “teleportative.” That is, a user should be able to interact with the agent across multiple hardware environments while retaining the impression of a continuous, uninterrupted interaction with the very same agent. This helps to reinforce the possibility of a persistent, trusting relationship between human and machine.
3 Prior Accounts of Teleportation of Artificial Agents
There is some excellent and interesting prior work on teleporting artificial agents. Some studies explore how the consistency of a migrating agent’s memory affects a user’s perception of a continuous identity. Others shed light on visual cues useful for convincing users of an agent’s teleportation. In addition, excellent progress has been made toward the design of migrating agents and the testing of real-world implementations of such agents. Unfortunately for our purposes, the prior art is informal. Our goal is to capture teleportation formally, and on the strength of that formalization to enable an overseeing AI system to prove, or minimally justify rigorously, that the teleportation in question is indeed believable.
4 The Kernel of the Formalization
In the longstanding quasi-technical literature on personal identity in philosophy, there is a strong tradition of trying to work out a rigorous account of when person p1 at time t1 is identical with person p2 at a later time t2 (t1 ≠ t2), on the basis of shared memories between p1 and p2.
The goal of our initial formalization is to build a system that can find a proof for when it believes that a student believes two embodied agents are the same. The system can conclude that the student believes two embodiments to be the same if the system can find a proof that it believes that the student believes that the two embodiments hold, at specific times, a belief that cannot be held by more than one agent. If the system fails to find such a proof or argument, then the system can take corrective actions to make it more explicit to the human that the embodiments are the same. Note that this formalization requires the system to understand beliefs of agents which might themselves be about beliefs of other agents (and so on).
5 Initial Formalization and Simulation
The requirement that the system understand the student’s beliefs about other embodied agents’ beliefs implies that we need a sufficiently expressive system. BDI (belief/desire/intention) logics have a long tradition of being used to model such agents. For our formalization, we use a system that is a proper superset (with respect to modal operators and inference schemata) of such logics.
We specifically use the formal system of Govindarajulu and Bringsjord. The system is a modal extension of first-order logic, and has the following modal operators: B, for belief, and P, for perception. The syntax and inference schemata of the system are shown below. φ is a meta-variable for formulae, and A is any first-order atomic formula. Assume that we have at hand a first-order alphabet augmented with a finite set of constant symbols for agents and a countably infinite set of constant symbols for times. (Sometimes we use a for agents and t for times below.) x, y, … are first-order variables. The grammar is as follows:

φ ::= A | ¬φ | φ ∧ ψ | φ ∨ ψ | φ → ψ | ∀x. φ | P(a, t, φ) | B(a, t, φ)
B(a, t, φ) stands for agent a at time t believing φ, and P(a, t, φ) stands for agent a at time t perceiving φ. The base inference schemata are given here:

[I_P]  From P(a, t, φ), infer B(a, t, φ).
[I_B]  If φ1, …, φn ⊢ ψ, then from B(a, t, φ1), …, B(a, t, φn), infer B(a, t, ψ).
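As a purely illustrative sketch (not the authors’ implementation, and not ShadowProver’s internal representation), the operator syntax can be mirrored with simple Python dataclasses; all names here are our own:

```python
from dataclasses import dataclass

# Hypothetical encoding of the B/P operator syntax; class and field names
# are ours, chosen only to mirror B(a, t, phi) and P(a, t, phi).

@dataclass(frozen=True)
class Atom:
    pred: str        # predicate symbol, e.g. "stopped"
    args: tuple      # argument terms, e.g. ("watch",)

@dataclass(frozen=True)
class B:             # B(a, t, phi): agent a believes phi at time t
    agent: str
    time: int
    phi: object      # any formula, including another B or P (nesting)

@dataclass(frozen=True)
class P:             # P(a, t, phi): agent a perceives phi at time t
    agent: str
    time: int
    phi: object

# Nested belief: the system believes that the student believes the watch
# is stopped -- the kind of iterated-belief formula the paper requires.
f = B("system", 1, B("student", 1, Atom("stopped", ("watch",))))
```

Because the dataclasses are frozen, structurally identical formulae compare equal, which is convenient when checking whether a derived formula matches a goal.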
Now assume that there is a background set Γ of axioms we are working with. We have available the basic theory of arithmetic (so we can assert statements of the form t1 < t2). The perception-to-belief schema tells us how perceptions get translated into beliefs. The closure schema lets us model idealized agents whose beliefs are closed under the proof theory. While normal humans are not deductively closed, this lets us model more closely how deliberate agents such as organizations and more strategic actors reason. Reasoning is performed through a novel first-order modal logic theorem prover, ShadowProver, which uses a technique called shadowing to achieve speed without sacrificing consistency in the system. (The prover is available in both Java and Common Lisp and can be obtained at https://github.com/naveensundarg/prover; the underlying first-order prover is SNARK, available at http://www.ai.sri.com/~stickel/snark.html.)
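A minimal sketch of how these two schemata act on a knowledge base may help. This is our own toy forward-chainer over tuple-encoded formulas, not ShadowProver’s shadowing technique; the closure schema is restricted here to a single modus-ponens instance:

```python
# Toy illustration (our own, not ShadowProver): formulas are tuples of the
# form ("P", agent, time, phi) or ("B", agent, time, phi), where phi is a
# string or an ("->", antecedent, consequent) tuple.

def step(formulas):
    """Apply one round of the perception-to-belief schema and of a
    modus-ponens instance of belief closure."""
    out = set(formulas)
    for f in formulas:
        # Perception-to-belief: from P(a, t, phi) infer B(a, t, phi).
        if f[0] == "P":
            out.add(("B",) + f[1:])
        # Closure (modus ponens inside belief): from B(a, t, phi) and
        # B(a, t, phi -> psi), infer B(a, t, psi).
        if f[0] == "B" and isinstance(f[3], tuple) and f[3][0] == "->":
            _, phi, psi = f[3]
            if ("B", f[1], f[2], phi) in formulas:
                out.add(("B", f[1], f[2], psi))
    return out

kb = {("P", "student", 1, "stopped(watch)"),
      ("B", "student", 1, ("->", "stopped(watch)", "needs_repair(watch)"))}
kb = step(step(kb))  # two rounds: perception becomes belief, then closure fires
```

After two rounds the student’s perception has become a belief, and the conditional belief has been discharged into a belief in the consequent.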
The simulation is set up as a reasoning problem from a set of given assumptions to a goal (see Figure 2). In the formalization shown below, the system believes that the student believes two embodiments to have the same identity if the embodiments at different times believe some personal object to have the same property (see the assumptions). For instance, assume that the student’s watch is a personal object. At time t1, we have one embodiment believing that the watch is stopped, and at time t2 we also have the other embodiment believing the same. From these assumptions, the system can derive that the student believes that the embodiments are the same (see Figure 1 for an overview).
6 Concluding Remarks; Next Steps
We readily admit to having taken only initial steps toward the formalization of teleportation for artificial agents. The simulation we have presented does suggest to us that the approach can scale, but of course only time and experimentation will tell. Finally, it is important to note that we have not herein sought to address the educational efficacy of our approach, nor the specific learning value of persistent teaching agents across embodiments.
-  Aylett, R., Kriegel, M., Wallace, I., Segura, E.M., Mecurio, J., Nylander, S., Vargas, P.: Do I Remember You? Memory and Identity in Multiple Embodiments. In: Proceedings of the 22nd IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2013). IEEE, Gyeongju, South Korea (2013), DOI: 10.1109/ROMAN.2013.6628435
-  Gomes, P.F., Segura, E.M., Cramer, H., Paiva, T., Paiva, A., Holmquist, L.E.: ViPleo and PhyPleo: Artificial Pet with Two Embodiments. In: Proceedings of the 8th International Conference on Advances in Computer Entertainment Technology, pp. 3:1–3:8 (2011), http://doi.acm.org/10.1145/2071423.2071427
-  Govindarajulu, N.S., Bringsjord, S.: On Automating the Doctrine of Double Effect. In: Sierra, C. (ed.) Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI-17, pp. 4722–4730. Melbourne, Australia (2017), https://doi.org/10.24963/ijcai.2017/658, preprint available at https://arxiv.org/abs/1703.08922
-  Govindarajulu, N.S., Bringsjord, S.: Strength Factors: An Uncertainty System for a Quantified Modal Logic (2017), https://arxiv.org/abs/1705.10726, presented at Workshop on Logical Foundations for Uncertainty and Machine Learning at IJCAI 2017, Melbourne, Australia
-  Hassani, K., Lee, W.S.: On Designing Migrating Agents. In: SIGGRAPH Asia 2014 Workshop on Autonomous Virtual Humans and Social Robot for Telepresence (SIGGRAPH ASIA ’14), pp. 1–10 (2014), http://dl.acm.org/citation.cfm?doid=2668956.2668963
-  Koay, K.L., Syrdal, D.S., Walters, M.L., Dautenhahn, K.: A User Study on Visualization of Agent Migration Between Two Companion Robots. In: HCII ’09: Proceedings of the 13th International Conference on Human-Computer Interaction (2009), http://uhra.herts.ac.uk/handle/2299/3977
-  Wooldridge, M.: An Introduction to MultiAgent Systems. Wiley, Chichester, UK (2002)