A Proposal for Intelligent Agents with Episodic Memory

by David Murphy, et al.

In the future, we can expect that artificial intelligent agents, once deployed, will be required to learn continually from their experience during their operational lifetime. Such agents will also need to communicate with humans and other agents about the content of that experience: to pass along what they have learned, to explain their actions in specific circumstances, or simply to relate more naturally to humans about experiences the agent acquires that are not necessarily related to its assigned tasks. We argue that to support these goals, an agent would benefit from an episodic memory; that is, a memory that encodes the agent's experience in such a way that the agent can relive the experience, communicate about it, and use its past experience, inclusive of the agent's own past actions, to learn more effective models and policies. In this short paper, we propose one potential approach to providing an AI agent with such capabilities. We draw upon the ever-growing body of work examining the function and operation of the Medial Temporal Lobe (MTL) in mammals to guide us in adding an episodic memory capability to an AI agent composed of artificial neural networks (ANNs). On that basis, we highlight important aspects to be considered in the memory organization, and we propose an architecture combining ANNs with standard Computer Science techniques to support the storage and retrieval of episodic memories. Although this is initial work, we hope this short paper can spark discussion around the creation of intelligent agents with memory or, at least, provide a different point of view on the subject.
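To make the proposed combination concrete, here is a minimal sketch of what an episodic store pairing ANN-produced encodings with a standard retrieval technique might look like. This is purely illustrative and not the authors' implementation: the `Episode` fields, the `EpisodicMemory` class, and the use of cosine-similarity nearest-neighbor search are all assumptions chosen for the sketch; in practice the embeddings would come from the agent's neural encoder.

```python
import math
from dataclasses import dataclass


@dataclass
class Episode:
    # What happened: the observation, the agent's own action, and the
    # outcome, so the experience can later be recalled and communicated.
    # (Hypothetical fields, chosen for illustration.)
    observation: str
    action: str
    outcome: str


class EpisodicMemory:
    """Illustrative key-value episodic store: a fixed-size embedding (the
    kind an ANN encoder might produce) indexes each episode; retrieval is
    cosine-similarity nearest-neighbor search, a standard CS technique."""

    def __init__(self):
        self._keys = []      # embedding vectors (lists of floats)
        self._episodes = []  # parallel list of stored Episode records

    def store(self, embedding, episode):
        self._keys.append(embedding)
        self._episodes.append(episode)

    def retrieve(self, query, k=1):
        # Rank all stored episodes by cosine similarity to the query
        # embedding and return the k closest ones.
        def cos(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(x * x for x in b))
            return dot / (na * nb) if na and nb else 0.0

        ranked = sorted(zip(self._keys, self._episodes),
                        key=lambda kv: cos(query, kv[0]), reverse=True)
        return [ep for _, ep in ranked[:k]]


# Usage: store two experiences, then recall the one nearest a query cue.
mem = EpisodicMemory()
mem.store([1.0, 0.0], Episode("door locked", "tried key A", "still locked"))
mem.store([0.0, 1.0], Episode("door locked", "tried key B", "door opened"))
recalled = mem.retrieve([0.1, 0.9], k=1)
```

A linear scan suffices for a sketch; a deployed agent would likely replace it with an approximate nearest-neighbor index, and would need a policy for consolidation and forgetting, which the paper leaves to the memory-organization discussion.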


