RecallM: An Architecture for Temporal Context Understanding and Question Answering

07/06/2023
by Brandon Kynoch, et al.

The ideal long-term memory mechanism for Large Language Model (LLM) based chatbots would lay the foundation for continual learning and complex reasoning, and would allow sequential and temporal dependencies to be learned. Creating such a memory mechanism is an extremely challenging problem. In this paper we explore several methods of achieving the effect of long-term memory. We propose a new architecture focused on creating adaptable and updatable long-term memory for AGI systems. Through various experiments we demonstrate the benefits of the RecallM architecture, in particular the improved temporal understanding of knowledge that it provides.
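The abstract alludes to memory that is both updatable and temporally aware: new knowledge must supersede stale knowledge without erasing the record of what was believed before. The sketch below is an illustration of that idea only, not RecallM's actual implementation; the Fact, TemporalMemory, remember, and recall names are assumptions introduced for this example.

    # A minimal sketch (NOT RecallM's implementation) of temporally-aware
    # memory: every stored statement is timestamped, and the newest
    # statement about a subject supersedes older ones at recall time.
    import time
    from dataclasses import dataclass, field

    @dataclass
    class Fact:
        subject: str
        statement: str
        timestamp: float  # when the system learned this statement

    @dataclass
    class TemporalMemory:
        # subject -> chronological history of statements about that subject
        facts: dict[str, list[Fact]] = field(default_factory=dict)

        def remember(self, subject: str, statement: str) -> None:
            """Store a new statement without discarding the old history."""
            self.facts.setdefault(subject, []).append(
                Fact(subject, statement, time.time()))

        def recall(self, subject: str) -> str | None:
            """Return the newest statement; the kept history would also
            allow point-in-time ("what was true then") queries."""
            history = self.facts.get(subject)
            return history[-1].statement if history else None

    memory = TemporalMemory()
    memory.remember("office", "The office is in London.")
    memory.remember("office", "The office has moved to Berlin.")
    print(memory.recall("office"))  # -> "The office has moved to Berlin."

A real system would retrieve over a vector index or knowledge graph rather than exact subject keys, but keeping a timestamped history is what lets a chatbot distinguish what is true now from what was true earlier.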

Related research

Memory Networks (10/15/2014)
We describe a new class of learning models called memory networks. Memor...

MovieChat: From Dense Token to Sparse Memory for Long Video Understanding (07/31/2023)
Recently, integrating video foundation models and large language models ...

A Brain-inspired Memory Transformation based Differentiable Neural Computer for Reasoning-based Question Answering (01/07/2023)
Reasoning and question answering as a basic cognitive function for human...

A Growing Long-term Episodic & Semantic Memory (10/20/2016)
The long-term memory of most connectionist systems lies entirely in the ...
A New Training Algorithm for Kanerva's Sparse Distributed Memory (07/22/2012)
The Sparse Distributed Memory proposed by Pentti Kanerva (SDM for short) ...
Technical Report: Temporal Aggregate Representations (06/06/2021)
This technical report extends our work presented in [9] with more experi...

Learning Temporal Dependencies in Data Using a DBN-BLSTM (12/18/2014)
Since the advent of deep learning, it has been used to solve various pro...
