
Memory-Based Meta-Learning on Non-Stationary Distributions

by Tim Genewein, et al.

Memory-based meta-learning is a technique for approximating Bayes-optimal predictors. Under fairly general conditions, minimizing sequential prediction error, measured by the log loss, leads to implicit meta-learning. The goal of this work is to investigate how far this interpretation can be realized by current sequence prediction models and training regimes. The focus is on piecewise stationary sources with unobserved switching-points, which arguably capture an important characteristic of natural language and of action-observation sequences in partially observable environments. We show that various types of memory-based neural models, including Transformers, LSTMs, and RNNs, can learn to accurately approximate known Bayes-optimal algorithms and behave as if performing Bayesian inference over the latent switching-points and the latent parameters governing the data distribution within each segment.
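To make the setting concrete, below is a minimal, self-contained sketch of the kind of data source the abstract describes: a piecewise stationary Bernoulli sequence whose bias is redrawn at unobserved switching-points. For comparison, it scores the Krichevsky-Trofimov (KT) estimator, a classical Bayes-optimal predictor for a *stationary* Bernoulli source, via cumulative log loss. All function names and the specific switch rate are illustrative assumptions, not taken from the paper.

```python
import math
import random

def sample_piecewise_bernoulli(length, switch_prob, rng):
    """Sample a binary sequence whose Bernoulli bias is redrawn
    uniformly at unobserved switching-points (occurring at rate switch_prob)."""
    theta = rng.random()  # latent parameter of the current segment
    seq = []
    for _ in range(length):
        if rng.random() < switch_prob:
            theta = rng.random()  # unobserved switch: resample the parameter
        seq.append(1 if rng.random() < theta else 0)
    return seq

def kt_log_loss(seq):
    """Cumulative log loss (in nats) of the Krichevsky-Trofimov estimator,
    which is Bayes-optimal for a stationary Bernoulli source with a
    Beta(1/2, 1/2) prior, but mismatched to a switching source."""
    ones = zeros = 0
    loss = 0.0
    for x in seq:
        p_one = (ones + 0.5) / (ones + zeros + 1.0)  # KT predictive probability
        loss -= math.log(p_one if x == 1 else 1.0 - p_one)
        ones += x
        zeros += 1 - x
    return loss

rng = random.Random(0)
seq = sample_piecewise_bernoulli(1000, switch_prob=0.01, rng=rng)
print(kt_log_loss(seq) / len(seq))  # average per-step log loss in nats
```

A Bayes-optimal predictor for the switching source would additionally mix over the latent switch positions; the paper's claim is that sequence models trained to minimize this same log loss learn to approximate that richer inference implicitly.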




Related papers:

- Meta-trained agents implement Bayes-optimal agents
- Meta-learning of Sequential Strategies
- Been There, Done That: Meta-Learning with Episodic Recall
- Non-stationary Bandits and Meta-Learning with a Small Set of Optimal Arms
- Meta Particle Flow for Sequential Bayesian Inference
- Meta-Learned Models of Cognition
- Beyond Bayes-optimality: meta-learning what you know you don't know