Deep Interactive Bayesian Reinforcement Learning via Meta-Learning

by Luisa Zintgraf et al.
University of Oxford

Agents that interact with other agents often do not know a priori what the other agents' strategies are, but must maximise their own online return while interacting with and learning about others. The optimal adaptive behaviour under uncertainty over the other agents' strategies, with respect to some prior, can in principle be computed using the Interactive Bayesian Reinforcement Learning framework. Unfortunately, doing so is intractable in most settings, and existing approximation methods are restricted to small tasks. To overcome this, we propose to meta-learn approximate belief inference and Bayes-optimal behaviour for a given prior. To model beliefs over other agents, we combine sequential and hierarchical Variational Auto-Encoders, and meta-train this inference model alongside the policy. We show empirically that our approach outperforms existing methods that use a model-free approach, sample from the approximate posterior, maintain memory-free models of others, or do not fully utilise the known structure of the environment.
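The abstract describes a sequential encoder that maps the interaction history so far to a belief over the other agent's strategy, from which a latent is sampled and decoded into predictions about that agent. The following is a minimal forward-pass sketch of that idea, not the paper's architecture: the dimensions, weight shapes, and the single-layer recurrent cell are all illustrative assumptions, and the weights are random stand-ins for meta-trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): per-step observation encoding,
# recurrent state, latent belief, and the other agent's action space.
OBS_DIM, HID_DIM, LAT_DIM, N_ACTIONS = 4, 8, 2, 3

# Random weights stand in for parameters that would be meta-trained
# alongside the policy.
W_in = rng.normal(0.0, 0.1, (HID_DIM, OBS_DIM + HID_DIM))
W_mu = rng.normal(0.0, 0.1, (LAT_DIM, HID_DIM))
W_lv = rng.normal(0.0, 0.1, (LAT_DIM, HID_DIM))
W_dec = rng.normal(0.0, 0.1, (N_ACTIONS, LAT_DIM))

def encode(trajectory):
    """Sequential encoder: fold the interaction history into a recurrent
    state, then read off Gaussian belief parameters (mean, log-variance)."""
    h = np.zeros(HID_DIM)
    for x in trajectory:
        h = np.tanh(W_in @ np.concatenate([x, h]))
    return W_mu @ h, W_lv @ h

def sample_belief(mu, logvar):
    """Reparameterised sample z = mu + sigma * eps from the belief."""
    return mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)

def decode(z):
    """Decoder: turn a sampled belief into a distribution over the other
    agent's next action (softmax over logits)."""
    logits = W_dec @ z
    e = np.exp(logits - logits.max())
    return e / e.sum()

# One simulated interaction history of 5 steps.
trajectory = rng.normal(size=(5, OBS_DIM))
mu, logvar = encode(trajectory)
probs = decode(sample_belief(mu, logvar))
print(probs)
```

In the full method this belief would condition the policy, and the encoder, decoder, and policy would be trained jointly; here only the untrained inference pass is shown.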


Related research

- Meta-Model-Based Meta-Policy Optimization — Model-based reinforcement learning (MBRL) has been applied to meta-learn...
- VariBAD: A Very Good Method for Bayes-Adaptive Deep RL via Meta-Learning — Trading off exploration and exploitation in an unknown environment is ke...
- Recasting Gradient-Based Meta-Learning as Hierarchical Bayes — Meta-learning allows an intelligent agent to leverage prior learning epi...
- Meta-learning of Sequential Strategies — In this report we review memory-based meta-learning as a tool for buildi...
- Meta-trained agents implement Bayes-optimal agents — Memory-based meta-learning is a powerful technique to build agents that ...
- Model-Free Opponent Shaping — In general-sum games, the interaction of self-interested learning agents...
- Beyond Bayes-optimality: meta-learning what you know you don't know — Meta-training agents with memory has been shown to culminate in Bayes-op...
