Reasoning about Unforeseen Possibilities During Policy Learning

by Craig Innes, et al.

Methods for learning optimal policies in autonomous agents often assume that the way the domain is conceptualised---its possible states and actions and their causal structure---is known in advance and does not change during learning. This is an unrealistic assumption in many scenarios, because new evidence can reveal important information about what is possible: possibilities that the agent was not aware existed prior to learning. We present a model of an agent which both discovers and learns to exploit unforeseen possibilities using two sources of evidence: direct interaction with the world and communication with a domain expert. We use a combination of probabilistic and symbolic reasoning to estimate all components of the decision problem, including its set of random variables and their causal dependencies. Agent simulations show that the agent converges on optimal policies even when it starts out unaware of factors that are critical to behaving optimally.
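The core loop the abstract describes — act, observe evidence that may mention variables the agent was unaware of, expand the agent's conceptualisation of the domain, and keep learning — can be illustrated with a toy sketch. This is not the paper's actual algorithm (which combines probabilistic and symbolic reasoning over a full decision problem); it is a minimal tabular Q-learner, with hypothetical names like `UnawareAgent` and `observe_vars`, that shows only the discovery-and-expansion pattern: the agent's state abstraction covers just the variables it currently knows about, and grows when evidence reveals a new one.

```python
from collections import defaultdict


class UnawareAgent:
    """Toy Q-learner that starts unaware of some state variables.

    Hypothetical illustration: states are abstracted onto the set of
    variables the agent currently knows about. When an observation
    contains an unforeseen variable, the agent adds it to its
    conceptualisation; Q-values indexed by the old, coarser states are
    simply abandoned and relearned under the finer abstraction.
    """

    def __init__(self, actions, alpha=0.5, gamma=0.9):
        self.known_vars = set()          # variables the agent is aware of
        self.actions = list(actions)
        self.q = defaultdict(float)      # (abstract_state, action) -> value
        self.alpha, self.gamma = alpha, gamma

    def _abstract(self, obs):
        # Project the full observation onto the known variables only.
        return tuple(sorted((v, obs[v]) for v in self.known_vars))

    def observe_vars(self, obs):
        # Discovery step: evidence (from the world or a domain expert)
        # may mention variables the agent never knew existed.
        new = set(obs) - self.known_vars
        self.known_vars |= new
        return new

    def update(self, obs, action, reward, next_obs):
        # Standard one-step Q-learning update over the abstract states.
        s, s2 = self._abstract(obs), self._abstract(next_obs)
        best = max(self.q[(s2, a)] for a in self.actions)
        self.q[(s, action)] += self.alpha * (
            reward + self.gamma * best - self.q[(s, action)])
```

Under this sketch, an agent that is initially unaware of a variable treats all observations differing only in that variable as the same state; once `observe_vars` surfaces it, subsequent updates distinguish them, which is the precondition for converging on a policy that exploits the newly discovered factor.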




