Understanding the origin of information-seeking exploration in probabilistic objectives for control

03/11/2021
by   Beren Millidge, et al.

The exploration-exploitation trade-off is central to descriptions of adaptive behaviour in fields ranging from machine learning to biology to economics. One approach to resolving this trade-off is to equip agents with, or propose that they possess, an intrinsic 'exploratory drive', often implemented as maximization of the agent's information gain about the world – an approach that has been widely studied in machine learning and cognitive science. In this paper we mathematically investigate the nature and meaning of such approaches and demonstrate that this combination of utility-maximizing and information-seeking behaviour arises from the minimization of an entirely different class of objectives, which we call divergence objectives. We propose a dichotomy between the objective functions underlying adaptive behaviour: evidence objectives, which correspond to the well-known reward- or utility-maximizing objectives in the literature, and divergence objectives, which instead seek to minimize the divergence between the agent's expected and desired futures. We argue that this new class of divergence objectives could form the mathematical foundation for a much richer understanding of the exploratory components of adaptive and intelligent action, beyond simply greedy utility maximization.
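The contrast between the two objective classes can be illustrated numerically. Below is a minimal sketch (not the paper's exact formulation; the distributions and variable names are illustrative assumptions): a divergence objective scores a predicted distribution Q over future outcomes against a desired distribution P via KL[Q || P], and this divergence decomposes into a utility-like term plus an entropy term, so minimizing it rewards both expected "utility" and exploratory spread, whereas a pure evidence/utility objective keeps only the first term.

```python
import numpy as np

def kl(q, p):
    """KL divergence KL[q || p] for discrete distributions."""
    q, p = np.asarray(q, dtype=float), np.asarray(p, dtype=float)
    return float(np.sum(q * np.log(q / p)))

def entropy(q):
    """Shannon entropy H[q] of a discrete distribution (in nats)."""
    q = np.asarray(q, dtype=float)
    return float(-np.sum(q * np.log(q)))

# Illustrative distributions (assumptions, not from the paper):
# the agent's predicted future under some policy, and a desired future.
q_pred = np.array([0.7, 0.2, 0.1])
p_desired = np.array([0.5, 0.3, 0.2])

# Decomposition: KL[Q || P] = -E_Q[log P] - H[Q].
# Minimizing the divergence therefore maximizes expected log-"utility"
# (E_Q[log P]) while also maximizing the entropy of Q -- an exploratory
# term absent from a greedy utility-maximizing (evidence) objective.
expected_log_utility = float(np.sum(q_pred * np.log(p_desired)))
assert np.isclose(kl(q_pred, p_desired),
                  -expected_log_utility - entropy(q_pred))
```

The decomposition holds for any pair of discrete distributions with matching support; the entropy term is what gives divergence objectives their intrinsic information-seeking character.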


