Exploration and preference satisfaction trade-off in reward-free learning

06/08/2021
by Noor Sajid, et al.

Biological agents have meaningful interactions with their environment despite the absence of a reward signal. In such instances, the agent can learn preferred modes of behaviour that lead to predictable states – necessary for survival. In this paper, we pursue the notion that this learnt behaviour can be a consequence of reward-free preference learning that ensures an appropriate trade-off between exploration and preference satisfaction. For this, we introduce a model-based Bayesian agent equipped with a preference learning mechanism (pepper) using conjugate priors. These conjugate priors are used to augment the expected free energy planner for learning preferences over states (or outcomes) across time. Importantly, our approach enables the agent to learn preferences that encourage adaptive behaviour at test time. We illustrate this in the OpenAI Gym FrozenLake and the 3D mini-world environments – with and without volatility. Given a constant environment, these agents learn confident (i.e., precise) preferences and act to satisfy them. Conversely, in a volatile setting, perpetual preference uncertainty maintains exploratory behaviour. Our experiments suggest that learnable (reward-free) preferences entail a trade-off between exploration and preference satisfaction. Pepper offers a straightforward framework for designing adaptive agents when reward functions cannot be predefined, as is often the case in real-world environments.
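The core idea – conjugate priors over states whose precision governs the exploration/exploitation balance – can be sketched with a toy Dirichlet update. The class, state indices, and episode counts below are illustrative assumptions for this sketch, not the authors' implementation; the full method additionally folds these priors into an expected free energy planner.

```python
import numpy as np

# Illustrative sketch (not the paper's code): reward-free preference
# learning with a conjugate Dirichlet prior over states. Observed states
# accumulate pseudo-counts, and the entropy of the resulting preference
# distribution tracks how confident (precise) the learnt preferences are.

class DirichletPreferences:
    def __init__(self, n_states, alpha0=1.0):
        # Symmetric Dirichlet prior; alpha0 = 1 encodes a flat,
        # maximally uncertain preference over states.
        self.alpha = np.full(n_states, alpha0)

    def update(self, state):
        # Conjugate update: each observed state adds one pseudo-count.
        self.alpha[state] += 1.0

    def preferences(self):
        # Posterior mean: the learnt preference distribution over states.
        return self.alpha / self.alpha.sum()

    def entropy(self):
        # Low entropy = precise preferences (satisfy them);
        # high entropy = uncertain preferences (keep exploring).
        p = self.preferences()
        return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(0)
n_states = 8

# Constant environment: the agent keeps reaching the same state,
# so its preferences become confident (low entropy).
stable = DirichletPreferences(n_states)
for _ in range(200):
    stable.update(3)

# Volatile environment: outcomes keep changing, so perpetual
# preference uncertainty (high entropy) maintains exploration.
volatile = DirichletPreferences(n_states)
for _ in range(200):
    volatile.update(int(rng.integers(n_states)))

assert stable.entropy() < volatile.entropy()
```

This mirrors the abstract's two regimes: under a constant environment the posterior concentrates and the agent acts to satisfy its precise preferences, while under volatility the near-uniform posterior keeps the expected free energy dominated by its exploratory term.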

