Dueling Posterior Sampling for Preference-Based Reinforcement Learning

08/04/2019
by Ellen R. Novoseller, et al.

In preference-based reinforcement learning (RL), an agent interacts with the environment while receiving preferences instead of absolute feedback. While there is increasing research activity in preference-based RL, the design of formal frameworks that admit tractable theoretical analysis remains an open challenge. Building upon ideas from preference-based bandit learning and posterior sampling in RL, we present Dueling Posterior Sampling (DPS), which employs preference-based posterior sampling to learn both the system dynamics and the underlying utility function that governs the user's preferences. Because preference feedback is provided on trajectories rather than individual state/action pairs, we develop a Bayesian approach to solving the credit assignment problem, translating user preferences to a posterior distribution over state/action reward models. We prove an asymptotic no-regret rate for DPS with a Bayesian logistic regression credit assignment model; to our knowledge, this is the first regret guarantee for preference-based RL. We also discuss possible avenues for extending this proof methodology to analyze other credit assignment models. Finally, we evaluate the approach empirically, showing competitive performance against existing baselines.
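To make the algorithmic idea concrete, below is a minimal sketch of a DPS-style loop on a toy tabular MDP: two dynamics/reward models are sampled from their posteriors, the corresponding optimal policies are rolled out, the resulting trajectory pair is labeled by a simulated Bradley-Terry user, and that preference updates a Dirichlet posterior over transitions and a Laplace-approximated Bayesian logistic regression posterior over per-state/action reward weights. The one-hot feature map, the Newton-fit Laplace approximation, the toy environment, and all variable names are illustrative assumptions, not the authors' reference implementation.

```python
# Minimal DPS-style sketch (hypothetical; not the paper's implementation).
# Assumptions: tabular MDP, one-hot (state, action) reward features, Dirichlet
# posterior over transitions, Laplace-approximated Bayesian logistic regression
# over reward weights, and a simulated Bradley-Terry user giving preferences.
import numpy as np

rng = np.random.default_rng(0)
S, A, H = 5, 2, 10                 # states, actions, episode horizon
F = S * A                          # one feature per (state, action) pair

# Ground-truth environment, hidden from the agent.
true_P = rng.dirichlet(np.ones(S), size=(S, A))   # transition kernel, (S, A, S)
true_r = rng.normal(size=F)                       # per-(s, a) utilities

def phi(traj):
    """Sum of one-hot (s, a) features along a trajectory."""
    x = np.zeros(F)
    for s, a, _ in traj:
        x[s * A + a] += 1.0
    return x

def rollout(policy):
    """Execute a time-indexed policy in the true environment."""
    s, traj = 0, []
    for h in range(H):
        a = policy[h, s]
        s_next = rng.choice(S, p=true_P[s, a])
        traj.append((s, a, s_next))
        s = s_next
    return traj

def value_iteration(P, r):
    """Finite-horizon optimal policy for sampled dynamics P and rewards r."""
    R, V = r.reshape(S, A), np.zeros(S)
    policy = np.zeros((H, S), dtype=int)
    for h in reversed(range(H)):
        Q = R + P @ V                      # state-action values, shape (S, A)
        policy[h], V = Q.argmax(axis=1), Q.max(axis=1)
    return policy

def laplace_posterior(X, y, prior_var=1.0, iters=20):
    """MAP weights and covariance for Bayesian logistic regression on
    preference data y ~ Bernoulli(sigmoid(w . x)), fit by Newton's method."""
    w, prior_prec = np.zeros(F), np.eye(F) / prior_var
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (y - p) - w / prior_var
        hess = (X.T * (p * (1.0 - p))) @ X + prior_prec
        w = w + np.linalg.solve(hess, grad)
    p = 1.0 / (1.0 + np.exp(-X @ w))
    cov = np.linalg.inv((X.T * (p * (1.0 - p))) @ X + prior_prec)
    return w, cov

dirichlet = np.ones((S, A, S))     # transition-posterior pseudo-counts
X_pref, y_pref = [], []            # preference data: feature differences, labels

for it in range(100):
    # Sample two models (dynamics + reward weights) from the current posterior.
    P1, P2 = (np.array([[rng.dirichlet(dirichlet[s, a]) for a in range(A)]
                        for s in range(S)]) for _ in range(2))
    if X_pref:
        w_map, cov = laplace_posterior(np.array(X_pref), np.array(y_pref))
        r1, r2 = rng.multivariate_normal(w_map, cov, size=2)
    else:
        r1, r2 = rng.normal(size=(2, F))   # draws from the prior before any data

    # Roll out the two sampled policies and query a (simulated) preference.
    tau1 = rollout(value_iteration(P1, r1))
    tau2 = rollout(value_iteration(P2, r2))
    diff = phi(tau1) - phi(tau2)
    pref = float(rng.random() < 1.0 / (1.0 + np.exp(-true_r @ diff)))

    # Credit assignment: the trajectory-level label updates the reward posterior;
    # observed transitions update the dynamics posterior.
    X_pref.append(diff)
    y_pref.append(pref)
    for s, a, s_next in tau1 + tau2:
        dirichlet[s, a, s_next] += 1.0
```

In this sketch the preference-based posterior update is what performs credit assignment: each trajectory-level comparison becomes evidence about the per-step reward weights, while value iteration simply stands in for whatever planner is applied to each sampled model.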
