
Preference elicitation and inverse reinforcement learning
We state the problem of inverse reinforcement learning in terms of prefe...

Human Preference Scaling with Demonstrations For Deep Reinforcement Learning
The current reward learning from human preferences could be used for res...

What If I Don't Like Any Of The Choices? The Limits of Preference Elicitation for Participatory Algorithm Design
Emerging methods for participatory algorithm design have proposed collec...

Towards Practical Credit Assignment for Deep Reinforcement Learning
Credit assignment is a fundamental problem in reinforcement learning, th...

Pairwise Weights for Temporal Credit Assignment
How much credit (or blame) should an action taken in a state get for a f...

Decomposition Strategies for Constructive Preference Elicitation
We tackle the problem of constructive preference elicitation, that is th...

Preferences Implicit in the State of the World
Reinforcement learning (RL) agents optimize only the features specified ...
Dueling Posterior Sampling for Preference-Based Reinforcement Learning
In preference-based reinforcement learning (RL), an agent interacts with the environment while receiving preferences instead of absolute feedback. While there is increasing research activity in preference-based RL, the design of formal frameworks that admit tractable theoretical analysis remains an open challenge. Building upon ideas from preference-based bandit learning and posterior sampling in RL, we present Dueling Posterior Sampling (DPS), which employs preference-based posterior sampling to learn both the system dynamics and the underlying utility function that governs the user's preferences. Because preference feedback is provided on trajectories rather than individual state/action pairs, we develop a Bayesian approach to solving the credit assignment problem, translating user preferences to a posterior distribution over state/action reward models. We prove an asymptotic no-regret rate for DPS with a Bayesian logistic regression credit assignment model; to our knowledge, this is the first regret guarantee for preference-based RL. We also discuss possible avenues for extending this proof methodology to analyze other credit assignment models. Finally, we evaluate the approach empirically, showing competitive performance against existing baselines.
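A minimal sketch of the logistic-preference credit-assignment idea described in the abstract, assuming linear reward features (each trajectory is summarized by the sum of its per-step feature vectors). All names here are illustrative, and this fits only a MAP point estimate of the reward weights rather than the full posterior that DPS samples from:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def map_reward_weights(diffs, prefs, prior_var=1.0, lr=0.1, iters=500):
    """MAP estimate of reward weights w under a logistic preference
    likelihood P(traj_a > traj_b) = sigmoid(w . (phi_a - phi_b))
    with a Gaussian prior N(0, prior_var * I)."""
    w = np.zeros(diffs.shape[1])
    for _ in range(iters):
        p = sigmoid(diffs @ w)                        # predicted preference probabilities
        grad = diffs.T @ (prefs - p) - w / prior_var  # gradient of the log-posterior
        w += lr * grad
    return w

# Toy example with 2-D features; the "true" utility favours dimension 0.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -0.5])
phi = rng.normal(size=(40, 2))               # 40 trajectories -> 20 preference queries
diffs = phi[0::2] - phi[1::2]                # trajectory-feature differences per pair
prefs = (diffs @ true_w > 0).astype(float)   # noiseless preference labels
w_hat = map_reward_weights(diffs, prefs)
```

Because the likelihood scores whole-trajectory feature differences, the fitted weights spread credit for each preference across every state/action feature in the compared trajectories; sampling `w` from (an approximation of) the posterior instead of taking the MAP would recover the posterior-sampling flavour of DPS.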