Sample-Efficient Multi-Objective Learning via Generalized Policy Improvement Prioritization

01/18/2023
by Lucas N. Alegre, et al.

Multi-objective reinforcement learning (MORL) algorithms tackle sequential decision problems in which agents may have different preferences over (possibly conflicting) reward functions. Such algorithms often learn a set of policies (each optimized for a particular agent preference) that can later be used to solve problems with novel preferences. We introduce a novel algorithm that uses Generalized Policy Improvement (GPI) to define principled, formally derived prioritization schemes that improve sample efficiency. These schemes implement active-learning strategies by which the agent can (i) identify the most promising preferences/objectives to train on at each moment, in order to more rapidly solve a given MORL problem; and (ii) identify which previous experiences are most relevant when learning a policy for a particular agent preference, via a novel Dyna-style MORL method. We prove that our algorithm is guaranteed to converge to an optimal solution in a finite number of steps, or to an ϵ-optimal solution (for a bounded ϵ) if the agent is limited and can only identify possibly sub-optimal policies. We also prove that our method monotonically improves the quality of its partial solutions throughout learning. Finally, we introduce a bound that characterizes the maximum utility loss (with respect to the optimal solution) incurred by the partial solutions computed by our method during learning. We empirically show that our method outperforms state-of-the-art MORL algorithms in challenging multi-objective tasks with both discrete and continuous state spaces.
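To make the idea of GPI-based prioritization concrete, the Python/NumPy sketch below illustrates the general mechanism under simplifying assumptions: a GPI policy scalarizes a set of multi-objective action values with a preference weight vector and acts greedily across all previously learned policies, and candidate preferences can be ranked by how much the GPI estimate exceeds the value already attained for them. The function names, array shapes, and the specific priority heuristic are illustrative assumptions and do not reproduce the paper's exact prioritization scheme.

```python
import numpy as np


def gpi_action(q_set, w):
    """Generalized Policy Improvement (GPI) action selection.

    q_set: array of shape (n_policies, n_actions, n_objectives) holding the
           multi-objective action values of each previously learned policy
           at the current state (e.g., successor features).
    w:     preference weight vector of shape (n_objectives,).

    Returns the action maximizing, over all policies pi, w . q_pi(s, a).
    """
    scalar_q = q_set @ w                       # (n_policies, n_actions)
    return int(np.argmax(scalar_q.max(axis=0)))


def prioritize_preference(q_set_s0, attained_values, candidate_ws):
    """Toy preference prioritization inspired by GPI (illustrative only).

    Ranks candidate preference vectors by the gap between the value the GPI
    policy is estimated to achieve at the initial state and the value already
    attained for that preference, then returns the candidate with the largest
    estimated room for improvement.

    q_set_s0:        (n_policies, n_actions, n_objectives) action values at
                     the initial state for each learned policy.
    attained_values: dict mapping tuple(w) -> best scalarized return so far
                     (missing entries are treated as -inf, i.e., untried).
    candidate_ws:    list of candidate preference weight vectors.
    """
    def gpi_value(w):
        # Estimated value of the GPI policy at the initial state under w.
        return float((q_set_s0 @ w).max())

    gaps = [gpi_value(w) - attained_values.get(tuple(w), -np.inf)
            for w in candidate_ws]
    return candidate_ws[int(np.argmax(gaps))]
```

In the paper the priorities are formally derived from GPI; the gap heuristic above merely illustrates the active-learning flavor of choosing the preference for which further training is estimated to pay off the most.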

Related research

08/21/2019
A Generalized Algorithm for Multi-Objective Reinforcement Learning and Policy Adaptation
We introduce a new algorithm for multi-objective reinforcement learning ...

06/22/2022
Optimistic Linear Support and Successor Features as a Basis for Optimal Policy Transfer
In many real-world applications, reinforcement learning (RL) agents migh...

12/30/2021
MORAL: Aligning AI with Human Norms through Multi-Objective Reinforced Active Learning
Inferring reward functions from demonstrations and pairwise preferences ...

11/25/2020
Accommodating Picky Customers: Regret Bound and Exploration Complexity for Multi-Objective Reinforcement Learning
In this paper we consider multi-objective reinforcement learning where t...

04/30/2023
Scaling Pareto-Efficient Decision Making Via Offline Multi-Objective RL
The goal of multi-objective reinforcement learning (MORL) is to learn po...

01/15/2014
An Anytime Algorithm for Optimal Coalition Structure Generation
Coalition formation is a fundamental type of interaction that involves t...

02/21/2018
Ordered Preference Elicitation Strategies for Supporting Multi-Objective Decision Making
In multi-objective decision planning and learning, much attention is pai...
