Generalizing Across Multi-Objective Reward Functions in Deep Reinforcement Learning

09/17/2018
by   Eli Friedman, et al.

Many reinforcement-learning researchers treat the reward function as part of the environment, meaning that the agent can learn the reward of a state only by visiting that state during a rollout. We argue that this is an unnecessary limitation and that the reward function should instead be provided directly to the learning algorithm. The advantage is that the algorithm can then query the reward function for states the agent has not yet encountered. In addition, the algorithm can simultaneously learn policies for multiple reward functions: for each state, it computes the reward under each reward function and adds the resulting transitions to its experience replay dataset. The Hindsight Experience Replay algorithm of Andrychowicz et al. (2017) does exactly this, and learns to generalize across a distribution of sparse, goal-based rewards. We extend this algorithm to linearly-weighted, multi-objective rewards and learn a single policy that generalizes across all linear combinations of the multi-objective reward. Whereas other multi-objective algorithms teach the Q-function to generalize across the reward weights, our algorithm enables the policy itself to generalize, and can thus be used with continuous actions.
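The relabeling idea described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function names (`multi_objective_reward`, `relabel_transitions`), the two stand-in objectives, and the Dirichlet sampling of weight vectors are all assumptions made for the example. The key mechanism is the one the abstract describes: each transition's vector-valued reward is scalarized under several sampled weight vectors w, and one replay entry (with w appended to the state, so the policy can condition on it) is stored per weight vector.

```python
import numpy as np

def multi_objective_reward(state, action):
    """Hypothetical vector-valued reward with two objectives
    (e.g. speed vs. energy cost); one component per objective."""
    speed = float(np.tanh(action))
    energy = -float(action ** 2)
    return np.array([speed, energy])

def relabel_transitions(transitions, n_weights=4, rng=None):
    """For each transition, sample n_weights weight vectors w on the
    probability simplex and store one replay entry per w, with the
    scalar reward w . r(s, a). Appending w to the state lets a single
    policy generalize across all linear combinations of objectives."""
    rng = np.random.default_rng() if rng is None else rng
    replay = []
    for (s, a, s_next) in transitions:
        r_vec = multi_objective_reward(s, a)
        for _ in range(n_weights):
            w = rng.dirichlet(np.ones(len(r_vec)))  # nonnegative, sums to 1
            replay.append((np.append(s, w),          # weight-conditioned state
                           a,
                           float(w @ r_vec),          # scalarized reward
                           np.append(s_next, w)))
    return replay
```

Because the weights are convex, each scalarized reward lies between the smallest and largest components of the reward vector; a single batch of environment interaction thus yields training data for many reward trade-offs at once.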


Related research

- 12/11/2019: What Can Learned Intrinsic Rewards Capture? ("Reinforcement learning agents can include different components, such as ...")
- 12/28/2022: Lexicographic Multi-Objective Reinforcement Learning ("In this work we introduce reinforcement learning techniques for solving ...")
- 09/11/2019: Predicting optimal value functions by interpolating reward functions in scalarized multi-objective reinforcement learning ("A common approach for defining a reward function for Multi-objective Rei...")
- 01/11/2020: Reward Engineering for Object Pick and Place Training ("Robotic grasping is a crucial area of research as it can result in the a...")
- 12/30/2021: MORAL: Aligning AI with Human Norms through Multi-Objective Reinforced Active Learning ("Inferring reward functions from demonstrations and pairwise preferences ...")
- 08/29/2023: Policy composition in reinforcement learning via multi-objective policy optimization ("We enable reinforcement learning agents to learn successful behavior pol...")
- 02/03/2020: Effective Diversity in Population-Based Reinforcement Learning ("Maintaining a population of solutions has been shown to increase explora...")
