Experience enrichment based task independent reward model

05/21/2017
by Min Xu, et al.

In most reinforcement learning approaches, learning is performed by maximizing a cumulative reward that is manually defined for each specific task. In the real world, however, rewards are emergent phenomena arising from the complex interactions between agents and their environments. In this paper, we propose an implicit, generic reward model for reinforcement learning. Unlike rewards that are manually defined for specific tasks, this implicit reward is task independent: it derives solely from the deviation of the agent's current situation from its previous experiences.
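The abstract's core idea, an intrinsic reward measuring deviation from stored experience, can be sketched minimally as below. This is a hypothetical illustration, not the paper's actual model: the class name, buffer design, and the choice of nearest-neighbor Euclidean distance as the deviation measure are all assumptions.

```python
import numpy as np

class ExperienceDeviationReward:
    """Hypothetical sketch: a task-independent intrinsic reward equal to the
    deviation (here, nearest-neighbor distance) of the current state from the
    agent's remembered past experiences. Details are assumptions, not taken
    from the paper."""

    def __init__(self, capacity=1000):
        self.memory = []          # buffer of past state vectors
        self.capacity = capacity

    def reward(self, state):
        state = np.asarray(state, dtype=float)
        if not self.memory:
            r = 1.0               # with no experience yet, everything is novel
        else:
            # deviation = distance to the nearest remembered experience
            dists = [np.linalg.norm(state - m) for m in self.memory]
            r = float(min(dists))
        # enrich the experience buffer with the new state
        self.memory.append(state)
        if len(self.memory) > self.capacity:
            self.memory.pop(0)    # drop the oldest experience
        return r
```

Under this sketch, revisiting a known state yields zero reward, while states far from all stored experience yield a large reward, giving a reward signal that requires no task-specific definition.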

Related research:

- 11/23/2022: Actively Learning Costly Reward Functions for Reinforcement Learning
  Transfer of recent advances in deep reinforcement learning to real-world...

- 05/25/2021: A Comparison of Reward Functions in Q-Learning Applied to a Cart Position Problem
  Growing advancements in reinforcement learning has led to advancements i...

- 02/10/2019: A Bandit Framework for Optimal Selection of Reinforcement Learning Agents
  Deep Reinforcement Learning has been shown to be very successful in comp...

- 04/29/2021: Adapting to Reward Progressivity via Spectral Reinforcement Learning
  In this paper we consider reinforcement learning tasks with progressive ...

- 12/21/2020: Evaluating Agents without Rewards
  Reinforcement learning has enabled agents to solve challenging tasks in ...

- 05/23/2023: Video Prediction Models as Rewards for Reinforcement Learning
  Specifying reward signals that allow agents to learn complex behaviors i...

- 10/18/2022: CEIP: Combining Explicit and Implicit Priors for Reinforcement Learning with Demonstrations
  Although reinforcement learning has found widespread use in dense reward...
