Delayed Rewards Calibration via Reward Empirical Sufficiency

by Yixuan Liu, et al.

Appropriate credit assignment for delayed rewards is a fundamental challenge in reinforcement learning. To tackle this problem, we introduce a delayed-reward calibration paradigm inspired by a classification perspective. We hypothesize that well-represented state vectors share similarities with each other, since they contain the same or equivalent essential information. To this end, we define an empirical sufficient distribution, where the state vectors within the distribution lead agents to environmental reward signals in subsequent steps. A purify-trained classifier is then designed to obtain this distribution and generate the calibrated rewards. We examine the correctness of sufficient-state extraction by tracking the extraction in real time and by building different reward functions across environments. The results demonstrate that the classifier can generate timely and accurate calibrated rewards, and that these rewards make model training more efficient. Finally, we observe and discuss that the sufficient states extracted by our model resonate with human observations.
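The core idea of the abstract can be illustrated with a minimal sketch. The assumptions here are ours, not the paper's: states that are followed by an environment reward within a short horizon are labeled "empirically sufficient," and a simple nearest-centroid classifier (a stand-in for the paper's purify-trained classifier) scores new states to produce a calibrated, dense reward in place of the delayed one. The function names, the `horizon` parameter, and the centroid-based scoring are all illustrative choices.

```python
import numpy as np

def label_sufficient_states(states, env_rewards, horizon=3):
    """Label each state 1 if an environment reward arrives within
    `horizon` steps of it, else 0 (a toy notion of empirical sufficiency)."""
    labels = np.zeros(len(states), dtype=int)
    for t in range(len(states)):
        if any(r != 0 for r in env_rewards[t:t + horizon]):
            labels[t] = 1
    return labels

def calibrated_rewards(states, labels, query_states):
    """Score each query state by its proximity to the centroid of
    reward-leading ('sufficient') states versus the other centroid.
    Returns a probability-like calibrated reward in [0, 1]."""
    pos = states[labels == 1].mean(axis=0)  # centroid of sufficient states
    neg = states[labels == 0].mean(axis=0)  # centroid of the rest
    d_pos = np.linalg.norm(query_states - pos, axis=1)
    d_neg = np.linalg.norm(query_states - neg, axis=1)
    return d_neg / (d_pos + d_neg + 1e-8)
```

A state close to the "sufficient" cluster receives a calibrated reward near 1 even if the environment reward itself arrives only many steps later, which is the density the abstract claims makes training more efficient.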




