Reinforcement with Fading Memories

by Kuang Xu et al.

We study the effect of imperfect memory on decision making in the context of a stochastic sequential action-reward problem. An agent chooses a sequence of actions which generate discrete rewards at different rates. She is allowed to make new choices at rate β, while past rewards disappear from her memory at rate μ. We focus on a family of decision rules where the agent makes a new choice by randomly selecting an action with a probability approximately proportional to the amount of past rewards associated with each action in her memory. We provide closed-form formulae for the agent's steady-state choice distribution in the regime where the memory span is large (μ→ 0), and show that the agent's success critically depends on how quickly she updates her choices relative to the speed of memory decay. If β≫μ, the agent almost always chooses the best action, i.e., the one with the highest reward rate. Conversely, if β≪μ, the agent chooses an action with a probability roughly proportional to its reward rate.
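The dynamics described above can be illustrated with a small simulation. The sketch below is a discrete-time approximation of our own devising, not the paper's exact continuous-time process: the agent plays one action at a time, rewards from that action arrive at its Poisson rate, every remembered reward fades at rate mu, and at rate beta the agent re-chooses an action with probability proportional to its remembered reward mass. The uniform prior in `memory` is an added assumption so that every action starts with positive choice probability.

```python
import math
import random

def simulate(reward_rates, beta, mu, horizon=200_000, dt=0.01, seed=0):
    """Discrete-time sketch of the fading-memory model.

    reward_rates : Poisson reward rate of each action
    beta         : rate at which the agent makes a new choice
    mu           : rate at which past rewards fade from memory
    Returns the fraction of time spent on each action.
    """
    rng = random.Random(seed)
    n = len(reward_rates)
    memory = [1.0] * n          # small prior so every action can be chosen initially
    choice = rng.randrange(n)
    time_on = [0.0] * n
    decay = math.exp(-mu * dt)  # per-step memory fading factor
    for _ in range(horizon):
        # A reward from the currently chosen action arrives w.p. rate * dt.
        if rng.random() < reward_rates[choice] * dt:
            memory[choice] += 1.0
        # All remembered rewards fade at rate mu.
        memory = [m * decay for m in memory]
        # At rate beta, re-choose an action proportionally to remembered rewards.
        if rng.random() < beta * dt:
            choice = rng.choices(range(n), weights=memory)[0]
        time_on[choice] += dt
    total = sum(time_on)
    return [t / total for t in time_on]
```

Running this with beta much larger than mu should concentrate the time fractions on the action with the highest reward rate, while beta much smaller than mu should yield fractions roughly proportional to the reward rates, matching the two regimes stated in the abstract.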
