# Sparse Reward Processes


## 1 Introduction

This paper introduces sparse reward processes. These capture the problem of acting in an unknown environment, with an arbitrary unknown sequence of future objectives. The question is: how to act so as to perform well in the current objective, while at the same time acquiring knowledge that might be useful for future objectives? It is thus analogous to a number of real-world problems with high uncertainty about future tasks, as well as the more philosophical problem of motivating the utility of curiosity in human behaviour.

We formulate this setting in terms of a multi-stage game between a learning agent and an opponent of unknown type. The agent acts in an unknown Markovian environment, which is the same in every stage. At the beginning of each stage, a payoff function is chosen by the opponent, which determines the agent’s utility for that stage. The agent must act not only so as to maximise expected utility at each stage, but also so that he can be better prepared for whatever payoff function the opponent will select at the next stage.

We call such problems sparse reward processes, because of two types of sparseness. The first refers to payoff scarcity: the payoff available at every stage is bounded, while the agent wants to maximise the total payoff across stages. The second refers to the fact that the payoff function itself is sparse when the opponent is adversarial. We posit that this is a good model of life-long learning in uncertain environments, where, while resources must be spent learning about currently important tasks, effort must also be allocated towards learning about aspects of the world that are not relevant at the moment. This is because unpredictable future events may lead to a change of priorities for the decision maker. Thus, in some sense, the model “explains” the necessity of curiosity.

While our main contribution is the introduction of the problem, we also analyse some basic properties. We show that when the opponent is nature, the problem becomes an unknown MDP. For adversarial opponents, a good strategy for a two-stage version of the game is to maximise the information gain with respect to the MDP model, linking our formulation to exploration heuristics such as compression progress (Schmidhuber, 1991), information gain (Lindley, 1956) and approximations to the value of information (Koller and Friedman, 2009, Sec. 23.7). For the general adversarial case, we show that both sampling from the posterior (Strens, 2000) and confidence-bound based approaches (Auer et al., 2002; Jaksch et al., 2010) perform well compared to a greedy policy. However, when the opponent is nature, a greedy policy performs very well, as the payoff stochasticity forces the agent to explore.

The next section introduces the setting and formalises the environment, the payoff, the policy and the complete sparse reward process. Sec. 3 examines the properties that arise for the two opponent types: nature and adversarial. Sec. 4 briefly explains two algorithms for acting in SRPs, derived from two well-known reinforcement learning exploration algorithms based on confidence bounds and Bayesian sampling respectively. The experimental setup is described in Sec. 5, while Sec. 6 concludes the paper with a discussion of related work and links to other problems in reinforcement learning and decision theory.

## 2 Setting

The setting can be formalised as a multi-stage game between the agent and an opponent, played on a stochastic environment ν. At the beginning of the k-th stage, the opponent chooses a payoff function ρk, which he reveals to the agent, who then selects an arbitrary policy πk. The agent then acts in ν using πk until the current stage enters a terminating state. This interaction results in a random sequence of state visits s, whose utility for the agent is U(s). The agent’s goal is to minimise the total expected regret LK, defined via the difference between Vk, the expected utility of πk, and the maximum expected utility attainable for that stage.

If the dynamics of ν are known to the agent, then maximising the total expected payoff only requires playing the optimal strategy for each stage in isolation, disregarding the remaining stages. When ν is unknown, however, learning about the environment is important for performing well in the later stages. The setting then becomes an interesting special case of the exploration-exploitation problem.
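To make the stage protocol concrete, the interaction loop can be sketched in Python. This is a minimal illustration, not part of the paper; the names `env_step` and `choose_policy`, and the representation of payoffs as a plain list, are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_srp(env_step, choose_policy, payoffs, gamma=0.95, s0=0):
    """Play K stages: reveal the payoff, commit to a policy, act until termination."""
    total_utility = 0.0
    for rho in payoffs:                 # the opponent reveals the stage payoff
        pi = choose_policy(rho)         # the agent selects a policy for the stage
        s, utility = s0, 0.0
        while True:
            a = pi(s)
            s, r = env_step(s, a, rho)  # Markovian transition and payoff increment
            utility += r
            if rng.random() > gamma:    # the stage terminates with probability 1 - gamma
                break
        total_utility += utility
    return total_utility
```

A learning agent would additionally update its model of the environment inside the loop; the sketch only shows the game structure.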

### 2.1 The environment

At every stage, the agent is acting within an unknown environment. We assume that the opponent has no control over the environment’s dynamics and that these are constant throughout all stages. More specifically, we define the environment to be a controlled Markov process:

###### Definition 1.

A controlled Markov process (CMP) is a tuple ν=(S,A,T), with state space S, action space A, and transition kernel

 T≜{τ(⋅∣s,a) ∣∣ s∈S,a∈A},

indexed in S×A, such that each τ(⋅∣s,a) is a probability measure on S. (We assume the measurability of all sets with respect to some appropriate σ-algebra; this will usually be the Borel algebra of the set in question.) If at time t the environment is in state st and the agent chooses action at, then the next state st+1 is drawn with a probability independent of previous states and actions:

 Pν(st+1∈B∣st,at)=τ(B∣st,at), for all measurable B⊆S. (2.1)

Finally, we shall use N for the class of CMPs.
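As a small illustration of Definition 1 (our own sketch, under the assumption of finite state and action spaces), a finite CMP can be represented by a transition tensor whose slice `tau[s, a]` is the measure τ(⋅∣s,a):

```python
import numpy as np

class CMP:
    """Finite controlled Markov process: tau[s, a] is a distribution over next states."""
    def __init__(self, tau, rng=None):
        tau = np.asarray(tau, dtype=float)
        # each tau(.|s, a) must be a probability measure
        assert np.allclose(tau.sum(axis=-1), 1.0)
        self.tau = tau
        self.rng = rng or np.random.default_rng()

    def step(self, s, a):
        # the next state depends only on (s, a), not on earlier history
        return self.rng.choice(self.tau.shape[-1], p=self.tau[s, a])

# two states, two actions: action 1 deterministically flips the state
tau = np.array([[[1.0, 0.0], [0.0, 1.0]],
                [[0.0, 1.0], [1.0, 0.0]]])
cmp_ = CMP(tau)
```

The general definition allows continuous spaces; the tensor form is just the simplest concrete instance.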

In the above, and throughout the text, we use the following conventions. We employ Pν to denote the probability of events under a process ν, while symbols such as s and a represent sequences of variables. Similarly, Sk denotes the k-fold product space, and S∗ denotes the set of all sequences of states. Arbitrary-length state sequences in S∗ will be denoted by s.

Throughout this paper, we assume that the transition kernel is not known to the agent, who must estimate it through interaction. On the other hand, the payoff function, chosen by the opponent, is revealed to the agent at the beginning of each stage.

### 2.2 The payoff

At the k-th stage, a payoff function ρk is chosen by the opponent. This encodes how desirable a state sequence is to the agent for the current task. In particular, if s and s′ are two state sequences, then s is preferred to s′ in round k if and only if ρk(s)>ρk(s′). As a simple example, consider a ρk such that sequences going through a certain state have a positive payoff, while the remaining sequences have a payoff of zero.

The usual reinforcement learning (RL) setting can easily be mapped to this. Recall that in RL the agent is acting in a Markov decision process (MDP; Puterman, 2005). This is a CMP equipped with a set of reward distributions, so that a reward rt is obtained at each step. In the infinite-horizon, discounted reward setting, the utility is defined as the discounted sum of rewards U=∑t γ^t rt, where γ∈(0,1) is a discount factor. We can map this to our framework by setting the payoff ρ(s) of a state sequence s to be the expected discounted sum of rewards along it. While the theoretical development applies to general payoff functions, the experimental results and algorithms use the RL setting.
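The discounted-utility mapping above is easy to state in code. This is our own one-function sketch of the standard identity U=∑t γ^t rt, not an excerpt from the paper:

```python
def discounted_payoff(rewards, gamma=0.9):
    """Map an RL reward sequence to a stage payoff: U = sum_t gamma^t * r_t."""
    u = 0.0
    for t, r in enumerate(rewards):
        u += (gamma ** t) * r
    return u
```

For example, with gamma = 0.5 the reward sequence (1, 1, 1) yields utility 1 + 0.5 + 0.25 = 1.75.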

### 2.3 The policy

After the payoff ρk is revealed to the decision maker, he chooses a policy πk, which he uses to interact with the environment. The CMP ν and the payoff function ρk jointly define an MDP, denoted by μk. The agent’s policy selects actions with distribution πk(at∣s1,…,st), meaning that the policy is not necessarily stationary. Together with the Markov decision process μk, it defines a distribution on the sequence of states, such that:

 Pμk,πk(st+1∈S∣st)=∫Aτ(S∣a,st)dπk(a∣st).

This interaction results in a sequence of states s=s1,s2,…, whose utility to the agent is U(s)≜ρk(s). Since the sequence of states is stochastic, we set the value of each stage to the expected utility:

 Vk≜Eπk,μkU=∫S∗U(s)dPπk,μk(s), (2.2)

where Pπk,μk is the probability measure on S∗ resulting from using policy πk on CMP ν. Finally, let us define the oracle policy for stage k:

###### Definition 2 (Oracle stage policy).

Given the process ν and the payoff ρk at stage k, the oracle policy is π∗(ν,ρk)∈argmaxπ V(ρk,π).

This policy is normally unattainable by the agent, since ν is unknown. The agent’s goal is to minimise the total expected regret relative to the oracle. (We use a slightly different notion of regret from previous work: instead of using the total accumulated reward, as in Jaksch et al. (2010), we consider the total expected utility across stages. If one were to see the payoff obtained at every stage as the reward, however, the two measures of regret would be equivalent.)

 LK≜∑Kk=1[V(ρk,π∗(ν,ρk))−Vk]. (2.3)

### 2.4 Sparse reward processes

The complete sparse reward process is a special case of a stochastic game. However, we are particularly interested in processes where only a few states have payoffs. We model this by mapping each payoff function ρk to a finite measure λk on S∗.

###### Definition 3.

A sparse reward process (SRP) is a multi-stage stochastic game with K stages, where the k-th stage is a Markov decision process μk=(ν,ρk), whose payoff function ρk is revealed to the agent immediately after stage k−1 is complete. The agent chooses a policy πk, with expected utility Vk. The Markov decision process terminates at time t, ending the stage, if st is a terminal state, and with a fixed termination probability 1−γ if st is not a terminal state.

The process is called c-sparse for a base measure φ on S∗ if, for every k, the payoff measure λk on S∗, defined by λk(B)≜∫B ρk dφ for measurable B⊆S∗, satisfies λk(S∗)≤c. The agent’s goal is to find a sequence of policies π1,…,πK maximising ∑Kk=1 Vk.

The termination probability makes each stage equivalent to an infinite-horizon discounted reward reinforcement learning problem (see Puterman, 2005). Bounding the total payoff forces the rewards available at most state sequences to be small (though not necessarily zero). Finally, it ensures that the opponent cannot place arbitrarily large rewards in certain parts of the space, and so cannot make the regret arbitrarily large. Throughout the paper, we take the base measure φ to be the uniform measure. The construction also enables much of the subsequent development, through the following lemma:
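The equivalence between a fixed termination probability and discounting can be checked empirically: a stage that continues with probability γ at every step has a geometrically distributed length with mean 1/(1−γ). The sketch below is our own illustration of that fact, not code from the paper.

```python
import numpy as np

def stage_lengths(gamma, n, rng=None):
    """Lengths of n stages that continue with probability gamma at each step."""
    rng = rng or np.random.default_rng(0)
    # geometric(p) counts the number of trials until the first success,
    # so with p = 1 - gamma the mean length is 1 / (1 - gamma)
    return rng.geometric(1.0 - gamma, size=n)

lengths = stage_lengths(0.9, 100_000)
# the empirical mean should be close to 1 / (1 - 0.9) = 10
```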

###### Lemma 1.

Given a payoff function ρ for which there exists a payoff measure λ satisfying the conditions of Def. 3 for some c, the utility of any policy π on the MDP μ=(ν,ρ) can be written as:

 Eπ,μU=∫S∗pπ,ν(s)dλ(s), (2.4)

where pπ,ν is the probability (density) of s (with respect to the base measure φ) under the policy π and the environment ν. We assume that pπ,ν always exists, but it is not necessarily finite.

###### Proof.

Via change of measure: Eπ,μU=∫S∗ρ(s)dPπ,ν(s)=∫S∗ρ(s)pπ,ν(s)dφ(s)=∫S∗pπ,ν(s)dλ(s). ∎

## 3 Properties

The optimality of an agent policy depends on the assumptions made about the opponent. In a worst-case setting, it is natural to view each stage as a zero-sum game, where the agent’s regret is the opponent’s gain. If the opponent is nature, then the sparse reward process can be seen as an MDP. This is also true in the case where we employ a prior over the opponent’s type.

### 3.1 When the opponent is nature

Consider the case when the opponent selects the payoffs by drawing them from some fixed, but unknown, distribution. In that case, the Bayes-optimal strategy for the agent is to maintain a belief over this distribution and over the dynamics, and to solve the problem with backwards induction (DeGroot, 1970), if possible. This is because of the following fact:

###### Theorem 1.

When the opponent is Nature, the SRP is an MDP.

###### Proof.

We prove this by construction. The state space of the constructed MDP can be factored into the current reward function and the state of the dynamics. If there are m possible reward functions, we can write the state space as S′=S×{1,…,m}. Within a stage, the dynamics component evolves according to τ while the reward component stays fixed; when a stage terminates, the reward component changes according to the fixed but unknown distribution used by nature. It is easy to verify that this agrees with Def. 3. ∎

Unfortunately, the Bayes-optimal solution is usually intractable (Gittins, 1989; DeGroot, 1970; Duff, 2002).

### 3.2 When the opponent is adversarial

We look at the problem from the perspective of Bayesian experimental design. In particular, the agent has a belief, expressed as a measure ξ over the class of CMPs N. Then, the ξ-expected utility of any policy π is:

 Eξ,πU=∫N∫S∗U(s)dPπν(s)dξ(ν). (3.1)

Let P∗ν and P∗ξ be the probability measures on S∗ arising from the optimal policy given the full CMP ν, and given a particular belief ξ over CMPs, respectively, assuming known payoffs. The opponent can take advantage of the agent’s uncertainty and select a payoff function that maximises our loss relative to the optimal policy:

 ℓk(ξ,μ)≜maxλk∫S∗(P∗ν−P∗ξ)dλk. (3.2)

This implies that the opponent should maximise the payoff for sequences with the largest probability gap between the ν- and ξ-optimal policies. To make this non-trivial, we have restricted the payoff measures to satisfy λk(S∗)≤c. In this case, maximising the loss requires placing all of the payoff mass on the set of sequences with the largest gap, and none anywhere else. This is the second type of sparseness that SRPs exhibit.

We now show that in a special two-stage version of our game, a strategy that maximises the expected information gain minimises a bound on our expected loss. First, we recall the definition of Lindley (1956):

###### Definition 4 (Expected information gain).

Assume some prior probability measure ξ on a parameter space N, and a set of experiments Π, indexing a set of measures Pπν on an observation space X. The expected information gain of the experiment π, consisting of drawing an observation x from the unknown ν, is:

 G(π,ξ)≜∫N∫Xln[Pπν(x)/Pπξ(x)]dPπν(x)dξ(ν), (3.3)

where Pπξ is the marginal Pπξ(x)=∫NPπν(x)dξ(ν).

In our case, the parameter space is the set of environment dynamics N, while the observation set X is the set of state-action sequences.
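For a finite set of candidate models and a finite observation space, (3.3) reduces to the mutual information between model and observation, which can be computed directly. The sketch below is our own discrete illustration of Definition 4; the array layout is an assumption.

```python
import numpy as np

def expected_information_gain(xi, P):
    """Expected information gain G(pi, xi) for a discrete experiment.

    xi: prior over models, shape (N,).
    P:  P[i, x] = probability of observation x under model i, shape (N, X).
    Returns sum_i xi_i * KL(P_i || P_xi), where P_xi is the marginal.
    """
    xi = np.asarray(xi, dtype=float)
    P = np.asarray(P, dtype=float)
    marginal = xi @ P                       # P_xi(x) = sum_i xi_i P_i(x)
    with np.errstate(divide="ignore", invalid="ignore"):
        # terms with P_i(x) = 0 contribute nothing to the KL divergence
        logratio = np.where(P > 0, np.log(P / marginal), 0.0)
    return float((xi[:, None] * P * logratio).sum())
```

For two perfectly distinguishable models under a uniform prior, a single observation identifies the model, so the gain is ln 2 nats.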

###### Theorem 2.

Consider a two-stage game in which the first stage carries no payoff. Then maximising the expected information gain in the first stage is sufficient to minimise the expected regret.

###### Proof.

Through the definition of the stage loss (3.2), and as λk(S∗)≤c:

 ℓk(ξ,μ)≤c∥P∗ν−P∗ξ∥1, (3.4)

where ∥⋅∥1 is the L1-norm. If our initial belief is ξ and the (random) posterior after the first stage is ξ′, the expected second-stage loss of a first-stage policy π is given by: Eξ,πℓ=∫Nℓ(ξ′,ν)dξ(ν). Finally, since for any measures P,Q it holds that ∥P−Q∥1≤√(2D(P∥Q)) (Pinsker’s inequality), we have: Eξ,πℓ≤cEξ,π√(2D(Pπν∥Pπξ)). Via Jensen’s inequality we obtain that Eξ,πℓ≤c√(2G(π,ξ)). ∎

Thus, choosing a policy that maximises the expected information gain minimises the expected worst-case loss at the next stage. This is in broad agreement with past ideas relating curiosity to gaining knowledge about the environment (e.g. Schmidhuber, 1991). Consequently, pure information-gathering strategies can have good quality guarantees in this two-stage adversarial game.

For more general games, we must employ other strategies, however, as we need to balance information gathering (exploration) with obtaining rewards in the current stage (exploitation). Unfortunately, even finding the policy that maximises (3.3) is as hard as finding the Bayes-optimal policy. For this reason, in the next section we consider approximate algorithms.

## 4 Algorithms

We use two simple algorithms for SRPs, derived from two well-known strategies for exploration in bandit problems and reinforcement learning in general. The first, Upper-Confidence-bound SRP (UCSRP, Alg. 1), chooses policies based on simple confidence bounds, similarly to UCB (Auer et al., 2002) for bandit problems and UCRL (Jaksch et al., 2010) for general reinforcement learning. The second, Bayesian Thompson sampling SRP (BTSRP, Alg. 2), chooses a policy by drawing samples from a posterior distribution, as in Strens (2000). To simplify the exposition, we restrict our attention to some arbitrary stage k and consider a setting where we have a finite set of policies P.

UCSRP (Alg. 1) uses confidence regions. An abstract view of the method is the following. For any policy π, let the empirical measure on S∗ be P̂π, and let:

 Nϵ(^Pπ)≜{Q ∣∣ ∥Q−^Pπ∥≤ϵ} (4.1)

be a confidence region around the empirical measure, where ∥⋅∥ is the L1-norm. Then we define the optimistic value:

 V+π≜max{EQρ ∣∣ Q∈Nϵ(^Pπ)} (4.2)

to be the value within the region maximising the expected payoff. This can be seen as an optimistic evaluation of the policy that holds with high probability, and we choose πk∈argmaxπ V+π. For RL problems, there is no need to evaluate all policies: the algorithm can be implemented efficiently via the augmented MDP construction in Jaksch et al. (2010).
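For a discrete outcome space, the maximisation in (4.2) has a simple closed form: move up to half the L1 budget onto the best-payoff outcome and remove it from the worst ones. The sketch below is our own illustration of this standard optimistic-evaluation trick, not the paper's implementation.

```python
import numpy as np

def optimistic_value(p_hat, rho, eps):
    """max_q { q . rho : ||q - p_hat||_1 <= eps, q a distribution }."""
    q = np.asarray(p_hat, dtype=float).copy()
    rho = np.asarray(rho, dtype=float)
    order = np.argsort(rho)                 # worst-payoff outcomes first
    best = order[-1]
    add = min(eps / 2.0, 1.0 - q[best])     # mass moved onto the best outcome
    q[best] += add
    for i in order:                         # remove the same mass from the worst
        if i == best or add <= 0:
            break
        take = min(q[i], add)
        q[i] -= take
        add -= take
    return float(q @ rho)
```

With p̂ = (0.5, 0.5), payoffs (0, 1) and ε = 0.2, the optimistic distribution is (0.4, 0.6), giving value 0.6.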

BTSRP (Alg. 2) draws a candidate CMP ν̂ from the belief ξk at stage k, and then calculates the stationary policy πk that is optimal for ν̂. At the end of the stage, the belief is updated via Bayes’ theorem:

 ξk+1(B)=∫BPν(s∣a)dξk(ν)/∫NPν(s∣a)dξk(ν). (4.3)

This type of Thompson sampling (Thompson, 1933) performs well in multi-armed bandit problems (Agrawal and Goyal, 2012), but its general properties are unknown.
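The BTSRP step can be sketched for a finite MDP with a product-Dirichlet posterior over transitions (as in Sec. 5): sample a kernel from the posterior, then compute an optimal stationary policy for the sample by value iteration. This is our own minimal sketch; the function names, the Dirichlet(1) prior, and the use of state rewards are assumptions.

```python
import numpy as np

def sample_mdp(counts, rng):
    """Draw a transition kernel from a product-Dirichlet posterior (counts[s, a, s'])."""
    S, A, _ = counts.shape
    tau = np.empty_like(counts, dtype=float)
    for s in range(S):
        for a in range(A):
            # Dirichlet(1) prior plus observed transition counts
            tau[s, a] = rng.dirichlet(counts[s, a] + 1.0)
    return tau

def optimal_policy(tau, r, gamma=0.9, iters=500):
    """Value iteration on the sampled MDP with state rewards r."""
    S, A, _ = tau.shape
    v = np.zeros(S)
    for _ in range(iters):
        v = r + gamma * (tau @ v).max(axis=1)   # Bellman optimality backup
    return (tau @ v).argmax(axis=1)
```

At each stage, BTSRP would call `sample_mdp` once and act with the resulting `optimal_policy` until the stage terminates.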

###### Lemma 2.

Consider a payoff function ρ with corresponding payoff measure λ. Assume that ϵ is such that the confidence regions hold, i.e. that Pπ∈Nϵ(P̂π) for all π. For UCSRP to choose a sub-optimal policy π, it is necessary that:

 E(ρ∣Pπ∗)≤E(ρ∣Pπ)+2∫cπdλ.
###### Proof.

Since UCSRP always chooses the policy maximising V+π, if it chooses a sub-optimal π then it must hold that V+π≥V+π∗. Since the confidence regions hold, E(ρ∣Pπ∗)≤E(ρ∣P+π∗)≤E(ρ∣P+π), and |P̂π−Pπ|≤cπ, where cπ is the width of the region for π. Consequently:

 E(ρ∣Pπ∗) ≤E(ρ∣P+π)=∫(P̂π+cπ)dλ ≤∫(Pπ+2cπ)dλ=E(ρ∣Pπ)+2∫cπdλ. ∎

###### Theorem 3.

Let ρ′k be the relevant signed measure for policy evaluation in stage k, let Δπ,k≜ρ′k(π∗k−π), and let ϵπ,k be the width of the confidence region for policy π at stage k. Then the regret of UCSRP satisfies LK≤∑π∈P∑Kk=1maxρkI{ϵπ,k≥Δπ,k}Δπ,k.

###### Proof.

It will be convenient to use to denote the value of for payoff function

. This has the usual vector meaning. Let

be a sequence of payoffs and let be an optimal policy at stage in hindsight, and let be our actual policy for that stage. Then the regret after stages, , is bounded as follows:

 LK ≤maxρ∑Kk=1(ρ′kπ∗k−ρ′kπk)
 =maxρ∑Kk=1(ρ′kπ∗k−ρ′k∑π∈PπI{πk=π})
 ≤∑π∈P∑Kk=1maxρkI{πk=π}ρ′k(π∗k−π)
 ≤∑π∈P∑Kk=1maxρkI{ϵπ,k≥Δπ,k}Δπ,k,

where Δπ,k≜ρ′k(π∗k−π) and ϵπ,k is the width of the confidence region for policy π at stage k. ∎

The actual shape of the confidence region, for UCSRP, and of the belief, for BTSRP, depends on the model we are using. In general, the widths take the form ϵπ,k=O(√(ln k/nπ,k)), where nπ,k is the number of times policy π was chosen before stage k, but they can be tighter if there is an interrelationship between policies.

In both cases, a new stationary policy is selected at the beginning of each stage. We consider two types of games, for which we employ slightly different versions of the main algorithms. In the first game, each stage terminates after the first action is taken. In the second game, each stage terminates only with constant probability at every time-step.

## 5 Experiments

We consider games with a total of K stages. In each stage, the agent observes a payoff function and then selects a policy πk. The environment is Markov, and the stage terminates with a fixed probability at every step, known to the agent.

Confidence intervals for UCSRP can be constructed via the bound of Weissman et al. (2003) on the L1-norm of deviations of empirical estimates of multinomial distributions. In order to construct an upper confidence bound policy efficiently, we employ the method of UCRL (Jaksch et al., 2010). This solves an augmented MDP where the action space is enlarged to additionally select between high-probability MDPs. This guarantees that the policy acts according to the most optimistic MDP in the high-probability region, as required by UCSRP.
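The Weissman et al. (2003) bound states that P(∥p̂−p∥1 ≥ ϵ) ≤ (2^S−2)·exp(−nϵ²/2) for a multinomial over S outcomes with n samples; solving for ϵ gives a confidence radius. The helper below is our own rendering of that formula, not code from the paper.

```python
import numpy as np

def weissman_l1_radius(n, num_states, delta):
    """Radius eps such that ||p_hat - p||_1 < eps with probability at least 1 - delta.

    Inverts P(||p_hat - p||_1 >= eps) <= (2^S - 2) * exp(-n * eps^2 / 2).
    """
    return np.sqrt(2.0 * np.log((2.0 ** num_states - 2.0) / delta) / n)
```

The radius shrinks as 1/√n, so the augmented MDP's optimism fades as transitions accumulate.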

For the BTSRP policy, we maintain a product-Dirichlet distribution (see for example DeGroot (1970)) on the set of multinomial distributions for all state-action pairs, and a product of normal-gamma distributions for the rewards. We then draw sample MDPs by drawing parameters from each individual part of the product prior.

### 5.1 Opponents

For reasons of tractability, and for better correspondence with the reinforcement learning setting, the opponents we consider use only additive payoff functions, such that the same reward is always obtained when visiting a given state, and the payoff of a sequence of states is simply the sum of the rewards obtained along it. We consider two types of opponents: nature, and a myopic adversary.

#### Nature.

In this case, the reward functions are sampled uniformly at random, subject to the sparseness constraint of Def. 3.

#### Myopic adversary.

This opponent has knowledge of the true environment ν, and also maintains the empirical estimate ν̂. Assuming that the agent’s estimates must be close to the empirical estimate, the payoff is selected to maximise the stage loss (3.2). This is a sparse payoff, as explained in Sec. 3.2, based on the empirical estimate rather than the (unknown) agent’s belief:

 ρk∈argmaxρV(ρ,π∗(ν,ρ))−V(ρ,π∗(^ν,ρ)). (5.1)
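A much-simplified sketch of the adversary's sparseness principle (Sec. 3.2): concentrate the whole payoff budget on the state where the gap between the true and the estimated optimal visitation probabilities is largest. This is our own illustration; the paper's opponent maximises the full value gap of (5.1) rather than this pointwise proxy.

```python
import numpy as np

def adversarial_payoff(p_true, p_est, budget=1.0):
    """Put the entire payoff budget on the largest-gap state (sparse payoff)."""
    gap = np.asarray(p_true, dtype=float) - np.asarray(p_est, dtype=float)
    rho = np.zeros_like(gap)
    rho[np.argmax(gap)] = budget    # a single state carries all of the payoff
    return rho
```

Because the agent's model underestimates the value of exactly that state, a greedy agent keeps missing it, which is the mechanism behind its linear regret against the adversary.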

### 5.2 Results

The results are summarised in Fig. 1. These also include a greedy agent, i.e. the stationary policy which maximises the payoff for the current stage in empirical expectation. For adversarial opponents, the regret suffered by the greedy agent grows linearly, while that of BTSRP and UCSRP grows slowly. However, when the opponent is nature this is no longer the case: the distribution of payoffs provides a natural impetus for exploration even for the greedy agent.

## 6 Discussion

We introduced sparse reward processes, which capture the problem of acting in an unknown environment with arbitrarily selected future objectives. We have shown that, in a two-stage adversarial problem, a good strategy is to maximise the expected information gain. This links with previous work on curiosity and statistical decision theory. In fact, the connection of information gain to multiple tasks had arguably been recognised by Lindley (1956):

…although indisputably one purpose of experimentation is to reach decisions, another purpose is to gain knowledge about the state of nature (that is, about the parameters) without having specific actions in mind.

We have evaluated three algorithms on various problem instances. Overall, when the opponent is nature, even the greedy strategy performs relatively well. This is because it is forced to explore the environment by the sequence of payoffs. However, an adversarial opponent necessitates the use of the more sophisticated algorithms, which explore deliberately. This is partially explained by results in the related setting of multi-armed bandit problems with covariates (Sarkar, 1991; Yang and Zhu, 2002; Pavlidis et al., 2008; Rigollet and Zeevi, 2010). There, again, the payoff function is given at the beginning of every stage. In that setting, however, the opponent is nature and, more importantly, the only thing observed after an action is chosen is a noisy reward signal. So, in some sense, it is a harder problem than the one considered herein (and indeed Rigollet and Zeevi (2010) prove a lower bound). Sarkar (1991) studies the one-armed covariate bandit for an exponential family model, and proves that a myopic policy is asymptotically optimal in a discounted setting. This ties in very well with our results on problems where the opponent is nature.

Finally, SRPs are related to other multi-task learning settings. For example, Lugosi et al. (2008) consider the problem of online multi-task learning with hard constraints: at every round, the agent takes an action in each and every task, but there are some constraints which reflect the tasks’ similarity. Somewhat closer to SRPs is the game-theoretic setting of Mannor and Shimkin (2004), where again the agent is solving a multi-objective problem, with the goal of making a reward vector approach a target set. Finally, there is a close relation to the problem of learning with multiple bandits (Dimitrakakis and Lagoudakis, 2008; Gabillon et al., 2011). Essentially, this problem involves finding near-optimal policies for a number of possibly related sub-problems within a search budget. In (Gabillon et al., 2011) the tasks are unrelated bandit problems, while in (Dimitrakakis and Lagoudakis, 2008) the tasks are actually different states of a Markov decision process and the goal is to find the best initial actions given a rollout policy.

To summarise, our experimental results show that a greedy policy is a good strategy when the payoff sequence is (uniformly) stochastic: this naturally encourages exploration, even for non-curious agents, by forcing them to visit all states frequently. UCSRP and BTSRP, which explore deliberately, perform much better for adversarial payoffs, where the greedy player suffers linear total regret. Consequently, we may conclude that curiosity is not in fact necessary when the constant change of goals forces exploration upon the agent.

## References

• Auer et al. [2002] Peter Auer, Nicolò Cesa-Bianchi, and Paul Fischer. Finite time analysis of the multiarmed bandit problem. Machine Learning, 47(2/3):235–256, 2002.
• DeGroot [1970] Morris H. DeGroot. Optimal Statistical Decisions. John Wiley & Sons, 1970.
• Dimitrakakis and Lagoudakis [2008] Christos Dimitrakakis and Michail G. Lagoudakis. Rollout sampling approximate policy iteration. Machine Learning, 72(3):157–171, September 2008. Presented at ECML’08.
• Duff [2002] Michael O’Gordon Duff. Optimal Learning Computational Procedures for Bayes-adaptive Markov Decision Processes. PhD thesis, University of Massachusetts at Amherst, 2002.
• Gabillon et al. [2011] Victor Gabillon, Mohammad Ghavamzadeh, Alessandro Lazaric, and Sébastien Bubeck. Multi-bandit best arm identification. In NIPS 2011, 2011.
• Gittins [1989] J. C. Gittins. Multi-armed Bandit Allocation Indices. John Wiley & Sons, New Jersey, US, 1989.
• Jaksch et al. [2010] Thomas Jaksch, Ronald Ortner, and Peter Auer. Near-optimal regret bounds for reinforcement learning. Journal of Machine Learning Research, 11:1563–1600, 2010.
• Koller and Friedman [2009] Daphne Koller and Nir Friedman. Probabilistic graphical models: principles and techniques. MIT Press, 2009.
• Lindley [1956] D. V. Lindley. On a measure of the information provided by an experiment. Annals of Mathematical Statistics, 27(4):986–1005, 1956.
• Lugosi et al. [2008] Gábor Lugosi, Omiros Papaspiliopoulos, and Gilles Stoltz. Online multi-task learning with hard constraints. In COLT 2008, 2008.
• Pavlidis et al. [2008] N.G. Pavlidis, D.K. Tasoulis, and D.J. Hand. Simulation studies of multi-armed bandits with covariates. In Tenth International Conference on Computer Modeling and Simulation, pages 493–498. IEEE, 2008.
• Puterman [2005] Martin L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons, New Jersey, US, 2005.
• Rigollet and Zeevi [2010] P. Rigollet and A. Zeevi. Nonparametric bandits with covariates. In Adam Tauman Kalai and Mehryar Mohri, editors, COLT, pages 54–66. Omnipress, 2010.
• Schmidhuber [1991] J. Schmidhuber. A possibility for implementing curiosity and boredom in model-building neural controllers. In Proceedings of the International Conference on Simulation of Adaptive Behavior: From Animals to Animats. MIT Press, 1991.
• Strens [2000] Malcolm Strens. A Bayesian framework for reinforcement learning. In ICML 2000, pages 943–950, 2000.
• Thompson [1933] W. R. Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3-4):285–294, 1933.
• Weissman et al. [2003] T. Weissman, E. Ordentlich, G. Seroussi, S. Verdu, and M.J. Weinberger. Inequalities for the deviation of the empirical distribution. Hewlett-Packard Labs, Tech. Rep, 2003.
• Yang and Zhu [2002] Yuhong Yang and Dan Zhu. Randomized allocation with nonparametric estimation for a multi-armed bandit problem with covariates. The Annals of Statistics, 30(1):pp. 100–121, 2002. ISSN 00905364.