Reinforcement learning (RL) formalises the problems of evaluation and optimisation of an agent’s behaviour while interacting with an environment, based upon feedback given through a reward signal [Sutton and Barto, 1998]. A major paradigm for solving these problems is value-based RL, in which the agent predicts the expected return
– i.e. the expected discounted sum of rewards – in order to guide its behaviour. The moments or distribution of the random return have also been considered in the literature, with a variety of approaches proposing algorithms for estimating more complex distributional information [Morimura et al., 2010a,b, Prashanth and Ghavamzadeh, 2013, Tamar et al., 2016]. Recently, Bellemare et al. [2017a] used the distributional perspective to propose an algorithm, C51, which achieved state-of-the-art performance on the Atari 2600 suite of benchmark tasks. C51 is a deep RL algorithm based on categorical policy evaluation (for evaluation) and categorical Q-learning (for control), also introduced by Bellemare et al. [2017a], and it is these latter two algorithms which are at the centre of our study. We refer to these approaches as categorical distributional reinforcement learning (CDRL).
Given a state $x$ and action $a$, C51 approximates the distribution over returns using a uniform grid over a fixed range, i.e. a categorical distribution with evenly-spaced outcomes. Analogous to how value-based approaches such as SARSA [Rummery and Niranjan, 1994] learn to predict expected returns, C51 also forms a learning target from sample transitions: the reward, the next state, and ultimately the next-state distribution over returns. However, the parallel ends here: because C51
learns a distribution, it minimises the Kullback-Leibler divergence between its target and its prediction, rather than the usual squared loss. Moreover, the support of the target is in general disjoint from the approximation support; to account for this, Bellemare et al. [2017a] further introduced a projection step normally absent from reinforcement learning algorithms.
As a whole, the particular techniques incorporated in C51 are not explained by the accompanying theory. While the “mean process” which governs learning within C51 is described by a contractive distributional Bellman operator, there are not yet any guarantees on the behaviour of sample-based algorithms. To put things in context, such guarantees in the case of estimating expected returns require a completely different mathematical formalism [Tsitsiklis, 1994, Jaakkola et al., 1994]. The effects of the discrete approximation and its corresponding projection step also remain to be quantified. In this paper we analyse these issues.
At the centre of our analysis is the Cramér distance between probability distributions. The Cramér distance is of particular interest as it was recently shown to possess many of the same properties as the Wasserstein metric, used to show the contractive nature of the distributional Bellman operator [Bellemare et al., 2017b]. Specifically, using the Cramér distance, we: (i) quantify the approximation error arising from the discrete approximation in CDRL (see Section 4.2); and (ii) develop stochastic approximation results for the sample-based case (see Section 4.3).
One of the main contributions of this paper is to establish a framework for the analysis of CDRL algorithms. This framework reveals a space of possible alternative methods (Sections 3 and 4). We also demonstrate that the fundamental property required for the convergence of distributional RL algorithms is contractivity of a projected Bellman operator, in addition to the contractivity of the Bellman operator itself as in non-distributional RL (Proposition 2). This point has parallels with the importance of the (distinct) projection operator in non-tabular RL [Tsitsiklis and Van Roy, 1997].
We begin, in Section 2, with a general introduction to distributional RL, and establish required notation. In Section 3, we give a detailed description of categorical distributional RL, and set it in the context of a new framework in which to view distributional RL algorithms. Finally, in Section 4, we undertake a detailed convergence analysis of CDRL, dealing with the approximations and parametrisations that typically must be introduced into practical algorithms. This culminates in the first proofs of convergence for sample-based CDRL algorithms.
2.1 Markov decision processes
We consider a Markov decision process (MDP) with a finite state space $\mathcal{X}$, a finite action space $\mathcal{A}$, and a transition kernel $p$ that defines a joint distribution over immediate reward and next state given a current state-action pair $(x, a)$. We will be concerned with stationary policies $\pi : \mathcal{X} \to \mathscr{P}(\mathcal{A})$ that define a probability distribution over the action space given a current state. The full MDP is given by the collection of random variables $(X_t, A_t, R_t)_{t \geq 0}$, where $(X_t)_{t \geq 0}$ is the sequence of states taken by the environment, $(A_t)_{t \geq 0}$ is the sequence of actions taken by the agent, and $(R_t)_{t \geq 0}$ is the sequence of rewards.
2.2 Return distributions
The return of a policy $\pi$, starting in initial state $x$ and initially taking action $a$, is defined as the random variable given by the sum of discounted rewards:

$$Z^\pi(x, a) = \sum_{t=0}^{\infty} \gamma^t R_t \,, \quad (1)$$
where $\gamma \in [0, 1)$ is the discount factor. We may implicitly view the distribution of the returns as being parametrised by $\pi$ [Sutton et al., 1999]. Two common tasks in RL are (i) evaluation, in which the expected value of the return is sought for a fixed policy, and (ii) control, in which a policy maximising the expected value of the return is sought.
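The definition of the return above can be illustrated with a short snippet (a minimal sketch; the reward sequence and discount value are illustrative, not taken from the paper):

```python
def discounted_return(rewards, gamma):
    """Discounted sum of rewards sum_t gamma^t * r_t: one sampled trajectory
    of rewards yields one sample of the random return Z."""
    return sum(gamma ** t * r for t, r in enumerate(rewards))

sample_return = discounted_return([1.0, 0.0, 2.0], gamma=0.5)
print(sample_return)  # 1.0 + 0.5 * 0.0 + 0.25 * 2.0 = 1.5
```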
In the remainder of this paper, we will write the distribution of the return of policy $\pi$ and initial state-action pair $(x, a)$ as

$$\eta^\pi_{(x,a)} = \operatorname{Law}\left( Z^\pi(x, a) \right) \,. \quad (2)$$
We write $\eta^\pi$ for the collection of distributions $(\eta^\pi_{(x,a)})_{(x,a) \in \mathcal{X} \times \mathcal{A}}$. We highlight the change in emphasis from discussing random variables, as in (1), to directly referring to probability distributions in their own right. Although Bellemare et al. [2017a] referred to the object $\eta^\pi$ as a value distribution, here we favour the more technically correct name return distribution function, to highlight that $\eta^\pi$ is a function mapping state-action pairs to probability distributions over returns. Referring to return distributions in their own right will lead to a clearer statement of the convergence results that appear in Section 4.
2.3 The distributional Bellman operator
It is well known that expected returns satisfy the Bellman equation [Bellman, 1957, Sutton and Barto, 1998]. Bellemare et al. [2017a] showed that the return distribution function satisfies a distributional variant of the Bellman equation. This result was phrased in terms of equality in distribution between random variables. A similar approach was taken by Morimura et al. [2010a], in which cumulative distribution functions were used. To express the Bellman equation in terms of the distributions themselves, we will need the notion of pushforward (or image) measures. We first recall the definition of these measures at the level of generality required by the development of our theory; see Billingsley [1986] for further details.
Given a probability distribution $\nu \in \mathscr{P}(\mathbb{R})$ and a measurable function $f : \mathbb{R} \to \mathbb{R}$, the pushforward measure $f_\# \nu \in \mathscr{P}(\mathbb{R})$ is defined by $f_\# \nu(A) = \nu(f^{-1}(A))$, for all Borel sets $A \subseteq \mathbb{R}$.
Intuitively, $f_\# \nu$ is obtained from $\nu$ by shifting the support of $\nu$ according to the map $f$. Of particular interest in this paper will be pushforward measures obtained via an affine shift map $f_{r,\gamma} : \mathbb{R} \to \mathbb{R}$, defined by $f_{r,\gamma}(x) = r + \gamma x$. Such transformations also appear, unnamed, in Morimura et al. [2010b].
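For a discrete measure, the pushforward under an affine shift map simply relocates each atom while leaving the probabilities untouched; a minimal sketch (the function name is illustrative):

```python
def affine_pushforward(atoms, probs, r, gamma):
    """Pushforward of a discrete measure under f_{r,gamma}(x) = r + gamma * x:
    probabilities are unchanged, only the support is shifted and scaled."""
    return [r + gamma * z for z in atoms], list(probs)

new_atoms, new_probs = affine_pushforward([0.0, 1.0], [0.5, 0.5], r=1.0, gamma=0.9)
# new_atoms is approximately [1.0, 1.9]; new_probs is unchanged.
```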
Using this notation, we can now restate a fundamental result which was shown by Bellemare et al. [2017a] in the language of random variables. The return distribution function $\eta^\pi$ associated with a policy $\pi$, defined in (2), satisfies the distributional Bellman equation:

$$\eta^\pi = \mathcal{T}^\pi \eta^\pi \,,$$

where $\mathcal{T}^\pi$ is the distributional Bellman operator, defined by:

$$(\mathcal{T}^\pi \eta)_{(x,a)} = \int_{\mathbb{R}} \sum_{(x', a') \in \mathcal{X} \times \mathcal{A}} (f_{r,\gamma})_\# \eta_{(x',a')} \, \pi(a' \mid x') \, p(dr, x' \mid x, a) \,, \quad (3)$$
for all $(x, a) \in \mathcal{X} \times \mathcal{A}$. This equation serves as the basis of distributional RL, just as the standard Bellman equation serves as the basis of non-distributional value-based RL. Bellemare et al. [2017a] established a preliminary theoretical result regarding the contractive properties of the operator $\mathcal{T}^\pi$. To further this analysis, we first require a particular notion of distance between collections of probability distributions, introduced in Bellemare et al. [2017a].
The $p$-Wasserstein distance $d_p$, for $p \geq 1$, is defined on $\mathscr{P}_p(\mathbb{R})$, the set of probability distributions with finite $p$th moments, by:

$$d_p(\nu_1, \nu_2) = \left( \inf_{\lambda \in \Lambda(\nu_1, \nu_2)} \int_{\mathbb{R}^2} |x - y|^p \, \lambda(dx, dy) \right)^{1/p}$$

for all $\nu_1, \nu_2 \in \mathscr{P}_p(\mathbb{R})$, where $\Lambda(\nu_1, \nu_2)$ is the set of probability distributions on $\mathbb{R}^2$ with marginals $\nu_1$ and $\nu_2$.
The supremum-$p$-Wasserstein metric $\overline{d}_p$ is defined on $\mathscr{P}_p(\mathbb{R})^{\mathcal{X} \times \mathcal{A}}$ by

$$\overline{d}_p(\eta, \mu) = \sup_{(x,a) \in \mathcal{X} \times \mathcal{A}} d_p\left( \eta_{(x,a)}, \mu_{(x,a)} \right)$$

for all $\eta, \mu \in \mathscr{P}_p(\mathbb{R})^{\mathcal{X} \times \mathcal{A}}$.
With these definitions in hand, we may recall the following result.
Lemma 1 (Lemma 3, Bellemare et al. [2017a]).
The distributional Bellman operator $\mathcal{T}^\pi$ is a $\gamma$-contraction in $\overline{d}_p$, for all $p \geq 1$. Further, we have, for any initial set of distributions $\eta \in \mathscr{P}_p(\mathbb{R})^{\mathcal{X} \times \mathcal{A}}$:

$$\overline{d}_p\left( (\mathcal{T}^\pi)^m \eta, \eta^\pi \right) \to 0 \quad \text{as } m \to \infty \,.$$
This motivates distributional RL algorithms, which attempt to approximately find $\eta^\pi$ by taking some initial estimates $\eta_0$ of the return distributions, and iteratively computing a sequence of estimates $(\eta_t)_{t \geq 0}$ by approximating the update step

$$\eta_{t+1} \leftarrow \mathcal{T}^\pi \eta_t \,. \quad (4)$$
There is also a control version of these updates, which seeks to find the return distributions associated with an optimal policy $\pi^*$, via the following updates

$$\eta_{t+1} \leftarrow \mathcal{T} \eta_t \,, \quad (5)$$

where $\mathcal{T}$ is the control version of the distributional Bellman operator, defined by

$$(\mathcal{T} \eta)_{(x,a)} = \int_{\mathbb{R}} \sum_{x' \in \mathcal{X}} (f_{r,\gamma})_\# \eta_{(x', a^*(x'))} \, p(dr, x' \mid x, a) \,, \quad a^*(x') \in \operatorname*{arg\,max}_{a' \in \mathcal{A}} \mathbb{E}_{Z \sim \eta_{(x',a')}}[Z] \,.$$
An ideal policy evaluation algorithm would iteratively compute the exact updates of (4), and inherit the resulting convergence guarantees from Lemma 1. However, full computation of the distributional Bellman operator on a return distribution function is typically either impossible (due to unknown MDP dynamics), or computationally infeasible [Bertsekas and Tsitsiklis, 1996]. In order to take the full updates in (4) or (5) and produce a practical, scalable distributional RL algorithm, several key approximations are required, namely:
stochastic approximation of the Bellman operator;
projection of the Bellman target distribution;
gradient updates via a loss function.
We discuss each of these approximations in Section 3, at the same time describing our two CDRL algorithms, categorical policy evaluation and categorical Q-learning, in detail with this approximation framework in mind.
3 Categorical Policy Evaluation and Categorical Q-Learning
Our first contribution is to make explicit the various approximations, parametrisations, and assumptions implicit in CDRL algorithms. Categorical policy evaluation approximates the update scheme (4); it produces an iterative sequence of approximate return distribution functions, updating the approximations as shown in Algorithm 1. Figure 1 illustrates the salient points of the algorithm, and contrasts them against the full updates of (4). Algorithm 1 also describes categorical Q-learning, which approximates the full updates in (5). We now discuss the structure of Algorithm 1 in more detail, with reference to the distributional RL framework introduced at the end of Section 2.3.
3.1 Distribution parametrisation
From an algorithmic perspective, it is impossible to represent the full space of probability distributions with a finite collection of parameters. A first design decision for a general distributional RL algorithm is therefore how probability distributions should be represented approximately. Formally, this requires the selection of a parametric family $\mathscr{P} \subseteq \mathscr{P}(\mathbb{R})$. CDRL uses the parametric family

$$\mathscr{P} = \left\{ \sum_{k=1}^{K} p_k \delta_{z_k} \,\Bigg|\, p_1, \ldots, p_K \geq 0, \ \sum_{k=1}^{K} p_k = 1 \right\}$$

of categorical distributions over some fixed set of equally-spaced supports $z_1 < z_2 < \cdots < z_K$; see lines 13 and 14 of Algorithm 1. Other parametrisations are of course possible, such as mixtures of Diracs with varying location parameters [Dabney et al., 2018], mixtures of Gaussians, etc.
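A member of this family is fully described by the fixed support grid together with a probability vector; a sketch (the particular values of `V_MIN`, `V_MAX` and `K` are assumptions for illustration, not prescribed by the analysis):

```python
import numpy as np

# Illustrative grid parameters (assumed values, chosen only for this example).
V_MIN, V_MAX, K = -10.0, 10.0, 51
z = np.linspace(V_MIN, V_MAX, K)      # equally-spaced atom locations z_1 < ... < z_K
delta_z = (V_MAX - V_MIN) / (K - 1)   # common gap between adjacent atoms

# A member of the parametric family is a probability vector over the K atoms;
# here, the uniform distribution over the grid.
p = np.full(K, 1.0 / K)
```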
3.2 Stochastic approximation of Bellman operator
Evaluation of the distributional Bellman operator (see (3)) requires integrating over all possible next state-action-reward combinations. Some approximation is required; a popular way to achieve this in RL is by sampling a transition of the MDP. This is also the approach taken in CDRL, as shown in lines 1-8 of Algorithm 1. Here the action $a^*$ is selected either by sampling from the policy $\pi(\cdot \mid x')$, in the case of categorical policy evaluation, or as the action with the highest estimated expected returns, in the case of categorical Q-learning. In the context of categorical policy evaluation, this defines a stochastic Bellman operator $\hat{\mathcal{T}}^\pi$, given by

$$(\hat{\mathcal{T}}^\pi \eta)_{(x,a)} = (f_{r,\gamma})_\# \eta_{(x', a^*)} \,, \quad (6)$$
where the randomness in $\hat{\mathcal{T}}^\pi$ comes from the randomly sampled transition $(x, a, r, x', a^*)$. Note that this defines a random measure, and importantly, this random measure is equal in expectation to the true Bellman target $(\mathcal{T}^\pi \eta)_{(x,a)}$.
3.3 Projection of Bellman target distribution
Having computed $(\hat{\mathcal{T}}^\pi \eta)_{(x,a)}$, this new distribution typically no longer lies in the parametric family $\mathscr{P}$; as shown in (6), the supports of the distributions are transformed by an affine map $f_{r,\gamma}$. We therefore require a method of mapping the backup distribution function into the parametric family. That is, we require a projection operator $\Pi_{\mathcal{C}} : \mathscr{P}(\mathbb{R}) \to \mathscr{P}$ that may be applied to each real-valued distribution in a return distribution function. CDRL uses the heuristic projection operator $\Pi_{\mathcal{C}}$ (see line 10 of Algorithm 1), which was defined by Bellemare et al. [2017a] as follows for single Dirac measures:

$$\Pi_{\mathcal{C}}(\delta_w) = \begin{cases} \delta_{z_1} & w \leq z_1 \\ \dfrac{z_{k+1} - w}{z_{k+1} - z_k} \delta_{z_k} + \dfrac{w - z_k}{z_{k+1} - z_k} \delta_{z_{k+1}} & z_k < w \leq z_{k+1} \\ \delta_{z_K} & w > z_K \,, \end{cases} \quad (7)$$
and extended affinely to finite mixtures of Dirac measures, so that for a mixture of Diracs $\sum_{i=1}^{N} p_i \delta_{w_i}$, we have $\Pi_{\mathcal{C}}\left( \sum_{i=1}^{N} p_i \delta_{w_i} \right) = \sum_{i=1}^{N} p_i \Pi_{\mathcal{C}}(\delta_{w_i})$; see the right-hand side of Figure 1. In general we will abuse notation, and use $\Pi_{\mathcal{C}}$ to denote both the projection operator for individual distributions and the operator on return distribution functions, which applies the former projection to each distribution in the return distribution function.
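The projection of a mixture of Diracs onto the grid can be sketched directly from this definition (a minimal sketch, assuming an evenly-spaced grid `z`):

```python
import numpy as np

def heuristic_projection(atoms, probs, z):
    """Project a mixture of Diracs sum_i p_i * delta_{w_i} onto the evenly-spaced
    grid z: each Dirac's mass is split between its two neighbouring atoms in
    proportion to proximity, with mass outside [z[0], z[-1]] clipped to the ends."""
    delta_z = z[1] - z[0]
    out = np.zeros_like(z)
    for w, p in zip(atoms, probs):
        w = min(max(w, z[0]), z[-1])          # clip to the grid range
        b = (w - z[0]) / delta_z              # continuous grid index of w
        lo, hi = int(np.floor(b)), int(np.ceil(b))
        if lo == hi:                          # w falls exactly on an atom
            out[lo] += p
        else:
            out[lo] += p * (hi - b)
            out[hi] += p * (b - lo)
    return out

z = np.array([0.0, 1.0, 2.0])
print(heuristic_projection([0.25], [1.0], z))  # mass split 0.75 / 0.25 over the first two atoms
```

Note that the projection always yields a probability vector over the grid: the per-Dirac weights sum to the Dirac's original mass.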
3.4 Gradient updates
Having computed a stochastic approximation $\Pi_{\mathcal{C}} (\hat{\mathcal{T}}^\pi \eta)_{(x,a)}$ to the full target distribution, the remaining issue is how the next iterate should be defined. In C51, the approach is to perform a single step of gradient descent on the Kullback-Leibler divergence of the prediction $\eta_{(x,a)}$ from the target:

$$\mathrm{KL}\left( \Pi_{\mathcal{C}} (\hat{\mathcal{T}}^\pi \eta)_{(x,a)} \,\Big\|\, \eta_{(x,a)} \right) \,,$$

with respect to the parameters of $\eta_{(x,a)}$; see line 12 of Algorithm 1. We also consider CDRL algorithms based on a mixture update, described in more detail in Section 4.3. The use of a gradient update, rather than a “hard” update, allows for the dissipation of noise introduced in the target by stochastic approximation [Bertsekas and Tsitsiklis, 1996, Kushner and Yin, 2003]. This completes the description of CDRL in the context of the framework introduced at the end of Section 2.3; we now move on to discussing the convergence properties of these algorithms.
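For a categorical prediction parametrised by softmax logits, the gradient of this KL with respect to the logits takes a simple form; a sketch (the softmax parametrisation, learning rate, and target values here are illustrative assumptions):

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

def kl_gradient_step(logits, target_probs, lr):
    """One gradient-descent step on KL(target || softmax(logits)) with respect
    to the logits; for the softmax parametrisation the gradient of the
    cross-entropy (hence of the KL) is softmax(logits) - target_probs."""
    return logits - lr * (softmax(logits) - target_probs)

logits = np.zeros(3)
target = np.array([0.7, 0.2, 0.1])   # a stand-in for the projected target distribution
for _ in range(2000):
    logits = kl_gradient_step(logits, target, lr=0.5)
# softmax(logits) is now close to target.
```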
4 Convergence Analysis
The approximations, parametrisations, and heuristics of CDRL discussed in Section 3 yield practical, scalable algorithms for evaluation and control, but the effects of these heuristics on the theoretical guarantees that many non-distributional algorithms enjoy have not yet been addressed. In this section, we set out a variety of theoretical results for CDRL algorithms, and in doing so, emphasise several key ways in which the approximations described in Section 3 must fit together to enjoy good theoretical guarantees.
We begin by drawing a connection between the heuristic projection operator and the Cramér distance in Section 4.1. This connection then paves the way to obtaining the results of Section 4.2, which concern the properties of CDRL policy evaluation algorithms without stochastic approximation and gradient updates, observing only the consequences of the parametrisation and projection steps discussed in Sections 3.1 and 3.3. We then bring these more realistic assumptions into play in Section 4.3, and our analysis culminates in a proof of convergence of categorical policy evaluation and categorical Q-learning in the tabular setting.
4.1 Cramér geometry
We begin by recalling Lemma 1, through which Bellemare et al. [2017a] established that repeated application of the distributional Bellman operator $\mathcal{T}^\pi$ to an initial return distribution function guarantees convergence to the true set of return distributions in the supremum-Wasserstein metric. However, once we introduce the parametrisation $\mathscr{P}$ and projection operator $\Pi_{\mathcal{C}}$ of categorical policy evaluation, the operator of concern is now $\Pi_{\mathcal{C}} \mathcal{T}^\pi$, the composition of the Bellman operator with the projection $\Pi_{\mathcal{C}}$. Our first result illustrates that the presence of the projection operator is enough to break the contractivity under Wasserstein distances.
Lemma 2. The operator $\Pi_{\mathcal{C}} \mathcal{T}^\pi$ is in general not a contraction in $d_p$, for $p > 1$.
Whilst contractivity with respect to $d_1$ is in fact maintained, as we shall see there is a much more natural metric, the Cramér distance [Székely, 2002], with which to establish contractivity of the combined operator $\Pi_{\mathcal{C}} \mathcal{T}^\pi$.
The Cramér distance $\ell_2$ between two distributions $\nu_1, \nu_2 \in \mathscr{P}(\mathbb{R})$, with cumulative distribution functions $F_{\nu_1}, F_{\nu_2}$ respectively, is defined by:

$$\ell_2(\nu_1, \nu_2) = \left( \int_{\mathbb{R}} \left( F_{\nu_1}(x) - F_{\nu_2}(x) \right)^2 dx \right)^{1/2} .$$

Further, the supremum-Cramér metric $\overline{\ell}_2$ is defined between two distribution functions $\eta, \mu \in \mathscr{P}(\mathbb{R})^{\mathcal{X} \times \mathcal{A}}$ by

$$\overline{\ell}_2(\eta, \mu) = \sup_{(x,a) \in \mathcal{X} \times \mathcal{A}} \ell_2\left( \eta_{(x,a)}, \mu_{(x,a)} \right) .$$
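For two distributions supported on a shared, evenly-spaced grid, the squared CDF difference is piecewise constant, so the Cramér distance reduces to a finite sum; a sketch:

```python
import numpy as np

def cramer_distance(probs_p, probs_q, delta_z):
    """Cramer distance between two categorical distributions on a shared,
    evenly-spaced support with gap delta_z: the integral of the squared CDF
    difference becomes a finite sum over the grid intervals."""
    diff = np.cumsum(probs_p) - np.cumsum(probs_q)
    return np.sqrt(delta_z * np.sum(diff ** 2))

p = np.array([1.0, 0.0])  # Dirac at the first atom
q = np.array([0.0, 1.0])  # Dirac at the second atom
print(cramer_distance(p, q, delta_z=1.0))  # 1.0
```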
The Cramér distance was recently studied as an alternative to the Wasserstein distances in the context of generative modelling [Bellemare et al., 2017b]. The Cramér distance in fact induces a useful geometric structure on the space of probability measures. We use this structure to provide a new interpretation of the heuristic projection $\Pi_{\mathcal{C}}$ as intimately connected with the Cramér distance. The salient points of this connection are stated in Proposition 1, with full mathematical details provided in the corresponding proof in the appendix. We then use this in Section 4.2 to show that $\Pi_{\mathcal{C}} \mathcal{T}^\pi$ is a contraction in $\overline{\ell}_2$.
Proposition 1. The Cramér metric $\ell_2$ endows a particular subset of $\mathscr{P}(\mathbb{R})$ with a notion of orthogonal projection, and the orthogonal projection onto the subset $\mathscr{P}$ is exactly the heuristic projection $\Pi_{\mathcal{C}}$. Consequently, $\Pi_{\mathcal{C}}$ is a non-expansion with respect to $\ell_2$.
A consequence of the result above is the following, which will be useful in later sections.
Lemma 3 (Pythagorean theorem).
Let $\mu \in \mathscr{P}$ be supported on $\{z_1, \ldots, z_K\}$, and let $\nu$ be a probability distribution supported on $[z_1, z_K]$. Then

$$\ell_2^2(\mu, \nu) = \ell_2^2(\mu, \Pi_{\mathcal{C}} \nu) + \ell_2^2(\Pi_{\mathcal{C}} \nu, \nu) \,.$$
A geometric illustration of the action of the composed operator $\Pi_{\mathcal{C}} \mathcal{T}^\pi$ is given in Figure 2, in light of the interpretation of $\Pi_{\mathcal{C}}$ as an orthogonal projection.
4.2 Parametrisation and projection
Having established these tools, we can now prove contractivity of the operator $\Pi_{\mathcal{C}} \mathcal{T}^\pi$, and hence convergence of this variant of distributional RL in the absence of stochastic approximation.
Proposition 2. The operator $\Pi_{\mathcal{C}} \mathcal{T}^\pi$ is a $\sqrt{\gamma}$-contraction in $\overline{\ell}_2$. Further, there is a unique distribution function $\eta_{\mathcal{C}} \in \mathscr{P}^{\mathcal{X} \times \mathcal{A}}$ such that, given any initial distribution function $\eta_0$, we have

$$\overline{\ell}_2\left( (\Pi_{\mathcal{C}} \mathcal{T}^\pi)^m \eta_0, \eta_{\mathcal{C}} \right) \to 0 \quad \text{as } m \to \infty \,.$$
A natural question to ask is how the limiting distribution function $\eta_{\mathcal{C}}$, established in Proposition 2, differs from the true distribution function $\eta^\pi$. In some sense, this quantifies the “cost” of using the parametrisation $\mathscr{P}$ rather than learning fully non-parametric probability distributions. Reusing the interpretation of $\Pi_{\mathcal{C}}$ as an orthogonal projection, and using a geometric series argument, we may establish the following result, which echoes existing results for linear function approximation [Tsitsiklis and Van Roy, 1997].
Proposition 3. Let $\eta_{\mathcal{C}}$ be the limiting return distribution function of Proposition 2. If $\eta^\pi_{(x,a)}$ is supported on $[z_1, z_K]$ for all $(x, a) \in \mathcal{X} \times \mathcal{A}$, then we have:

$$\overline{\ell}_2^2(\eta_{\mathcal{C}}, \eta^\pi) \leq \frac{1}{1 - \gamma} \max_{1 \leq k < K} (z_{k+1} - z_k) \,.$$
This establishes that as the fineness of the grid increases, we gradually recover the true return distribution function. The bound in Proposition 3 relies on a guarantee that the support of the true return distributions lies in the interval $[z_1, z_K]$. Many RL problems come with such a guarantee, but there are also many circumstances where a priori knowledge of the scale of rewards is unavailable. It is possible to modify the proof of Proposition 3 to deal with this situation too.
Proposition 4. Let $\eta_{\mathcal{C}}$ be the limiting return distribution function of Proposition 2. Suppose $\eta^\pi_{(x,a)}$ is supported on an interval containing $[z_1, z_K]$ for each $(x, a)$, and $\eta^\pi_{(x,a)}(\mathbb{R} \setminus [z_1, z_K]) \leq q$ for some $q > 0$ and for all $(x, a)$ – $q$ bounds the excess mass lying outside the region $[z_1, z_K]$. Then a bound of the form given in Proposition 3 holds, with an additional term depending on $q$ and the width of the supporting interval.
4.3 Stochastic approximation and gradient updates
In this section, we leverage the theory of stochastic approximation to provide convergence guarantees for sample-based distributional RL algorithms.
We will study a version of categorical policy evaluation that takes a mixture between two distributions, rather than using a KL gradient, as a means of updating the return distribution estimates. The algorithm proceeds by computing the target distribution as in Algorithm 1, but then, rather than using the gradient of a KL loss, the updated return distribution is produced for some collection of learning rates $(\alpha_t(x, a) \mid t \geq 0, \, (x, a) \in \mathcal{X} \times \mathcal{A})$ according to the following rule:

$$\eta_{t+1, (x,a)} = (1 - \alpha_t(x, a)) \, \eta_{t, (x,a)} + \alpha_t(x, a) \, \Pi_{\mathcal{C}} (\hat{\mathcal{T}}^\pi \eta_t)_{(x,a)} \,. \quad (8)$$

That is, the new estimate is a mixture of $\eta_{t,(x,a)}$ and the projected stochastic target $\Pi_{\mathcal{C}} (\hat{\mathcal{T}}^\pi \eta_t)_{(x,a)}$. We denote this procedure as Algorithm 2, which for completeness is stated in full in Section 8 of the appendix. The question of whether convergence results hold for the KL update described in Section 3.4 remains open, and is an interesting area for further research.
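On a shared support grid, the mixture update is simply a convex combination of probability vectors; a sketch (the grid size and learning rate are illustrative):

```python
import numpy as np

def mixture_update(current_probs, target_probs, alpha):
    """Mixture update: convex combination of the current distribution and the
    projected stochastic Bellman target; the result is again a probability vector."""
    return (1.0 - alpha) * current_probs + alpha * target_probs

eta = np.array([1.0, 0.0])      # current estimate
target = np.array([0.0, 1.0])   # projected stochastic target
eta = mixture_update(eta, target, alpha=0.1)
print(eta)  # [0.9 0.1]
```

Because both inputs are probability vectors and the weights are non-negative and sum to one, no renormalisation is needed, in contrast to a raw gradient step on the probabilities.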
4.3.1 Convergence of categorical policy evaluation
We first show that, under standard conditions, categorical policy evaluation with the mixture update rule described above is guaranteed to converge to the fixed point of the projected Bellman operator , as described in Proposition 2. We sketch out the main structure of the proof below; the full argument is given in the appendix.
Theorem 1. In the context of policy evaluation for some policy $\pi$, suppose that:
1. the stepsizes $(\alpha_t(x, a))_{t \geq 0}$ satisfy the Robbins-Monro conditions:

$$\sum_{t=0}^{\infty} \alpha_t(x, a) = \infty \,, \qquad \sum_{t=0}^{\infty} \alpha_t^2(x, a) < \infty$$

almost surely, for all $(x, a) \in \mathcal{X} \times \mathcal{A}$;

2. we have initial estimates $\eta_{0,(x,a)}$ of the distribution of returns for each state-action pair $(x, a)$, each with support contained in $[z_1, z_K]$.
Then, for the updates given by Algorithm 2, in the case of evaluation of the policy $\pi$, we have almost sure convergence of $\eta_t$ to $\eta_{\mathcal{C}}$ in $\overline{\ell}_2$, where $\eta_{\mathcal{C}}$ is the limiting return distribution function of Proposition 2. That is,

$$\overline{\ell}_2(\eta_t, \eta_{\mathcal{C}}) \to 0 \quad \text{as } t \to \infty \,, \text{ almost surely.}$$
The proof follows the approach of Tsitsiklis [1994]; we combine classical stochastic approximation proof techniques with notions of stochastic dominance to prove the almost-sure convergence of the return distribution functions in $\overline{\ell}_2$. Proposition 5 is an interesting result in its own right, as it establishes a formal language to describe the monotonicity of the distributional Bellman operator, which plays an important role in control operators [e.g. Bertsekas, 2012].
We begin by showing that several variants of the Bellman operator are monotone with respect to a particular partial ordering over probability distributions known as stochastic dominance [Shaked and Shanthikumar, 1994].
Given two probability measures $\nu_1, \nu_2 \in \mathscr{P}(\mathbb{R})$, we say that $\nu_1$ stochastically dominates $\nu_2$, and write $\nu_1 \succeq \nu_2$, if there exists a coupling between $\nu_1$ and $\nu_2$ (that is, a probability measure on $\mathbb{R}^2$ with marginals given by $\nu_1$ and $\nu_2$) which is supported on the set $\{(x_1, x_2) \in \mathbb{R}^2 \mid x_1 \geq x_2\}$. An equivalent characterisation states that $\nu_1 \succeq \nu_2$ if, for the corresponding CDFs $F_{\nu_1}$ and $F_{\nu_2}$, we have $F_{\nu_1}(x) \leq F_{\nu_2}(x)$ for all $x \in \mathbb{R}$.
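For distributions on a shared increasing support, the CDF characterisation gives a direct test for stochastic dominance; a sketch:

```python
import numpy as np

def stochastically_dominates(probs_p, probs_q):
    """Check whether p stochastically dominates q on a shared increasing support,
    via the CDF characterisation: F_p(z) <= F_q(z) at every atom."""
    return bool(np.all(np.cumsum(probs_p) <= np.cumsum(probs_q) + 1e-12))

# A distribution with more mass on larger atoms dominates one with more mass on smaller atoms.
print(stochastically_dominates(np.array([0.2, 0.8]), np.array([0.7, 0.3])))  # True
```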
Stochastic dominance forms a partial order over the set $\mathscr{P}(\mathbb{R})$. We introduce a related partial order over the space of return distribution functions $\mathscr{P}(\mathbb{R})^{\mathcal{X} \times \mathcal{A}}$, which we refer to as (element-wise) stochastic dominance. Given $\eta, \mu \in \mathscr{P}(\mathbb{R})^{\mathcal{X} \times \mathcal{A}}$, we say that $\eta$ stochastically dominates $\mu$ element-wise if, for each $(x, a) \in \mathcal{X} \times \mathcal{A}$, $\eta_{(x,a)}$ stochastically dominates $\mu_{(x,a)}$.
Proposition 5. The distributional Bellman operator $\mathcal{T}^\pi$ is a monotone map with respect to the partial ordering on $\mathscr{P}(\mathbb{R})^{\mathcal{X} \times \mathcal{A}}$ given by element-wise stochastic dominance. Further, the Cramér projection $\Pi_{\mathcal{C}}$ is a monotone map, from which it follows that the Cramér-Bellman operator $\Pi_{\mathcal{C}} \mathcal{T}^\pi$ is also monotone.
The monotonicity of the mappings described in Proposition 5 can then be harnessed to establish a chain of lemmas, given in the appendix, mirroring the chain of reasoning in Tsitsiklis [1994], from which Theorem 1 will follow. In the remainder of this section, we highlight a further important property of the Cramér projection which is crucial in establishing Theorem 1.
The mixture update rule above may be rewritten as

$$\eta_{t+1, (x,a)} = \eta_{t, (x,a)} + \alpha_t(x, a) \left( \Pi_{\mathcal{C}} \mathcal{T}^\pi \eta_t - \eta_t \right)_{(x,a)} + \alpha_t(x, a) \left( \Pi_{\mathcal{C}} \hat{\mathcal{T}}^\pi \eta_t - \Pi_{\mathcal{C}} \mathcal{T}^\pi \eta_t \right)_{(x,a)}$$

for all $t \geq 0$ and all $(x, a) \in \mathcal{X} \times \mathcal{A}$, given that $\alpha_t(x, a) = 0$ if the state-action pair $(x, a)$ is not selected for update at time $t$. The second term, $\alpha_t(x, a) (\Pi_{\mathcal{C}} \mathcal{T}^\pi \eta_t - \eta_t)_{(x,a)}$, may be interpreted as a damped version of the full distributional Bellman update, whilst the third term, $\alpha_t(x, a) (\Pi_{\mathcal{C}} \hat{\mathcal{T}}^\pi \eta_t - \Pi_{\mathcal{C}} \mathcal{T}^\pi \eta_t)_{(x,a)}$, represents the noise introduced by stochastic approximation. We observe that this noise term is in fact a difference of two probability distributions (one of which is a random measure); thus, this noise term is a particular instance of a random signed measure. The Cramér projection leads to an important property of this signed measure, which is crucial in establishing the result of Theorem 1, summarised in Lemma 4.
Lemma 4. The noise term

$$\left( \Pi_{\mathcal{C}} \hat{\mathcal{T}}^\pi \eta_t - \Pi_{\mathcal{C}} \mathcal{T}^\pi \eta_t \right)_{(x,a)}$$

is a random signed measure with total mass $0$ almost surely, and with the property that, when averaged over the next-step reward, state and action tuple $(r, x', a^*)$, it is equal to the zero measure almost surely:

$$\mathbb{E}\left[ \left( \Pi_{\mathcal{C}} \hat{\mathcal{T}}^\pi \eta_t - \Pi_{\mathcal{C}} \mathcal{T}^\pi \eta_t \right)_{(x,a)} \,\Big|\, \eta_t \right] = 0 \,,$$

for all $(x, a) \in \mathcal{X} \times \mathcal{A}$.
4.3.2 Convergence of categorical Q-learning
Having established convergence of categorical policy evaluation in Theorem 1, we now leverage this to prove convergence of categorical Q-learning under similar conditions.
Theorem 2. Suppose that Assumptions 1–2 of Theorem 1 hold, and that all unprojected target distributions arising in Algorithm 2 are supported within $[z_1, z_K]$ almost surely. Assume further that there is a unique optimal policy $\pi^*$ for the MDP. Then, for the updates given in Algorithm 2, in the case of control, we have almost sure convergence of $\eta_t$ in $\overline{\ell}_2$ to some limit $\eta^*_{\mathcal{C}}$, and furthermore the greedy policy with respect to $\eta^*_{\mathcal{C}}$ is the optimal policy $\pi^*$.
Theorem 2 is particularly interesting because it demonstrates that value-based control is not only stable in the distributional case, but also that CDRL preserves the optimal policy. This is not a given: for example, if we were to replace $\Pi_{\mathcal{C}}$ with a nearest-neighbour-type projection, we could not provide the same guarantee. What makes the CDRL projection step special in this regard is that it preserves the expected value of the unprojected target.
5 Discussion

The C51 algorithm was empirically successful but, as we have seen in Lemma 2, is not explained by the initial theoretical results concerning CDRL of Bellemare et al. [2017a]. We have now shown that the projected distributional Bellman operator used in CDRL inherits convergence guarantees from a different metric altogether, the Cramér distance. From Propositions 3 and 4, we see that the limiting approximation error is controlled by the granularity of the parametric distribution and the discount factor $\gamma$. Furthermore, we have shown that in the stochastic approximation setting this update converges both for policy evaluation and control.
An important aspect of our analysis is the role of the projection onto the set of parametrised distributions, in distributional RL. Just as existing work has studied the role of the projected Bellman operator in function approximation [Tsitsiklis and Van Roy, 1997], there is a corresponding importance for considering the effects of the projection in distributional RL.
5.1 Function approximation
Our theoretical results in Section 4 treat the problem of tabular distributional RL, with an approximate parametrised distribution for each state-action pair. Theoretical understanding of function approximation in RL has been the focus of much research, and has significantly improved our understanding of agent behaviour. Although we believe the effects of function approximation on distributional RL are of great theoretical and empirical interest, we leave the function approximation setting as an interesting direction for future work.
5.2 Theoretically grounded algorithms
Turning theoretical results into practical algorithms can often be quite challenging. However, our results do suggest some immediate directions for potential improvements to C51. First, the convergence results for stochastic approximation suggest that an improved algorithm could be obtained by either directly minimising the Cramér distance or through a regularised KL minimisation that more closely reflects the mixture updates in Section 4.3. Second, the results of Propositions 3 and 4 indicate that if our support is densely focused around the true range of returns we should expect significantly better performance, due to the effects of the discount factor. Improving this by either prior domain knowledge or adapting the support to reflect the true return range could yield much better empirical performance.
6 Conclusion

In this paper we have introduced a framework for distributional RL algorithms, and provided convergence analysis of recently proposed algorithms. We have introduced the notion of the projected distributional Bellman operator and argued for its importance in the theory of distributional RL.
Interesting future directions from an empirical perspective include exploring the space of possible distributional RL algorithms set out in Section 3. From a theoretical perspective, the issue of how function approximation interacts with distributional RL remains an important open question.
Acknowledgements

The authors acknowledge the important contributions of their colleagues at DeepMind. Special thanks to Wojciech Czarnecki, Chris Maddison, Ian Osband and Grzegorz Swirszcz for their early suggestions and discussions. Thanks also to Clare Lyle for useful comments.
References

- Bellemare et al. [2017a] M. G. Bellemare, W. Dabney, and R. Munos. A Distributional Perspective on Reinforcement Learning. In Proceedings of the 34th International Conference on Machine Learning (ICML), 2017a.
- Bellemare et al. [2017b] M. G. Bellemare, I. Danihelka, W. Dabney, S. Mohamed, B. Lakshminarayanan, S. Hoyer, and R. Munos. The Cramer Distance as a Solution to Biased Wasserstein Gradients. arXiv, 2017b.
- Bellman [1957] R. Bellman. Dynamic Programming. Princeton University Press, Princeton, NJ, USA, 1st edition, 1957.
- Bertsekas [2012] D. P. Bertsekas. Dynamic Programming and Optimal Control, Vol. II: Approximate Dynamic Programming. Athena Scientific, 2012.
- Bertsekas and Tsitsiklis [1996] D. P. Bertsekas and J. N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, 1st edition, 1996.
- Billingsley [1986] P. Billingsley. Probability and Measure. John Wiley and Sons, 2nd edition, 1986.
- Dabney et al. [2018] W. Dabney, M. Rowland, M. G. Bellemare, and R. Munos. Distributional Reinforcement Learning with Quantile Regression. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
- Jaakkola et al. [1994] T. Jaakkola, M. I. Jordan, and S. P. Singh. On the Convergence of Stochastic Iterative Dynamic Programming Algorithms. Neural Computation, 6(6):1185–1201, 1994.
- Kushner and Yin [2003] H. Kushner and G. Yin. Stochastic Approximation and Recursive Algorithms and Applications. Springer, 2003.
- Morimura et al. [2010a] T. Morimura, M. Sugiyama, H. Kashima, H. Hachiya, and T. Tanaka. Nonparametric Return Distribution Approximation for Reinforcement Learning. In Proceedings of the 27th International Conference on Machine Learning (ICML), 2010a.
- Morimura et al. [2010b] T. Morimura, M. Sugiyama, H. Kashima, H. Hachiya, and T. Tanaka. Parametric Return Density Estimation for Reinforcement Learning. In Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI), 2010b.
- Prashanth and Ghavamzadeh [2013] L. A. Prashanth and M. Ghavamzadeh. Actor-Critic Algorithms for Risk-Sensitive MDPs. In Advances in Neural Information Processing Systems (NIPS), pages 252–260, 2013.
- Rummery and Niranjan [1994] G. A. Rummery and M. Niranjan. On-line Q-learning using Connectionist Systems. Technical report, Cambridge University Engineering Department, 1994.
- Shaked and Shanthikumar [1994] M. Shaked and J. G. Shanthikumar. Stochastic Orders and their Applications. Academic Press, 1994.
- Sutton and Barto [1998] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
- Sutton et al. [1999] R. S. Sutton, D. McAllester, S. Singh, and Y. Mansour. Policy Gradient Methods for Reinforcement Learning with Function Approximation. In Proceedings of the 12th International Conference on Neural Information Processing Systems (NIPS), 1999.
- Székely [2002] G. J. Székely. E-statistics: The Energy of Statistical Samples. Technical report, Department of Mathematics and Statistics, Bowling Green State University, 2002.
- Tamar et al. [2016] A. Tamar, D. Di Castro, and S. Mannor. Learning the Variance of the Reward-to-go. Journal of Machine Learning Research, 17(1):361–396, 2016.
- Tsitsiklis [1994] J. N. Tsitsiklis. Asynchronous Stochastic Approximation and Q-Learning. Machine Learning, 16(3):185–202, 1994.
- Tsitsiklis and Van Roy [1997] J. N. Tsitsiklis and B. Van Roy. An Analysis of Temporal-Difference Learning with Function Approximation. IEEE Transactions on Automatic Control, 42(5):674–690, 1997.
7 A supporting result
We first present an alternative characterisation of the projection operator $\Pi_{\mathcal{C}}$ which will be useful for the analysis that follows. Throughout, for a probability measure $\nu$, we write $F_\nu$ for its CDF.
Proposition 6. For each $k \in \{1, \ldots, K\}$, define $h_{z_k} : \mathbb{R} \to [0, 1]$ to be the (possibly asymmetric) hat function centred at $z_k$ defined by

$$h_{z_k}(x) = \begin{cases} 1 & x \leq z_1 \text{ and } k = 1 \\ \dfrac{x - z_{k-1}}{z_k - z_{k-1}} & z_{k-1} < x \leq z_k \\ \dfrac{z_{k+1} - x}{z_{k+1} - z_k} & z_k < x \leq z_{k+1} \\ 1 & x > z_K \text{ and } k = K \\ 0 & \text{otherwise.} \end{cases}$$

Then defining

$$\Pi_{\mathcal{C}} \nu = \sum_{k=1}^{K} \left( \int_{\mathbb{R}} h_{z_k}(x) \, \nu(dx) \right) \delta_{z_k}$$

for all probability distributions $\nu$ is consistent with the earlier definition in (7) for mixtures of Diracs. Further, $F_{\Pi_{\mathcal{C}} \nu}(z_k)$ is equal to the average value of $F_\nu$ in the interval $[z_k, z_{k+1}]$, for $k = 1, \ldots, K - 1$, and $F_{\Pi_{\mathcal{C}} \nu}(z_K) = 1$.
The consistency of the definition with (7) follows immediately by observing directly that the definitions agree when $\nu$ is a Dirac measure, and then observing that the definition of $\Pi_{\mathcal{C}}$ in the statement of the proposition is also affine in $\nu$.
For the characterisation of $F_{\Pi_{\mathcal{C}} \nu}(z_k)$ for $k < K$, we note that

$$F_{\Pi_{\mathcal{C}} \nu}(z_k) = \sum_{j=1}^{k} \int_{\mathbb{R}} h_{z_j}(x) \, \nu(dx) = \int_{\mathbb{R}} g_k(x) \, \nu(dx) = \frac{1}{z_{k+1} - z_k} \int_{z_k}^{z_{k+1}} F_\nu(x) \, dx \,,$$

where $g_k = \sum_{j=1}^{k} h_{z_j}$ equals $1$ on $(-\infty, z_k]$, decreases linearly to $0$ on $[z_k, z_{k+1}]$, and equals $0$ on $(z_{k+1}, \infty)$; the final equality follows by integration by parts,
as required. Finally, since $\Pi_{\mathcal{C}} \nu$ is supported on $\{z_1, \ldots, z_K\}$, it immediately follows that $F_{\Pi_{\mathcal{C}} \nu}(z_K) = 1$. ∎
8 Mixture update version of categorical policy evaluation and categorical Q-learning
9 Proof of results in Section 4
Proof of Lemma 2. We exhibit a simple counterexample; it is enough to demonstrate that $\Pi_{\mathcal{C}}$ can act as an expansion in $d_p$ for $p > 1$. Take $z_1 = 0$, $z_2 = 1$, and consider the two Dirac delta distributions $\nu_1 = \delta_{1/2}$ and $\nu_2 = \delta_{1/2 + \varepsilon}$ for some small $\varepsilon > 0$. We have $d_p(\nu_1, \nu_2) = \varepsilon$. Now $\Pi_{\mathcal{C}} \nu_1 = \frac{1}{2} \delta_0 + \frac{1}{2} \delta_1$, and $\Pi_{\mathcal{C}} \nu_2 = \left( \frac{1}{2} - \varepsilon \right) \delta_0 + \left( \frac{1}{2} + \varepsilon \right) \delta_1$, so that $d_p(\Pi_{\mathcal{C}} \nu_1, \Pi_{\mathcal{C}} \nu_2) = \varepsilon^{1/p} > \varepsilon$ for $p > 1$ and $\varepsilon < 1$, and hence $\Pi_{\mathcal{C}}$ is an expansion in this instance. ∎
Proof of Proposition 1. We begin by setting out a Hilbert space structure on a subset of $\mathscr{P}(\mathbb{R})$. Let $M$
be the vector space of all finite signed measures on $\mathbb{R}$. First, observe that the following subspace of signed measures:
where for each , is isometrically isomorphic to a subspace of the Hilbert space with inner product given by
Now consider the affine space (i.e. the translation of in by the measure ). This affine space consists of signed measures of total mass , with sufficiently quickly decaying tails. In particular, it contains the set of probability measures satisfying
As is an affine translation of a Hilbert space, it inherits the inner product defined in (10) from , which is now defined for differences of elements. Now consider the affine subspace consisting of measures supported on . It is clear that this is a closed affine subspace (since it is finite-dimensional), and therefore there exists an orthogonal projection (with respect to the inner product defined above) onto this subspace, which we denote by . Given a probability measure , , where the satisfy , and subject to this constraint, minimise . But note that
By construction, is constant on the open intervals for , and also on the intervals and . Therefore , and hence itself, is determined by the values of for . The optimal values (i.e. those minimising (11)) are easily verified to be: , and is equal to the average of on the interval , for . Note then that is a probability distribution (since is non-decreasing), and in fact matches the characterisation of obtained in Proposition 6. Therefore we have established that is exactly orthogonal projection in the affine Hilbert space