A Tensor Network Approach to Finite Markov Decision Processes

02/12/2020, by Edward Gillman et al.

Tensor network (TN) techniques - often used in the context of quantum many-body physics - have shown promise as a tool for tackling machine learning (ML) problems. The application of TNs to ML, however, has mostly focused on supervised and unsupervised learning. Yet, with their direct connection to hidden Markov chains, TNs are also naturally suited to Markov decision processes (MDPs) which provide the foundation for reinforcement learning (RL). Here we introduce a general TN formulation of finite, episodic and discrete MDPs. We show how this formulation allows us to exploit algorithms developed for TNs for policy optimisation, the key aim of RL. As an application we consider the issue - formulated as an RL problem - of finding a stochastic evolution that satisfies specific dynamical conditions, using the simple example of random walk excursions as an illustration.







1 Introduction

In recent years, machine learning methods have found increasing application in various branches of physics, such as the detection of phase transitions

(Torlai & Melko, 2016; Rem et al., 2019) or variational methods for quantum theory (Nagy & Savona, 2019; Vicentini et al., 2019; Hartmann & Carleo, 2019; Yoshioka & Hamazaki, 2019). Similarly, techniques and concepts from physics have found application in machine learning. A fruitful example is tensor networks (TNs) (Montangero, 2018). Originally applied in quantum many-body physics as variational approaches for ground-state approximation, TNs provide a way to efficiently parametrise important low-dimensional manifolds – such as those with short-range correlations – in otherwise unmanageably high-dimensional spaces (Eisert et al., 2010; Eisert, 2013; Brandão & Horodecki, 2015; Huang, 2019). Together with a powerful set of optimisation algorithms - known broadly as the density-matrix renormalisation group (DMRG) (White, 1992, 1993; Schollwöck, 2011) - TN methods have become state-of-the-art in several areas of physics, and continue to develop rapidly in others (Orús, 2019).

Given the nature of the problems tackled by TNs in physics, it is natural to consider them for machine learning problems, where similar issues of finding and optimising efficient parametrisations of relevant manifolds in high-dimensional spaces are key. Indeed, TNs have been applied to image classification (Stoudenmire & Schwab, 2016; Sun et al., 2019; Efthymiou et al., 2019), unsupervised learning (Han et al., 2018; Stoudenmire, 2018), deep learning (Levine et al., 2019; Gao et al., 2019) and probabilistic graphical models (Glasser et al., 2018, 2019).

Despite this rapid progress, little has been done in applying TNs to reinforcement learning (RL) (Sutton & Barto, 2018), one of the largest fields of machine learning. Utilised for the solution of games (Mnih et al., 2015; Silver et al., 2016) or the training of robots (Schulman et al., 2015; Haarnoja et al., 2018), RL is often modelled using the formalism of Markov decision processes (MDPs), consisting of repeated updates according to an agent’s decision-making policy and the dynamics of an environment it inhabits. Due to the clear product structure of the trajectory probabilities generated by an MDP, it is natural to frame such problems as TNs. Given that TNs are typically applied to problems of numerical optimisation, e.g. via DMRG, this alternative perspective could lead to novel approaches to policy optimisation, or suggest useful structures for function approximations.

Here, we introduce a TN formulation of MDPs - specifically a representation of the expected return in finite MDPs (FMDPs) - and consider how a simple DMRG-inspired algorithm can be applied to policy optimisation. Since MDPs are closely related to conditioned stochastic dynamics (Majumdar & Orland, 2015), which have already been treated with the TN formalism (Garrahan, 2016), we use this setting to illustrate the construction and the corresponding optimisation algorithm. As a specific example, we consider an elementary problem of conditioned stochastic dynamics, the generation of stochastic excursions, which can be phrased as an FMDP and solved exactly using the DMRG method introduced.

The layout of the paper is as follows: In Sect. 2, after briefly discussing FMDPs, we outline the TN formalism applied to some relevant examples from stochastic dynamics, such as hidden Markov models (HMMs) and the representation of time-integrated observables. The TN formalism for the FMDP is described in Sect. 3, while in Sect. 4 we discuss how a DMRG-type algorithm can be used to solve policy optimisation numerically. This is illustrated in Sect. 5 with an application to conditioned dynamics. We conclude in Sect. 6, discussing a number of possible extensions where simplifying features of the cases considered are lost.

2 Finite Markov Decision Processes and Tensor Networks for Dynamics

2.1 Finite Markov Decision Processes

In a discrete-time, episodic MDP, individual trajectories take the form,

$$\tau = (s_0, a_0, r_1, s_1, a_1, r_2, \ldots, a_{T-1}, r_T, s_T),$$

where $s_t$, $a_t$ and $r_t$ are random variables for the state, action and reward, taking values $s$, $a$ and $r$ respectively. The termination time, $T$, can also be a random variable, though we consider it fixed for simplicity. At $t = T$, when the episode ends, the state of the system is a terminal state. We will assume this to be unique and denote it $s_\emptyset$, so that $s_T = s_\emptyset$ for all trajectories. We will further assume that all random variables take on values from ($t$-independent) finite sets, $s \in \mathcal{S}$, $a \in \mathcal{A}$ and $r \in \mathcal{R}$. We will call this scenario an FMDP.

In an FMDP, the probabilities dictating the dynamics are:

$$p(s', r \,|\, s, a) = \mathrm{Pr}(s_t = s', r_t = r \,|\, s_{t-1} = s, a_{t-1} = a),$$

where $s', s \in \mathcal{S}$, $a \in \mathcal{A}$, $r \in \mathcal{R}$, and $p$ is the function defining the dynamics from one time-step to the next. This obeys the normalisation condition,

$$\sum_{s' \in \mathcal{S}} \sum_{r \in \mathcal{R}} p(s', r \,|\, s, a) = 1.$$

At the beginning of the episode, the initial state distribution is given by,

$$\mu(s) = \mathrm{Pr}(s_0 = s).$$

At the termination time, when the episode ends, the system enters the unique terminal state, $s_T = s_\emptyset$, with probability one.
The decision component of the FMDP is contained in the probability of selecting a particular action conditioned on a given state,

$$\pi_t(a \,|\, s) = \mathrm{Pr}(a_t = a \,|\, s_t = s),$$

where the function $\pi_t$ is known as the policy (labelled by the time-step $t$), and is normalised,

$$\sum_{a \in \mathcal{A}} \pi_t(a \,|\, s) = 1.$$

The return of an episode, $G$, is defined as the sum of the rewards,

$$G = \sum_{t=1}^{T} r_t.$$

An optimal policy, $\pi^*$, is then a policy that maximises the expected return,

$$\pi^* = \underset{\pi}{\arg\max}\; \mathbb{E}_\pi[G].$$

Policy optimisation for a given FMDP is thus a high-dimensional constrained optimisation problem: The total number of parameters is $T \times |\mathcal{S}| \times |\mathcal{A}|$, and each $\pi_t$ is constrained to lie on the manifold of stochastic matrices, i.e. they obey Eq. (7) and have elements in $[0, 1]$.
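As a concrete anchor for these definitions, the expected return of a small FMDP can be computed by brute force. The sketch below is illustrative only (the toy dynamics and all variable names are our own, not from the paper): it enumerates every trajectory of a random two-state, two-action FMDP and accumulates probability-weighted returns.

```python
import itertools
import numpy as np

# Toy FMDP: nS states, nA actions, fixed horizon T.
nS, nA, T = 2, 2, 3
rng = np.random.default_rng(0)

# p[s_next, r_idx, s, a]: joint distribution over (next state, reward),
# normalised over those first two axes for every (s, a) pair.
rewards = np.array([0.0, 1.0])
p = rng.random((nS, len(rewards), nS, nA))
p /= p.sum(axis=(0, 1), keepdims=True)

# pi[a, s]: a (time-independent, for brevity) stochastic policy.
pi = rng.random((nA, nS))
pi /= pi.sum(axis=0, keepdims=True)

mu0 = np.array([1.0, 0.0])  # initial state distribution

def expected_return(pi):
    """Brute-force E[sum_t r_t] by enumerating all trajectories."""
    total = 0.0
    for s0 in range(nS):
        # each step contributes an (action, reward index, next state) choice
        for path in itertools.product(
                range(nA), range(len(rewards)), range(nS), repeat=T):
            prob, ret, s = mu0[s0], 0.0, s0
            for t in range(T):
                a, ri, s_next = path[3 * t], path[3 * t + 1], path[3 * t + 2]
                prob *= pi[a, s] * p[s_next, ri, s, a]
                ret += rewards[ri]
                s = s_next
            total += prob * ret
    return total

J = expected_return(pi)
```

The TN formulation below reorganises exactly this sum, whose cost here is exponential in $T$, into a chain of cheap local contractions.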

2.2 Tensor Networks for Hidden Markov Models

A TN is a collection of tensors contracted together in a given pattern, typically specified by a graph. An elementary example of this is a chain of matrices $M_t$ applied to a vector $|P_0\rangle$, written in the braket notation common to physics,

$$|P_T\rangle = M_T \cdots M_2 M_1 |P_0\rangle,$$

where $|P_0\rangle = \sum_s P_0(s)\,|s\rangle$ for some coefficients $P_0(s)$, and the vectors $|s\rangle$ associated to each state form a basis of a vector space $\mathcal{V}$. Since the matrices are rank-$2$ tensors and the vector a rank-$1$ tensor, this can be considered a TN consisting of $T + 1$ tensors, where the contraction pattern is given by the usual matrix products. Performing such a contraction produces a new vector, $|P_T\rangle$, with components,

$$P_T(s) = \langle s | P_T \rangle,$$

where $\langle s|$ are dual basis vectors such that the dot product $\langle s | s' \rangle = \delta_{s s'}$. In this sense, we can say that this product is a TN representation (TNR) of the vector $|P_T\rangle$.

While the TN structure of Eq. (11) is simple and requires no clarification, more generally it is convenient to specify the contraction pattern of a TN via a graph, using a standard diagrammatic notation. In such a notation, rank-$k$ tensors are represented as shapes with $k$ legs, and contractions are indicated by joining the appropriate legs together. In this notation, Eq. (11) reads,


As suggested by the chosen notation, Eq. (11) is exactly the TNR for a probability distribution over states produced by a Markovian dynamics. In that case, the components of the matrices $M_t$ are equal to the probabilities of state transitions,

$$\langle s' | M_t | s \rangle = P(s_t = s' \,|\, s_{t-1} = s),$$

while the components of $|P_0\rangle$ give the initial probability distribution over states. Any vector $|P\rangle$ that is a convex combination of the particular basis $\{|s\rangle\}$ can be interpreted as a probability distribution over states via its components $P(s) = \langle s | P \rangle$. This implies the normalisation $\langle 1 | P \rangle = 1$, where $\langle 1|$ is the flat-vector for the basis, $\langle 1 | = \sum_s \langle s |$. The matrices, $M_t$, which can be considered as elements of $\mathcal{V} \otimes \mathcal{V}^*$, obey a related condition,

$$\langle 1 | M_t = \langle 1 |,$$

which allows for the interpretation of their components as conditional probabilities.
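The stochastic-matrix condition just discussed is easy to check numerically. A minimal sketch (variable names are our own): a column-stochastic matrix applied repeatedly to a probability vector preserves its overlap with the flat-vector.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

# Column-stochastic transition matrix: M[s', s] = P(s' | s), columns sum to 1.
M = rng.random((n, n))
M /= M.sum(axis=0, keepdims=True)

# Random initial probability distribution over n states.
P0 = rng.random(n)
P0 /= P0.sum()

flat = np.ones(n)        # the flat-vector <1|
P3 = M @ M @ M @ P0      # three steps of Markovian dynamics
```

Since `flat @ M == flat`, the overlap `flat @ P3` remains exactly one after any number of steps, so `P3` is again a valid distribution.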

A more complicated TNR relevant for dynamics is offered by the matrix product state (MPS) representation - also known as the tensor train decomposition (Oseledets, 2011) - of HMMs. In a system described by an HMM, the dynamics is Markovian and thus governed by transition matrices $M_t$. However, information about the state, $s_t$, cannot be accessed directly, and only partial information is revealed through the observables at each time step. We will denote these observables as $r_t$, since they will correspond to the rewards in the FMDPs considered later.

In HMMs, the relevant probabilities are then $P(s', r \,|\, s)$, which are related to $P(s' \,|\, s)$ through marginalisation;

$$P(s' \,|\, s) = \sum_{r} P(s', r \,|\, s).$$

The marginalisation Eq. (15) can also be viewed as a decomposition of the matrices $M$, corresponding to $P(s' \,|\, s)$, into a sum of matrices $M_r$, with components $\langle s' | M_r | s \rangle = P(s', r \,|\, s)$,

$$M = \sum_r M_r.$$

This decomposition implies the introduction of an encoding of the possible observations, $r$, into vectors $|r\rangle$, which can be achieved in the same way as for states. Taking tensor products of these vectors produces an encoding for the possible observations over time, $|r_1\rangle \otimes \cdots \otimes |r_T\rangle$, which form a basis of a vector space $\mathcal{W}$. The vectors representing the probability distributions over observations are elements of this space, $|P\rangle \in \mathcal{W}$, and obey the normalisation $\langle 1 | P \rangle = 1$.

To build a TNR for the HMM, one can consider the set of matrices, $\{M_r\}$, as a rank-$3$ tensor, which is an element of $\mathcal{V} \otimes \mathcal{V}^* \otimes \mathcal{W}$. The decomposition, Eq. (16), can then be expressed as an equation relating tensors, graphically representing the flat-vector as a vertical line. Since the flat-vector, $\langle 1|$, causes the marginalisation over observations, one can remove the marginalisation by removing this vector. Thus, one finds that the probability of making a particular set of observations in an HMM can be expressed as,

$$P(r_1, \ldots, r_T) = \langle 1 |\, M_{r_T} \cdots M_{r_1} | P_0 \rangle.$$

This is exactly the structure of an MPS (Schollwöck, 2011), and gives the TNR of the observation probabilities in an HMM.
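This product structure can be verified directly. The sketch below (a toy kernel; names are our own) splits a random transition kernel into reward-resolved matrices and checks that the matrix-product probabilities of all observation strings sum to one.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
n, nR, T = 3, 2, 4

# Joint kernel K[r, s', s] = P(s', r | s); marginalising r recovers M.
K = rng.random((nR, n, n))
K /= K.sum(axis=(0, 1), keepdims=True)
M_r = [K[r] for r in range(nR)]

P0 = rng.random(n)
P0 /= P0.sum()
flat = np.ones(n)

def prob(obs):
    """P(r_1, ..., r_T) = <1| M_{r_T} ... M_{r_1} |P_0>."""
    v = P0
    for r in obs:               # contract the chain one site at a time
        v = M_r[r] @ v
    return flat @ v

# Summing over every observation string restores the flat-vector identity.
total = sum(prob(obs) for obs in itertools.product(range(nR), repeat=T))
```

Summing the blocks gives back the stochastic matrix `M`, so the total probability over all `nR**T` strings is one, as the flat-vector contraction requires.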


2.3 Tensor Networks for Time Integrated Observables

When considering dynamics described by an HMM, one is often interested in time-integrated observables. Such objects can be represented easily in terms of TNs, which in turn allows for the TNR of averages or higher-order moments.

To specialise to the case of MDPs, we will consider an HMM where we observe a reward, $r_t$, at each discrete time-step, $t$, and wish to represent the expected return, $\mathbb{E}[G]$, as a TN. To achieve this, one begins by introducing the operator, $\hat{R}$, which is diagonal in the encoding basis,

$$\hat{R} = \sum_r r\, |r\rangle \langle r|,$$

such that it can be used to produce the explicit values observed at a given time.

To encode the return of an episode, $G$, one defines the operator, $\hat{G}$, acting on the vector space spanned by the encoded observation sequences,

$$\hat{G} = \sum_{t=1}^{T} \hat{R}_t,$$

where $\hat{R}_t$ is the operator acting as $\hat{R}$ on the $t$-th vector space in the tensor product and as identity otherwise.

An appropriate TNR for $\hat{G}$ is offered by the matrix product operator (MPO) (Perez-Garcia et al., 2007; Crosswhite & Bacon, 2008; Pirvu et al., 2010). The MPO is defined analogously to the MPS,


This has the same contraction pattern as the MPS, as is clear from the corresponding graphical equation, e.g.,


The sets of matrices appearing in the MPO can be chosen using standard construction methods (Schollwöck, 2011): Consider the operator-valued matrix,

$$W = \begin{pmatrix} \mathbb{1} & 0 \\ \hat{R} & \mathbb{1} \end{pmatrix}.$$

When multiplied, components of such matrices are to be combined via tensor products:

$$(W W')_{ab} = \sum_{c} W_{ac} \otimes W'_{cb}.$$

Further defining the operator-valued boundary-vectors:

$$l = \begin{pmatrix} 0 & \mathbb{1} \end{pmatrix}, \qquad r = \begin{pmatrix} \mathbb{1} \\ 0 \end{pmatrix},$$

the return operator for the whole episode, $\hat{G}$, can be represented as a product of such matrices,

$$\hat{G} = l\, W \cdots W\, r.$$

In the encoding basis, this corresponds to the previously defined MPO form (22) with matrices:


With this definition, the TNR of the expected return can be obtained directly from its expression in terms of operators and vectors,

$$\mathbb{E}[G] = \langle 1 | \hat{G} | P \rangle.$$

Contracting together the constituent TNRs of $\langle 1|$, $\hat{G}$ and $|P\rangle$ gives the overall TN expression.


Similarly, representations of higher-order observables such as $\mathbb{E}[G^2]$ can be constructed directly from,

$$\mathbb{E}[G^2] = \langle 1 | \hat{G}^2 | P \rangle,$$

as can TNRs of any other observables for which there are MPO representations of the relevant operators.
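The boundary-vector construction above can be checked on a small example. The following sketch is a standard MPO exercise (not code from the paper): it builds the return operator both explicitly, as a sum of single-site terms, and via products of the operator-valued matrix W, and confirms the two agree.

```python
import numpy as np
from functools import reduce

nR, T = 2, 3
R = np.diag([0.0, 1.0])     # reward operator, diagonal in the encoding basis
I = np.eye(nR)
Z = np.zeros((nR, nR))

def kron_list(ops):
    return reduce(np.kron, ops)

# Explicit sum operator: R_t acts as R on slot t, identity elsewhere.
R_tot = sum(kron_list([R if k == t else I for k in range(T)])
            for t in range(T))

def omat_mul(A, B):
    """Multiply 2x2 operator-valued matrices, combining blocks with np.kron."""
    return [[sum(np.kron(A[a][b], B[b][c]) for b in range(2))
             for c in range(2)] for a in range(2)]

# W = [[1, 0], [R, 1]] as a 2x2 array of operator blocks.
W = [[I, Z], [R, I]]
prod = W
for _ in range(T - 1):
    prod = omat_mul(prod, W)

# Boundary vectors (0, 1) on the left and (1, 0)^T on the right select the
# lower-left block, which is the full return operator.
R_mpo = prod[1][0]
```

With `R = diag(0, 1)` the return operator simply counts how many slots carry reward one, so its largest eigenvalue is `T`.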

3 Tensor Network Representations for FMDPs

As with HMMs, the expected return of an FMDP can be expressed as a TN by using an MPS representation of $|P\rangle$ and an MPO representation of $\hat{G}$. However, in order to perform policy optimisation using the tools developed for TNs, the dependence of $\mathbb{E}[G]$ on the policy, via that of $|P\rangle$, must be extracted explicitly.

Similar to when moving from Markovian dynamics to HMMs, this can be achieved by relating the relevant probabilities in the FMDP to those of the HMM via marginalisation;

$$P(s', r \,|\, s) = \sum_{a} p(s', r \,|\, s, a)\, \pi(a \,|\, s).$$
By expressing this as a relationship between tensors, one can extend the TNR of the observation probabilities used for HMMs, Eq. (34), to FMDPs.

To begin, we rewrite Eq. (36) as,

$$\langle s' | M_r | s \rangle = \sum_a \langle s' | M_{r, a} | s \rangle\, \pi(a \,|\, s),$$

where $\{M_{r, a}\}$ are sets of matrices labelled by both a reward $r$ and an action $a$. As for $s$ and $r$, an encoding of $a$ into vectors is implied. Note that, because the index with value $s$ appears twice but is not summed over (i.e. it is not part of a contraction), Eq. (37) does not yet provide a TN decomposition of Eq. (36) as desired, as can be seen clearly from the graphical notation;


Note that while for convenience we have used the same tensor label for both the rank-$3$ and rank-$4$ tensors, these are distinct objects, as indicated by the different colours.

To express the decomposition Eq. (37) in terms of tensors and their contractions alone, one must account for the fact that the information about the state at the previous time-step, $s_{t-1}$, is used for conditioning twice; once in the policy and once in the dynamics. In TNs, such additional conditioning requires the inclusion of copy tensors (Biamonte et al., 2011; Glasser et al., 2018). Defined with respect to a chosen basis, the components of a copy tensor have value one if all indices are equal, and zero otherwise. The copy tensor we will consider is the rank-$3$ copy tensor, whose components can be defined as a set of matrices,

$$\Delta_s = |s\rangle \langle s|.$$
For example,


Graphically, we denote the copy tensor as a black circle,


In general, including multiple copy tensors will allow the construction of a TNR for any joint probability distribution via decomposition using the chain rule. In such a TNR, each variable will be associated to a number of copy tensors equal to the number of times it is reused for conditioning. Thus, the TNR for a joint probability distribution decomposed via the chain-rule is in general a two-dimensional, hierarchical TN. In an FMDP, this structure is simplified considerably by the Markovian assumption, so that only a single state-copy tensor is required per time-step, and a one-dimensional TNR (an MPS) results.
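A rank-3 copy tensor is simple to realise numerically. In the sketch below (basis and names assumed), contracting one leg with a probability vector produces a diagonal matrix, which feeds the same state value into two separate downstream conditionings.

```python
import numpy as np

n = 3

# Rank-3 copy tensor: delta[i, j, k] = 1 iff i == j == k.
delta = np.zeros((n, n, n))
for i in range(n):
    delta[i, i, i] = 1.0

# A probability distribution over the n states.
P = np.array([0.5, 0.3, 0.2])

# Contract the third leg with P: result[i, j] = sum_k delta[i, j, k] P[k].
# The two remaining legs each carry a full copy of the state variable.
doubled = np.einsum('ijk,k->ij', delta, P)
```

The result equals `np.diag(P)`: each open leg can now be contracted against a different conditional (policy or dynamics) while referring to the same state.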

With the copy tensor defined, the decomposition Eq. (37) can be expressed in terms of tensors as,

In the graphical notation this is,


The notation for this decomposition can be further simplified by grouping the copy-tensor and the rank-$4$ tensor together, while also considering the indices corresponding to the state and action variables, $s$ and $a$, as a single compound index;



Written in this way, one can see that the rank-$3$ tensors appearing in the MPS representation of $|P\rangle$ can be considered as the contraction of a vector containing the policy components, $\pi(a \,|\, s)$, with a rank-$4$ tensor encoding the dynamics. This is exactly the MPS representation that results from the application of an operator, $\hat{p}$, expressed as an MPO, to a vector, $|\pi\rangle$, expressed as an MPS, i.e. $|P\rangle = \hat{p}\,|\pi\rangle$. Graphically, this can be seen by decomposing every tensor in the MPS expression of $|P\rangle$, Eq. (19); for example,


In this expression, the vector, $|\pi\rangle$, is represented as a product-state MPS, i.e. an MPS with bond dimension one:


or, graphically,


The vector $|\pi\rangle$ contains all information about the policy during the FMDP. Information about the dynamics is instead contained in the operator $\hat{p}$, which has the MPO representation,


where for convenience the boundary tensors are defined so as to encode the initial probability distribution, $\mu$.

With the decomposition $|P\rangle = \hat{p}\,|\pi\rangle$, the desired TNR of the expected return is complete. For example, the expected return can be written graphically as,

Figure 1: DMRG style policy optimisation: With the expected return expressed as a TN, one can perform policy optimisation by considering each tensor of $|\pi\rangle$ in turn. At each tensor, one contracts the rest of the network (the environment), which is considered fixed at this iteration. The optimal tensor is then found with respect to this environment. This is used to update the policy, and is subsequently used to calculate a new environment for the next iteration of the algorithm. The cost of the contraction is dominated by, and scales polynomially with, the size of the set of states.

4 DMRG Approach to Policy Optimisation

With an objective function expressed as a TN, an approach to optimisation that has proved effective is to optimise just one or two tensors at a time, while keeping the other tensors - the “environment” - fixed. (Note that this is distinct from what is commonly called the environment in RL, which we refer to as the system dynamics.) By passing (sweeping) through the tensors that contain the variational parameters, one can perform the optimisation iteratively. This allows for the efficient use of computational resources and has been very successful in the context of one-dimensional quantum many-body systems, where it is known as DMRG. Building on this basic idea, a large variety of techniques have been developed to perform sophisticated, state-of-the-art optimisations.

In the case of policy optimisation using a TNR of the expected return, an approach inspired by DMRG (Schollwöck, 2011) can be used, see Fig. 1. In such an approach, each tensor of $|\pi\rangle$ is visited in turn. At a given tensor, the expected reward is calculated as a linear map onto this tensor by evaluating the environment - i.e. contracting together all other tensors in the network, which are considered fixed at this iteration of the optimisation. In the case we are considering, the environment can be evaluated exactly, though in general approximations are required. The policy is then updated by finding the tensor that maximises the return for this fixed environment, subject to the desired constraints that the tensor be normalised appropriately and have components in $[0, 1]$.

In typical DMRG applications, a back-and-forth sweeping pattern is used to optimise the tensors. While in general many sweeps might be required to reach convergence, in the simple setup we consider, a single DMRG sweep backwards in time is sufficient to find the optimal policy exactly.

To see this, consider splitting the expected return into two terms at a time $t$: The first of these terms can be calculated using only the probability distribution over the first $t$ rewards. Decomposing this probability with the chain-rule forwards in time - i.e. such that conditioning occurs only on past states and actions - allows for the corresponding expectation value to be computed starting from the initial state distribution, $\mu$, which is assumed known and policy-independent, using only the policy up to time $t$. The second of these terms instead requires the distribution over the remaining rewards. Again decomposing this forwards in time, the expected value of the return can be calculated. However, in this case the initial (marginal) distribution over states is that at time $t$. Calculating this initial distribution requires knowledge of the policy until time $t$, and thus the policy of the whole episode is required.

Defining $\pi_{\geq t}$ as a vector containing all the parameters of the policy for times $t' \geq t$, the dependence on the policy of the two terms in the above decomposition implies that the optimal policy satisfies the simultaneous equations:


The optimal policy can therefore be found by first solving the second equation, thus finding the optimal $\pi_{\geq t}$, and then substituting this into the first equation and solving for the remaining parameters. Since the choice of $t$ was arbitrary in the above argument, one can proceed recursively starting from $t = T$. This is the usual one-site DMRG type algorithm, starting from the right-most site.
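In this simple setting, the single backwards sweep coincides with the familiar backward value recursion of dynamic programming. The sketch below makes that concrete on a hypothetical toy FMDP (not the paper's excursion example): with the future policy already optimal, optimising each policy tensor in turn reduces to V_t(s) = max_a E[r + V_{t+1}(s') | s, a].

```python
import numpy as np

rng = np.random.default_rng(3)
nS, nA, T = 3, 2, 5
rewards = np.array([0.0, 1.0])

# p[s_next, r_idx, s, a]: joint kernel, normalised over (s_next, r_idx).
p = rng.random((nS, len(rewards), nS, nA))
p /= p.sum(axis=(0, 1), keepdims=True)

V = np.zeros(nS)                       # value at the terminal time
policy = np.zeros((T, nS), dtype=int)  # greedy action per (time, state)
for t in reversed(range(T)):
    # Q[s, a] = E[r | s, a] + E[V(s') | s, a]
    Q = (np.einsum('qrsa,r->sa', p, rewards)
         + np.einsum('qrsa,q->sa', p, V))
    policy[t] = Q.argmax(axis=1)
    V = Q.max(axis=1)
```

Each step here plays the role of one local tensor update against an exactly evaluated environment; `V` carries the contracted future part of the network.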

Figure 2: Trajectories for the stochastic excursion problem generated by policies optimised using DMRG: In each plot, trajectories generated by taking a random action at each time-step are shown (red or blue lines), in addition to a “greedy” trajectory generated by taking the most probable action (thick black line) at each time-step, according to a policy optimised using DMRG. Solid blue lines satisfy the excursion conditions, while dashed red lines do not. In the first row, policies are optimised using a forwards-in-time DMRG sweep, while in the second a backwards-in-time sweep is used. For each step in the DMRG algorithm, the constrained optimisation is achieved by parametrising the policy using real numbers, which are scaled and normalised appropriately, and applying a gradient-free optimisation method (Powell’s method). The number of tensors that have been optimised to produce each policy is indicated by the vertical dashed lines and arrows for each column. The expected return of the policy is also shown in each panel.

5 Example: Conditioned Dynamics for Rare Trajectory Generation

5.1 Conditioned Dynamics and FMDPs

Often when studying stochastic dynamics, the particular trajectories of interest occur only rarely (Bolhuis et al., 2002; Garrahan, 2018). In many cases, analytical study of the trajectory statistics is intractable, and we must resort to numerical sampling. An important problem is finding an alternative dynamics which generates rare trajectories efficiently (Borkar et al., 2003; Majumdar & Orland, 2015; Chetrite & Touchette, 2015; Jack, 2019). This can be phrased as an FMDP with an appropriate reward structure, such that the trajectories generated by its optimal policies satisfy the desired conditions on the original dynamics.

An elementary example of rare events is “stochastic excursions” (Majumdar & Orland, 2015), where a simple random walker is conditioned to stay above a certain line and at a given time must return to this line. For a symmetric random walker, the probability of an excursion scales as $T^{-3/2}$. In terms of an FMDP, the conditioned dynamics can be encoded as the solution of an optimisation problem where movement below the zero line, or failure to reach it at $t = T$, is given a negative reward. Such a solution will be highly degenerate, and there are many different possible choices of reward structure that will lead to the same space of solutions.

For an episode with fixed termination time, $T$, the positions of the random walker, $x_t$, are encoded in the states, such that $s_t = x_t$. The action space consists of two actions, which correspond to a down/up move of the walker, respectively. We assume the initial state distribution is fixed at zero, $\mu(s) = \mathbb{I}[s = 0]$, where $\mathbb{I}$ is the indicator function taking on the value one when the argument is true and zero otherwise. For illustration, we consider a dynamics where stochasticity is included only through the policy, though we emphasise that the TN method we apply for FMDPs has no such restriction. Indeed, one can consider not only general Markovian stochastic processes within the same framework, but also a variety of different rare events - such as meanders or bridges - by choosing the reward structure appropriately.

For the generation of excursions, the reward structure we choose is as follows: when $t < T$, a negative reward is given if the walker is below the zero line, and zero reward otherwise; at $t = T$, a negative reward is given if the walker has failed to return to the zero line, and zero reward otherwise. The dynamics of the problem are deterministic given the action, with a down/up action moving the walker one step down/up,

$$x_{t+1} = x_t \pm 1.$$
By assumption, the system dynamics takes on the value one when the above relations are satisfied, and zero otherwise. Under a uniformly random policy - where up/down moves are equally likely regardless of the state - the dynamics will be that of an unconditioned random walker. Under an optimal policy, the walker will satisfy the excursion condition.
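The rarity of excursions under the unconditioned dynamics can be checked by direct enumeration. This illustrative sketch (not code from the paper) counts the walks of length T = 10 satisfying the excursion condition; these are Dyck paths, counted by the Catalan numbers, giving a probability that decays like T^(-3/2).

```python
import itertools
from math import comb

T = 10                       # T = 2n steps with n = 5
count = 0
for steps in itertools.product((-1, 1), repeat=T):
    pos, ok = 0, True
    for s in steps:
        pos += s
        if pos < 0:          # excursion condition: never below the zero line
            ok = False
            break
    if ok and pos == 0:      # and back at the line at time T
        count += 1

n = T // 2
catalan = comb(2 * n, n) // (n + 1)   # Catalan number C_n
prob = count / 2**T                   # excursion probability, ~ T^(-3/2)
```

For T = 10 this gives 42 excursions out of 1024 equally likely walks, matching C_5 = 42, so already at this modest horizon fewer than one walk in twenty qualifies.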

5.2 DMRG for Excursions

To illustrate the DMRG procedure for policy optimisation, we consider a simple DMRG algorithm to solve the problem of generating excursions. Initially, a policy is generated by randomly selecting real numbers. These are scaled and normalised appropriately to form a valid policy, before proceeding with the policy optimisation.

We perform a single sweep either forwards-in-time or backwards-in-time, thus optimising the policy tensors in order of increasing or decreasing time, respectively. The total optimisation consists of $T$ iterations. At a given iteration, the environment is evaluated by contracting all tensors in the network excluding the policy tensor currently being optimised. A constrained optimisation is then performed to minimise the negative expected return, shifted appropriately so that the minimum is zero. We apply a simple approach of gradient-free optimisation (Powell’s method), and satisfy the constraints directly by applying the necessary scaling and normalising to an input set of real values. While this method is slow, it is sufficient for this illustration, and can easily be replaced with more sophisticated methods for solving the necessary constrained optimisation.

Using the policies found by these optimisations, trajectories can be generated from the FMDP which obey the excursion condition, see Fig. 2. Trajectories sampled with four different policies are shown. In the first (second) row, the policy is determined via a forwards-in-time (backwards-in-time) sweep. The columns show how the optimisation progresses: in the first column, half the policy tensors have been optimised, while in the second a full sweep has been completed. As can be seen, in the full-sweep backwards case (lower right) all trajectories generated by the policy are excursions, as expected (solid blue lines); in contrast, a single full sweep forwards can fail to find a policy that generates excursions (top right), though randomly restarting the policy optimisation allows one to post-select a deterministic policy that generates an excursion.

Additionally, the backwards sweep discovers a policy that is stochastic (non-deterministic), while the policy found during the forwards sweep is deterministic in every case: a side effect of the degeneracy of the optimal policies. Since in the backwards sweep the policy in the future of each step of the iteration is already optimal, there are multiple actions that produce the same expected return, between which the optimisation algorithm has no reason to choose. In contrast, due to the random initialisation, each step of the forwards sweep sees a distinct expected return for each available action, and the optimisation algorithm commits to whichever currently appears best according to the not-yet-optimised future policy.

6 Conclusions

We have introduced a tensor network formulation for Markov decision processes, along with a policy optimisation algorithm based on those usually applied to matrix product states. TNs and the associated optimisation algorithms are extremely flexible, and can certainly be adapted to accommodate more sophisticated cases beyond the class of MDPs considered here. Possible generalisations include: (i) termination times that can vary between episodes; (ii) continuing MDPs, using uniform MPS/transfer matrix methods (Vanderstraeten et al., 2019); (iii) non-Markovian system dynamics or reward structures that are non-local in time, optimising for a non-Markovian policy; (iv) integration of TNs into standard RL algorithms, such as model-based approaches for unknown system dynamics, see e.g. (Wang et al., 2019), or using a TNR as a natural model of the value function. As such, the formalism we present here lays the ground for pursuing a number of avenues of research combining tensor networks with reinforcement learning more broadly.


We thank N. Pancotti for useful discussion and reading of the manuscript. The research leading to these results has received funding from the Leverhulme Trust [grant number RPG-2018-181] and University of Nottingham grant no. FiF1/3. We are grateful for access to the University of Nottingham’s Augusta HPC service. We acknowledge the use of Athena at HPC Midlands+.