Kernel Taylor-Based Value Function Approximation for Continuous-State Markov Decision Processes

06/03/2020 · Junhong Xu, et al. · Indiana University · Expedia Group

We propose a principled kernel-based policy iteration algorithm to solve continuous-state Markov Decision Processes (MDPs). In contrast to most decision-theoretic planning frameworks, which assume fully known state transition models, we design a method that eliminates such a strong assumption, which is oftentimes extremely difficult to engineer in reality. To achieve this, we first apply the second-order Taylor expansion of the value function. The Bellman optimality equation is then approximated by a partial differential equation, which only relies on the first and second moments of the transition model. By combining this with the kernel representation of the value function, we then design an efficient policy iteration algorithm whose policy evaluation step can be represented as a linear system of equations characterized by a finite set of supporting states. We have validated the proposed method through extensive simulations in both simplified and realistic planning scenarios, and the experiments show that our proposed approach leads to substantially better performance than several baseline methods.


I Introduction

Decision-making for an autonomous mobile robot moving in unstructured environments typically requires the robot to account for uncertain action (motion) outcomes and, at the same time, maximize the long-term return. The Markov Decision Process (MDP) is an extremely useful framework for formulating such decision-theoretic planning problems [7]. Since the robot moves in a continuous space, directly employing the standard form of MDP requires a discretized representation of the robot state and action. For example, in practice the discretized robot states are associated with a spatial tessellation [37], and grid-map-like representations have been widely used for robot planning problems where each grid cell is regarded as a discrete state; similarly, actions are simplified to transitions between traversable grid cells, usually the small number of cells in the immediate vicinity.

However, the discretization can be problematic. Specifically, if the discretization is low in resolution (i.e., a small number of large grid cells), the decision policy becomes a very rough approximation computed on a simplified (discretized) version of the original problem; on the other hand, if the discretization is high in resolution, the result might be approximated well, but this induces prohibitive computational cost and prevents real-time decision-making. Finally, the characteristics of the state space might be complex, and a lattice-like tessellation may be inappropriate and likely to result in sub-optimal solutions. See Fig. 1 for an illustration.

Fig. 1: In unstructured environments, the robot needs to make motion decisions in the navigable space with spatially varying terrestrial characteristics (hills, ridges, valleys, slopes). This is different from the simplified and structured environments where there are only two types of representations, i.e., either obstacle-occupied or obstacle-free. Evenly tessellating the complex terrain to create a discretized state space cannot effectively characterize the underlying value function used for computing the MDP solution. (Picture credit: NASA)

Another critical issue lies in the MDP’s transition model, which describes the probabilistic transitions from a state to others. However, obtaining an accurate stochastic transition model for robot motion is unrealistic even without considering spatiotemporal variability. This is another important factor that significantly limits the applicability of MDPs in many real-world problems. Reinforcement learning [19] does not rely on a transition model specification, but requires cumbersome training trials to learn the value function and the policy, which can be viewed as another strong requirement in many robotic missions. Thus, it is desirable to relax the demanding assumption of a known transition model. Fortunately, the characteristics of the transition probability distribution (e.g., mean, variance, or quantiles) for most robotic planning and control systems can be obtained from historical data or offline tests [37]. If we assume only such “partial knowledge” of the transition model, namely its mean and variance, we must re-design the modeling and solving mechanisms, which is what we present in this work.

To address the above problems, we propose a kernel Taylor-based approximation approach. Our contributions can be summarized as follows:

  • First, to relax the requirement of fully known transition functions, we apply the second-order Taylor expansion to the value function [8, 9]. The Bellman-type policy evaluation equation and the Bellman optimality equation are then approximated by a partial differential equation (PDE) which only relies on the first and second moments of the transition probability distribution.

  • Second, to improve the generalizability of the value function, we use kernel functions, which can represent a large family of functions, for better value approximation. This approximation can conveniently characterize the underlying value function with a finite set of discrete supporting states.

  • Finally, we develop an efficient policy iteration algorithm by integrating the kernel value function representation and the Taylor-based approximation to the Bellman optimality equation. The policy evaluation step can be represented as a linear system of equations in the values at the finite supporting states, and the only information needed is the first and second moments of the transition function. This alleviates the need for exhaustive search in a continuous/large state space and the need for carefully modeling/engineering the transition probability.

II Related Work

Our work primarily focuses on value function approximation in large/continuous state spaces using only minimal prior knowledge of the transition function. A major challenge in solving continuous-state MDPs is the search over a large-scale (usually infinite) state space. Popular methods in robotics avoid the intractability of computing the value function over the continuous space by tessellating the continuous state space into grids [30, 29, 12, 1, 3]. However, this naive approach does not scale well and may give inferior performance as the problem size increases, a phenomenon known as the curse of dimensionality [4]. A more advanced discretization technique that alleviates this problem is adaptive discretization [15, 23, 38, 27, 22].

Alternative methods tackle this challenge by representing and approximating the value function with a set of basis functions or some parametric functions [5, 35, 28]. The parameters can be optimized by minimizing the Bellman residual [2]. However, these methods are not applicable to complicated problems because defining features that approximate the value function linearly is non-trivial. This weakness may be resolved through kernel methods [17]. Because the weights in the linear combination of basis functions can be represented in terms of their inner products (through the so-called dual form of least squares [33]), the weights can be written in terms of kernel functions and value functions at supporting states. Once the value functions at supporting states are obtained, the approximation to the value function at any state is also determined. The approach of approximating the value function by kernel functions is referred to as the direct kernel-based method in this paper. In addition, it is generally hard to find a suitable nonlinear function (such as a neural network) to approximate the value function [13]. There is a vast literature on kernelized value function approximations in reinforcement learning [10, 20, 36, 39], but few studies in robotic planning problems have leveraged this approach. A recent application of [10] on marine robots can be found in [24].

Unfortunately, all these schemes rely on either fully known MDP transitions or the selection of basis functions, both of which are difficult to obtain in practice. Therefore, the challenge becomes how to design a principled methodology that does not explicitly rely on basis functions and does not require full knowledge of the MDP transitions, which is what we address in this work.

III Preliminary Material

III-A Markov Decision Processes

We formulate the robot decision-theoretic planning problem as an infinite-horizon discounted Markov Decision Process (MDP) with continuous states and finite actions. An infinite-horizon discounted MDP is defined by a 5-tuple $\mathcal{M} = (\mathcal{S}, \mathcal{A}, T, R, \gamma)$, where $\mathcal{S} \subseteq \mathbb{R}^d$ is the $d$-dimensional continuous state space and $\mathcal{A}$ is a finite set of actions. $\mathcal{S}$ can be thought of as the robot workspace in our study. A robot transits from a state $s$ to the next state $s'$ by taking an action $a$ in a stochastic environment and obtains a reward $R(s, a)$. Such a transition is governed by a conditional probability distribution $T(s' \mid s, a)$, which is also termed the transition model (or transition function); the reward $R(s, a)$, a mapping from a state-action pair to a scalar value, specifies the short-term objective that the robot receives by taking action $a$ at state $s$. The final element in $\mathcal{M}$ is a discount factor $\gamma \in (0, 1)$, which will be used in the expression of the value function.

We consider the class of deterministic policies $\pi: \mathcal{S} \rightarrow \mathcal{A}$, which defines a mapping from a state to an action. The expected discounted cumulative reward for any policy $\pi$ starting at any state $s \in \mathcal{S}$ is expressed as

$V^{\pi}(s) = \mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^{t} R(s_t, \pi(s_t)) \,\middle|\, s_0 = s\right]. \qquad (1)$

We can rewrite the above equation recursively as follows

$V^{\pi}(s) = (\mathcal{T}^{\pi} V^{\pi})(s) = R(s, \pi(s)) + \gamma \int_{\mathcal{S}} T(s' \mid s, \pi(s))\, V^{\pi}(s')\, \mathrm{d}s', \qquad (2)$

where $\mathcal{T}^{\pi}$ is called the Bellman operator. The function $V^{\pi}$ in Eq. (2) is usually called the state value function of the policy $\pi$. Solving an MDP is to find the optimal policy $\pi^{*}$ with the optimal value function $V^{*}$ which satisfies the Bellman optimality equation

$V^{*}(s) = \max_{a \in \mathcal{A}} \left\{ R(s, a) + \gamma \int_{\mathcal{S}} T(s' \mid s, a)\, V^{*}(s')\, \mathrm{d}s' \right\}. \qquad (3)$

III-B Approximate Policy Iteration via Value Function Representation

To solve an MDP, value iteration and policy iteration are the most prevalent approaches. It has been shown that value iteration and policy iteration achieve similar state-of-the-art performance in terms of solution quality and running time [6, 35]. Our work is built upon policy iteration, and here we provide a summary of the value function approximation process used in policy iteration [31, 14].

Policy iteration requires an initialization of the policy (which can be random), based on which a system of linear equations can be established, where each equation is exactly the value function equation (Eq. (2)) at one state. When the states in the MDP are finite, the solution to this linear system yields incumbent values for all states [32]. This step is called policy evaluation. The second step is to improve the current policy by greedily improving local actions based on the incumbent values obtained. This step is called policy improvement. By iterating these two steps, we can find the optimal policy and a unique solution to the value function that satisfies Eq. (1) for every state.

If, however, the states are continuous or the number of states is infinite, it is difficult to evaluate the value function at every state. One must resort to approximate solutions. Suppose that the value function can be represented by a weighted linear combination of known functions where only the weights are to be determined; then a natural approach is to leverage the Bellman-type equation, i.e., Eq. (2), to compute the weights. Specifically, given an arbitrary policy, the representation of the value function can be evaluated at a finite number of states, leading to a linear system of equations whose solution yields the weights [21]. The obtained representation of the value function can then be used to improve the current policy. The remaining procedure is similar to the standard policy iteration method. The final value function representation serves as an approximate optimal value function for the whole continuous state space, and the corresponding policy can be obtained accordingly.

Formally, let the value function approximation under policy $\pi$ be

$\tilde{V}^{\pi}(s) = \sum_{j=1}^{m} w_j \phi_j(s), \qquad (4)$

where $w_j \in \mathbb{R}$ are the weights. The set $\{\phi_j(\cdot)\}_{j=1}^{m}$ is known as the basis functions in the literature [31]. A finite number of supporting states $\tilde{\mathcal{S}} = \{s_1, \dots, s_n\} \subset \mathcal{S}$ can be selected, and the weights are obtained by minimizing the squared Bellman error over $\tilde{\mathcal{S}}$, defined by $\sum_{i=1}^{n} \big( \tilde{V}^{\pi}(s_i) - (\mathcal{T}^{\pi} \tilde{V}^{\pi})(s_i) \big)^2$. The solution for the weights may have a closed form in terms of the basis functions, transition probabilities, and rewards [21]. By policy iteration, the final solution for $\tilde{V}^{\pi}$ can be obtained.

Note that Eq. (4) may be generalized to any parametric nonlinear function, such as a neural network, and that the selection of supporting states needs to take into account the characteristics of the underlying value function (in our robotic decision-theoretic planning scenarios, it relates to the landscape geometry of the terrain).
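To make the preceding description concrete, the following is a minimal sketch (not the authors' implementation) of fitting the weights of Eq. (4) by minimizing the squared Bellman error at a set of supporting states. The 1D toy problem, the RBF features, and all constants are illustrative assumptions.

```python
# Minimal sketch of basis-function value fitting by Bellman-residual least squares (Eq. (4)).
# Assumptions (not from the paper): a 1D toy state space, Gaussian RBF features,
# a fixed "move right" policy, and a Monte-Carlo estimate of E[phi(s')].
import numpy as np

gamma = 0.95
centers = np.linspace(0.0, 1.0, 10)          # RBF centers (hypothetical)
width = 0.1

def rbf_features(s):
    """Feature vector phi(s) for a scalar state s."""
    return np.exp(-0.5 * ((s - centers) / width) ** 2)

def reward(s):
    """Toy reward: high near the 'goal' at s = 1."""
    return np.exp(-10.0 * (1.0 - s) ** 2)

def expected_next_features(s, n_samples=200, step=0.05, noise=0.02, rng=None):
    """Monte-Carlo estimate of E[phi(s')] under the fixed policy."""
    rng = rng or np.random.default_rng(0)
    s_next = np.clip(s + step + noise * rng.standard_normal(n_samples), 0.0, 1.0)
    return np.mean([rbf_features(x) for x in s_next], axis=0)

# Supporting states where the Bellman residual is minimized.
support = np.linspace(0.0, 1.0, 25)
Phi = np.stack([rbf_features(s) for s in support])                  # phi(s_i)
Phi_next = np.stack([expected_next_features(s) for s in support])   # E[phi(s')]
r = np.array([reward(s) for s in support])

# residual_i = (phi(s_i) - gamma * E[phi(s')])^T w - r_i; least-squares solution for w.
A = Phi - gamma * Phi_next
w, *_ = np.linalg.lstsq(A, r, rcond=None)

V = lambda s: rbf_features(s) @ w            # approximate value at any state
print(V(0.2), V(0.9))
```

The same least-squares structure reappears in Section IV, where the explicit basis functions are replaced by kernel evaluations at the supporting states.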

IV Kernel Taylor-Based Approximate Policy Iteration

Our objective is to design a principled kernel-based policy iteration approach by leveraging kernel methods to solve the continuous-state MDP. In contrast to most decision-theoretic planning frameworks, which assume fully known MDP transition probabilities [7, 32], we propose a method that eliminates such a strong premise, which oftentimes is extremely difficult to engineer in practice. To overcome this challenge, we first apply the second-order Taylor expansion to the kernelized value function (Section IV-A). The Bellman optimality equation is then approximated by a partial differential equation which only relies on the first and second moments of the transition probabilities (Section IV-B). Combined with the kernel representation of the value function, this approach efficiently tackles the continuous or large-scale state space search with minimal prerequisite knowledge of the state transition model (Sections IV-C and IV-D). Finally, the experiments show that our proposed approach is powerful and flexible, and reveals clear advantages over several baseline approaches (Section V).

IV-A Taylored Approximate Policy Evaluation Equation

To design an efficient approach for solving MDP based decision-theoretic planning problems, we essentially have two elements to deal with: the value function and the Bellman optimality equation. If we directly apply kernel methods to approximate the value function (referred to as the direct kernel-based method), we can avoid explicitly specifying basis functions as mentioned in Section III-B. But it still requires fully known MDP transition probabilities, and it needs the exact Bellman optimality equations to develop the policy iteration method.

In contrast to the direct kernel-based approach, we consider an approximation to the Bellman-type equation using only the first and second moments of the transition function. This yields a desirable property: a complete and accurate transition model is not necessary; instead, only important statistics such as the mean and variance (or covariance) are sufficient. To better describe the basic idea, we keep our discussion on a surface-like terrain and use that surface as the decision-theoretic planning workspace, i.e., $\mathcal{S} \subset \mathbb{R}^2$, though our approach applies to state spaces of any dimension.

Formally, suppose that the value function for any given policy $\pi$ has continuous first- and second-order derivatives. We subtract $V^{\pi}(s)$ from both sides of Eq. (2) and then take the Taylor expansion of the value function around $s$ up to second order [8]:

$0 \approx R(s, \pi(s)) + (\gamma - 1) V^{\pi}(s) + \gamma\, \mu_{\pi}(s)^{\top} \nabla V^{\pi}(s) + \frac{\gamma}{2} \big\langle \Sigma_{\pi}(s), \nabla^{2} V^{\pi}(s) \big\rangle, \qquad (5)$

where $\mu_{\pi}(s)$ and $\Sigma_{\pi}(s)$ are the first moment (i.e., mean, a 2-dimensional vector) and the second moment (i.e., covariance, a 2-by-2 matrix) of the transition function, respectively, with the following form

$\mu_{\pi}(s) = \int_{\mathcal{S}} (s' - s)\, T(s' \mid s, \pi(s))\, \mathrm{d}s', \qquad (6a)$
$\Sigma_{\pi}(s) = \int_{\mathcal{S}} (s' - s)(s' - s)^{\top}\, T(s' \mid s, \pi(s))\, \mathrm{d}s', \qquad (6b)$

for $s \in \mathcal{S}$; the operator $\nabla^{2}$ denotes the Hessian, and the notation $\langle \cdot, \cdot \rangle$ in the last equation indicates an inner product. To be clear, we present the expression for the latter: $\langle \Sigma_{\pi}(s), \nabla^{2} V^{\pi}(s) \rangle = \sum_{i,j} [\Sigma_{\pi}(s)]_{ij}\, \partial^{2} V^{\pi}(s) / \partial s_i \partial s_j$.

Since Eq. (5) approximates the calculation of Eq. (2) in the policy evaluation stage, the solution to Eq. (5) thus provides the value function approximation under the current policy $\pi$. Eq. (5) also implies that we only need the mean and covariance, instead of the original transition model, to approximate the value function.
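Because Eq. (5) uses only $\mu_{\pi}(s)$ and $\Sigma_{\pi}(s)$, these moments can be estimated directly from logged transitions (e.g., the historical data or offline tests mentioned in Section I) rather than from a full transition density. Below is a minimal sketch under that assumption; the sample data and function names are illustrative.

```python
# Minimal sketch: estimating the first and second moments in Eq. (6) from sampled
# transitions, instead of specifying the full transition model T(s'|s,a).
import numpy as np

def transition_moments(s, next_states):
    """Empirical mu(s) and Sigma(s) of the displacement s' - s (Eqs. (6a)-(6b)).

    s           : (d,) current state
    next_states : (m, d) next states observed after executing the policy's action at s
                  (e.g., from logs or offline tests)
    """
    d = np.asarray(next_states) - np.asarray(s)   # displacements s' - s
    mu = d.mean(axis=0)                            # first moment (Eq. (6a))
    sigma = (d.T @ d) / len(d)                     # second moment E[(s'-s)(s'-s)^T] (Eq. (6b))
    return mu, sigma

# Hypothetical usage: noisy waypoint-following samples around a commanded offset.
rng = np.random.default_rng(0)
s = np.array([1.0, 2.0])
commanded = s + np.array([0.5, 0.0])
samples = commanded + 0.1 * rng.standard_normal((500, 2))
mu, sigma = transition_moments(s, samples)
print(mu)      # close to the commanded displacement [0.5, 0.0]
print(sigma)   # noise covariance plus the outer product of the mean displacement
```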

IV-B Approximate Bellman Optimality Equation via PDE

We need to analyze the necessary boundary conditions for Eq. (5), which is a partial differential equation (PDE), and develop an approximation methodology for the Bellman optimality equation, which is the foundation for an efficient MDP solution. To achieve this, first, the directional derivative of the value function with respect to the unit vector normal at the boundary states must be zero. (Note that the value function should not have values on obstacles or outside the state space $\mathcal{S}$.) Second, in order to ensure a unique solution, we constrain the value function at the goal state to a fixed value.

Let us denote the boundary of the entire continuous planning region/workspace by $\partial\mathcal{S}$ and the goal state by $s_g$. Suppose the value function at $s_g$ is $V_g$. Section IV-A implies that the Bellman optimality equation, Eq. (3), can be approximated by the following PDE:

$0 = \max_{a \in \mathcal{A}} \left\{ R(s, a) + (\gamma - 1) V^{*}(s) + \gamma\, \mu_{a}(s)^{\top} \nabla V^{*}(s) + \frac{\gamma}{2} \big\langle \Sigma_{a}(s), \nabla^{2} V^{*}(s) \big\rangle \right\}, \qquad (7)$

with boundary conditions

$\nabla V^{*}(s) \cdot \mathbf{n}(s) = 0, \quad s \in \partial\mathcal{S}, \qquad (8a)$
$V^{*}(s_g) = V_g, \qquad (8b)$

where $\mathbf{n}(s)$ denotes the unit vector normal to $\partial\mathcal{S}$ pointing outward. Condition (8a) is a type of homogeneous Neumann condition, and condition (8b) can be thought of as a Dirichlet condition in the literature [11]. This elegantly approximates the classic Bellman optimality equation by a convenient PDE representation. In the next section, we leverage the kernelized representation of the value function to avoid the difficulties of directly solving the PDE. The kernel method transforms the problem into a linear system of equations with unknown values at the finite supporting states.

IV-C Kernel Taylor-Based Approximate Policy Evaluation

With the aforementioned formulations, another critical research question is whether the value function can be represented by special functions that are able to approximate large function families in a convenient way. We tackle this question by using a kernel method to represent the value function, which Eq. (7) allows us to combine with the Taylored policy evaluation for value function approximation.

Specifically, let $k(\cdot, \cdot): \mathcal{S} \times \mathcal{S} \rightarrow \mathbb{R}$ be a generic kernel function [17]. For a set of selected finite supporting states $\tilde{\mathcal{S}} = \{s_1, \dots, s_n\}$, let $K \in \mathbb{R}^{n \times n}$ be the Gram matrix with $K_{ij} = k(s_i, s_j)$, and let $\mathbf{k}(s) = [k(s, s_1), \dots, k(s, s_n)]^{\top}$. Given a policy $\pi$, assume the value functions at $\tilde{\mathcal{S}}$ are $\mathbf{v}^{\pi} = [V^{\pi}(s_1), \dots, V^{\pi}(s_n)]^{\top}$. Then, for any state $s \in \mathcal{S}$, the kernelized value function has the following form

$\tilde{V}^{\pi}(s) = \mathbf{k}(s)^{\top} (K + \lambda I)^{-1}\, \mathbf{v}^{\pi}, \qquad (9)$

where $\lambda \geq 0$ is a regularization factor. When $\lambda = 0$, it links to the kernel ordinary least squares estimation of Eq. (4); when $\lambda > 0$, it refers to the ridge-type regularized kernel least squares estimation [33]. Furthermore, Eq. (9) implies that as long as the values $\mathbf{v}^{\pi}$ are available, the value function at any state can be immediately obtained. Now our objective is to obtain $\mathbf{v}^{\pi}$ through Eq. (5) and the boundary conditions in Eq. (8).

Footnote: It is worth mentioning that our use of kernel methods is to approximate a function. This usage should be distinguished from that in the machine learning literature, where kernel methods are used to learn patterns from data.
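As an illustration of Eq. (9), the sketch below evaluates the kernelized value function with a Gaussian kernel, precomputing $(K + \lambda I)^{-1}$ once and reusing it for every query state. The supporting states, their values, and all hyperparameters are made-up placeholders rather than the paper's settings.

```python
# Minimal sketch of the kernelized value function in Eq. (9) with a Gaussian kernel.
import numpy as np

def gaussian_kernel(a, b, lengthscale=1.0, sigma=1.0):
    """k(a, b) = sigma * exp(-||a - b||^2 / (2 * lengthscale^2))."""
    diff = np.asarray(a) - np.asarray(b)
    return sigma * np.exp(-0.5 * np.dot(diff, diff) / lengthscale ** 2)

def kernel_value_function(support, v_support, lam=1e-3, lengthscale=1.0):
    """Return V(s) = k(s)^T (K + lam I)^{-1} v, given values at the supporting states."""
    n = len(support)
    K = np.array([[gaussian_kernel(si, sj, lengthscale) for sj in support] for si in support])
    alpha = np.linalg.solve(K + lam * np.eye(n), v_support)   # (K + lam I)^{-1} v
    def V(s):
        k_s = np.array([gaussian_kernel(s, sj, lengthscale) for sj in support])
        return k_s @ alpha
    return V

# Hypothetical usage: 2D supporting states on a small lattice with made-up values.
xs, ys = np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5))
support = np.stack([xs.ravel(), ys.ravel()], axis=1)
v_support = -np.linalg.norm(support - np.array([1.0, 1.0]), axis=1)  # toy values
V = kernel_value_function(support, v_support, lam=1e-3, lengthscale=0.3)
print(V(np.array([0.5, 0.5])))
```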

Plugging the kernelized value function representation into Eq. (5), we end up with the following linear system:

$G_{\pi} (K + \lambda I)^{-1}\, \mathbf{v}^{\pi} = -\mathbf{r}_{\pi}, \qquad (10)$

where $I$ is the $n \times n$ identity matrix, $\mathbf{r}_{\pi}$ is a vector with elements $[\mathbf{r}_{\pi}]_i = R(s_i, \pi(s_i))$, and $G_{\pi}$ is an $n \times n$ matrix whose elements are:

$[G_{\pi}]_{ij} = (\gamma - 1)\, k(s_i, s_j) + \gamma\, \mu_{\pi}(s_i)^{\top} \nabla_{s_i} k(s_i, s_j) + \frac{\gamma}{2} \big\langle \Sigma_{\pi}(s_i), \nabla^{2}_{s_i} k(s_i, s_j) \big\rangle. \qquad (11)$

Note that $\nabla_{s_i}$ indicates derivatives with respect to the first argument $s_i$, i.e., $\nabla_{s_i} k(s_i, s_j) = \partial k(s, s_j) / \partial s \big|_{s = s_i}$. In the Appendix, we provide a concrete example using Gaussian kernels, which lead to closed-form expressions and are widely used in practice.

The solution to the system in Eq. (10) yields the values $\mathbf{v}^{\pi}$. These values further allow us to obtain the value function in Eq. (9) for any state under the current policy $\pi$. This completes the modeling of our kernel Taylor-based approximate policy evaluation framework.
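The following sketch illustrates one possible implementation of this policy evaluation step: it assembles $G_{\pi}$ from Eq. (11) using a Gaussian kernel (whose derivatives are given in the Appendix) and solves Eq. (10) for the supporting-state values. The reward and moment functions are hypothetical placeholders, and the boundary conditions of Eq. (8) are omitted for brevity; this is an illustrative approximation, not the authors' code.

```python
# Minimal sketch of kernel Taylor-based policy evaluation (Eqs. (10)-(11)).
import numpy as np

GAMMA, LAM, ELL = 0.95, 1e-3, 0.3          # illustrative hyperparameters

def k(a, b):                               # Gaussian kernel
    d = a - b
    return np.exp(-0.5 * d @ d / ELL ** 2)

def grad_k(a, b):                          # gradient w.r.t. the first argument (Eq. (13))
    return -(a - b) / ELL ** 2 * k(a, b)

def hess_k(a, b):                          # Hessian w.r.t. the first argument (Eq. (14))
    d = (a - b)[:, None]
    return (d @ d.T / ELL ** 4 - np.eye(len(a)) / ELL ** 2) * k(a, b)

# Placeholder problem data (hypothetical): supporting states, rewards, and moments.
support = np.array([[x, y] for x in np.linspace(0, 1, 5) for y in np.linspace(0, 1, 5)])
n = len(support)
def reward(s, a): return -np.linalg.norm(s - np.array([1.0, 1.0]))
def mu(s, a):     return 0.1 * a                       # mean displacement for action a
def sigma(s, a):  return 0.01 * np.eye(2) + np.outer(mu(s, a), mu(s, a))

policy = [np.array([1.0, 0.0])] * n                    # a fixed action at every supporting state

K = np.array([[k(si, sj) for sj in support] for si in support])
Kinv = np.linalg.inv(K + LAM * np.eye(n))              # computed once (Alg. 1, line 4)

def evaluate_policy(policy):
    """Solve G_pi (K + lam I)^{-1} v = -r_pi for the supporting-state values v."""
    G = np.zeros((n, n))
    r = np.zeros(n)
    for i, (si, ai) in enumerate(zip(support, policy)):
        r[i] = reward(si, ai)
        m, S = mu(si, ai), sigma(si, ai)
        for j, sj in enumerate(support):
            G[i, j] = ((GAMMA - 1.0) * k(si, sj)
                       + GAMMA * m @ grad_k(si, sj)
                       + 0.5 * GAMMA * np.sum(S * hess_k(si, sj)))   # <S, Hessian> inner product
    return np.linalg.solve(G @ Kinv, -r)

v = evaluate_policy(policy)
print(v[:5])
```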

IV-D Kernel Taylor-Based Approximate Policy Iteration

1: Input: a set of supporting states $\tilde{\mathcal{S}} = \{s_1, \dots, s_n\}$; the kernel function $k(\cdot, \cdot)$; the regularization factor $\lambda$; the MDP $\mathcal{M}$.
2: Output: the kernelized value function Eq. (9) for every state and the corresponding policy.
3: Initialize the actions at the supporting states.
4: Compute the matrix $(K + \lambda I)$ and its inverse.
5: repeat
6:     // Policy evaluation step
7:     Solve for $\mathbf{v}^{\pi}$ according to Eq. (10) in Section IV-C.
8:     // Policy improvement step
9:     for each supporting state $s_i \in \tilde{\mathcal{S}}$ do
10:         Update the action at the supporting state $s_i$ based on Eq. (12).
11:     end for
12: until the actions at the supporting states do not change.
Algorithm 1 Kernel Taylor-Based Approximate Policy Iteration
(a) Kernel Taylor-based PI
(b) Direct kernel-based PI
(c) NN
(d) Grid-based PI
Fig. 2: Evaluation with a traditional simplified scenario where obstacles and goal are depicted as red and green blocks, respectively. We compare the final value function and the final policy obtained from (a) kernel Taylor-based PI, (b) direct kernel-based PI, (c) NN, and (d) grid-based PI. A brighter background color represents a higher state value. The policies are the arrows (vector fields), and each arrow points to some next waypoint. Orange dots denote supporting states or the grid centers (in the case of grid-based PI).

With the above new model, our next step is to design an implementable algorithm that can solve the continuous-state MDP efficiently. We extend the classic policy iteration mechanism which iterates between the policy evaluation step and the policy improvement step until convergence to find the optimal policy as well as its corresponding optimal value function.

Because our kernelized value function representation depends on the finite supporting states instead of the whole state space, we only need to improve the policy on $\tilde{\mathcal{S}}$. Therefore, the policy improvement step in the $k$-th iteration produces a new policy according to

$\pi_{k+1}(s_i) = \arg\max_{a \in \mathcal{A}} \left\{ R(s_i, a) + \gamma\, \mu_{a}(s_i)^{\top} \nabla \tilde{V}^{\pi_k}(s_i) + \frac{\gamma}{2} \big\langle \Sigma_{a}(s_i), \nabla^{2} \tilde{V}^{\pi_k}(s_i) \big\rangle \right\}, \qquad (12)$

where $s_i \in \tilde{\mathcal{S}}$, and $\pi_k$ and $\pi_{k+1}$ are the current policy and the updated policy, respectively. Note that $\mu_{a}(s_i)$ and $\Sigma_{a}(s_i)$ depend on the action $a$ through the transition function in Eq. (6). Compared with the approximated Bellman optimality equation (Eq. (7)), Eq. (12) drops the term $(\gamma - 1)\tilde{V}^{\pi_k}(s_i)$. This is because $\tilde{V}^{\pi_k}(s_i)$ does not explicitly depend on the action $a$. The value function of the updated policy satisfies $V^{\pi_{k+1}} \geq V^{\pi_k}$ [6]. If the equality holds, the iteration has converged.
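For concreteness, the sketch below carries out the improvement step of Eq. (12): the gradient and Hessian of the kernelized value function at each supporting state follow from differentiating Eq. (9), and the action is then chosen greedily. The action set, moment functions, and the stand-in values $\mathbf{v}$ are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch of the policy improvement step in Eq. (12) for the kernelized value function.
import numpy as np

GAMMA, LAM, ELL = 0.95, 1e-3, 0.3
ACTIONS = [np.array([np.cos(t), np.sin(t)]) for t in np.linspace(0, 2 * np.pi, 8, endpoint=False)]

def k(a, b):
    d = a - b
    return np.exp(-0.5 * d @ d / ELL ** 2)

def grad_k(a, b):                                  # Eq. (13)
    return -(a - b) / ELL ** 2 * k(a, b)

def hess_k(a, b):                                  # Eq. (14)
    d = (a - b)[:, None]
    return (d @ d.T / ELL ** 4 - np.eye(2) / ELL ** 2) * k(a, b)

# Placeholder transition moments and reward (hypothetical; action = commanded displacement).
def mu(s, a):     return 0.1 * a
def sigma(s, a):  return 0.01 * np.eye(2) + np.outer(mu(s, a), mu(s, a))
def reward(s, a): return -np.linalg.norm(s - np.array([1.0, 1.0]))

support = np.array([[x, y] for x in np.linspace(0, 1, 5) for y in np.linspace(0, 1, 5)])
n = len(support)
K = np.array([[k(si, sj) for sj in support] for si in support])
Kinv = np.linalg.inv(K + LAM * np.eye(n))
v = -np.linalg.norm(support - np.array([1.0, 1.0]), axis=1)   # stand-in for Eq. (10)'s solution
alpha = Kinv @ v                                              # weights of Eq. (9)

def improve_policy():
    """Greedy action update at every supporting state (Eq. (12))."""
    new_policy = []
    for si in support:
        # Gradient and Hessian of V(s) = sum_j alpha_j k(s, s_j), evaluated at s_i.
        gV = sum(a_j * grad_k(si, sj) for a_j, sj in zip(alpha, support))
        HV = sum(a_j * hess_k(si, sj) for a_j, sj in zip(alpha, support))
        scores = [reward(si, a) + GAMMA * mu(si, a) @ gV + 0.5 * GAMMA * np.sum(sigma(si, a) * HV)
                  for a in ACTIONS]
        new_policy.append(ACTIONS[int(np.argmax(scores))])
    return new_policy

print(improve_policy()[:3])
```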

The final kernel Taylor-based policy iteration algorithm is pseudo-coded in Alg. 1. It first initializes the actions at the finite supporting states and then iterates between policy evaluation and policy improvement. Since the supporting states as well as the kernel parameters do not change, the regularized kernel matrix and its inverse are computed only once at the beginning of the algorithm. This greatly reduces the computational burden caused by matrix inversion. Furthermore, due to the finiteness of the supporting states, the entire algorithm views the policy as a table and only updates the actions at the supporting states using Eq. (12). The algorithm stops and returns the supporting state values when the actions are stabilized. We can then use these state-values to get the final kernel value function that approximates the optimal solution. The corresponding policy for every continuous state can then be easily obtained from this kernel value function [34].

Intuitively, the proposed framework is flexible and powerful for the following reason: rather than tackling the difficulties of solving the PDE that approximates the original Bellman-type equation, we use the kernel representation to convert the problem into a system of linear equations in the values at the finite discrete supporting states. From this viewpoint, our proposed method nicely balances the trade-off between searching over finite states and searching over infinite states. In other words, our approach leverages kernel methods and the Bellman optimality conditions under practical assumptions.

V Experiments

To validate our method, we consider two mobile robot decision-theoretic planning tasks. The first one is a goal-oriented planning problem in a simple environment with obstacle-occupied and obstacle-free spaces. This will help us to evaluate basic algorithmic properties. In the second task, we demonstrate that our method can be applied to a more realistic as well as more challenging navigation scenario on Mars surface [25], where the robot needs to take the elevation of the terrain surface into account (i.e., “obstacles” are implicit). In both tasks, we assume that the estimates of the first two moments of the transition probability are obtained from the past field experiments. To be concrete, in both experiments we use the Gaussian kernel given in Appendix -A.

Fig. 3: The performance matrices obtained by the hyperparameter search using four configurations (a)-(d) with different numbers of evenly-spaced supporting states. Rows and columns represent different Gaussian kernel lengthscale and regularization parameters, respectively. The numbers in each heatmap represent the average return of the final policy obtained using the corresponding hyperparameter combination. The colorbar is shown on the right side of each table.

V-A Plane Navigation

V-A1 Setup

Our first experiment is a 2D plane navigation problem, where the obstacles and a goal area are represented in a bounded planar environment, as shown in Fig. 2. The state space for this task is a 2-dimensional Euclidean space, $\mathcal{S} \subset \mathbb{R}^2$. The action space is a finite set of points. Each point is an action generated on a circle centered at the current state with a fixed radius; the number of actions and the action radius are fixed in this experiment. An action point can be viewed as the “carrot-dangling” waypoint for the robot to follow, which serves as the input to the low-level motion controller. For the reward function, we set the rewards of arriving at the goal and the obstacle states to a positive value and a negative penalty, respectively. Since the reward now depends on the next state, we use Monte Carlo sampling to estimate its expectation. The discount factor for the reward is a fixed constant. We set the obstacle areas and the goal as absorbing states, i.e., the robot cannot transit to any other state once it is in these states. To satisfy the boundary condition mentioned in Section IV-B, we allow the robot to keep receiving the reward at the goal state, but it cannot receive any reward if its current state is within an obstacle. Thus, the goal state value is the discounted sum of the goal reward, i.e., $V_g = R_{\text{goal}} / (1 - \gamma)$.
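A small sketch of how this setup can be encoded, with waypoint actions on a circle and the first two moments of a Gaussian waypoint-following transition; the number of actions, radius, and noise level below are illustrative placeholders rather than the values used in the experiment.

```python
# Minimal sketch of the plane-navigation setup: waypoint actions on a circle and the
# first two moments of a Gaussian waypoint-following transition. All constants are
# illustrative placeholders, not the values used in the paper.
import numpy as np

N_ACTIONS, RADIUS, STD = 8, 0.5, 0.1

# Each action is a waypoint offset on a circle of radius RADIUS around the current state.
angles = np.linspace(0.0, 2.0 * np.pi, N_ACTIONS, endpoint=False)
ACTIONS = [RADIUS * np.array([np.cos(t), np.sin(t)]) for t in angles]

def transition_moments(s, action):
    """Mean displacement and second moment for a Gaussian transition centered at the waypoint."""
    mean_disp = action                                    # commanded displacement s' - s
    cov = STD ** 2 * np.eye(2)                            # controller noise on both axes
    second_moment = cov + np.outer(mean_disp, mean_disp)  # E[(s'-s)(s'-s)^T]
    return mean_disp, second_moment

mu, sig = transition_moments(np.array([0.0, 0.0]), ACTIONS[0])
print(mu, sig)
```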

V-A2 Performance measure

Since the ultimate goal of planning is to find the optimal policy, our performance measure is based on the quality of the policy. A policy is better if it achieves a higher expected cumulative reward starting from every state. Because it is impossible to evaluate over an infinite number of states, we numerically evaluate the quality of a policy using the average return criterion [18]. In detail, we first uniformly sample a set of states to ensure a thorough performance evaluation. Then, for each sampled state, we execute the policy to generate multiple trajectories, where each trajectory ends when it arrives at a terminal state (goal or obstacle) or reaches an allowed maximal number of steps. This procedure gives us the expected performance of the policy at any sampled state by averaging the discounted sum of rewards over all the trajectories starting from it. We then calculate the average return criterion by averaging the performance over the sampled states. A higher value of the average return implies that, on average, the policy gives better performance over the entire state space.
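The average-return criterion can be computed with a straightforward Monte Carlo routine such as the sketch below, where `policy`, `step`, `reward`, and `is_terminal` are hypothetical environment hooks and all constants are placeholders.

```python
# Minimal sketch of the average-return criterion: roll out the policy from uniformly
# sampled start states and average the discounted returns.
import numpy as np

def average_return(policy, step, reward, is_terminal, bounds, gamma=0.95,
                   n_states=100, n_rollouts=10, max_steps=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0]), np.asarray(bounds[1])
    starts = rng.uniform(lo, hi, size=(n_states, len(lo)))   # uniform start states
    totals = []
    for s0 in starts:
        returns = []
        for _ in range(n_rollouts):
            s, ret, disc = np.array(s0), 0.0, 1.0
            for _ in range(max_steps):
                a = policy(s)
                ret += disc * reward(s, a)
                disc *= gamma
                s = step(s, a, rng)                          # sample next state
                if is_terminal(s):
                    break
            returns.append(ret)
        totals.append(np.mean(returns))                      # expected performance at s0
    return float(np.mean(totals))                            # average over sampled states

# Hypothetical usage on a toy 2D task.
goal = np.array([1.0, 1.0])
avg = average_return(policy=lambda s: 0.2 * (goal - s),
                     step=lambda s, a, rng: s + a + 0.02 * rng.standard_normal(2),
                     reward=lambda s, a: 1.0 if np.linalg.norm(s - goal) < 0.1 else 0.0,
                     is_terminal=lambda s: np.linalg.norm(s - goal) < 0.1,
                     bounds=([0.0, 0.0], [1.0, 1.0]))
print(avg)
```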

V-A3 Results

Fig. 4: The comparison of the average return of the policies computed from the four algorithms. The x-axis is the number of supporting states/grids used in computing the policy. The y-axis shows the average return.
Fig. 5: Supporting state distribution and the policy for evenly-spaced selection and importance sampling-based selection. The 3D surface shows the Mars digital terrain model obtained from HiRISE. Supporting states and the policies are shown in black dots and vector fields, respectively. The colored lines represent the sampled trajectories, which initiate from four different starting positions. (a)(b) The evenly-spaced supporting states and the corresponding policy and trajectories; (c)(d) The supporting states generated by importance sampling and the corresponding policy and trajectories.

To evaluate the effect of supporting states, we place evenly-spaced supporting states (in a lattice pattern) at different spacing resolutions. Besides the number of states, the kernel lengthscale (see Appendix -A) and the regularization parameter are the other two hyperparameters governing the performance of our algorithm. We present the grid-based hyperparameter search results using four different configurations of supporting states, shown in Fig. 3. The lengthscale and regularization parameters are searched over the same sets of candidate values. By entry-wise comparison among the four matrices in Fig. 3, we can observe that increasing the number of states leads to improved performance in general. However, we find that the best-performing policy is given by the configuration with 100 supporting states (Fig. 3), which is not the scenario with the finest spacing resolution. This indicates that a larger number of states can also result in a deteriorating solution, and that the performance of the algorithm depends on how the supporting states are placed (distributed) rather than only on the number (resolution) of the state discretization. Furthermore, we can gain some insight into how to select the hyperparameters based on the number of supporting states. Low-performing entries (highlighted in red) occur more often on the left side of the performance matrix as the number of supporting states increases. This implies that with more supporting states, the algorithm requires stronger regularization (i.e., a greater $\lambda$, described in Section IV-C). On the other hand, high-performing policies (indicated in yellow) appear more toward the bottom of the performance matrix when a greater number of supporting states is present, which means that a smaller lengthscale is generally required given a larger quantity of supporting states.

We further compare our kernelized value function representation against three other variants of value function approximation. Specifically, the first one is the direct kernel-based approximation method using a Gaussian kernel. This method is similar to the one in [20], but with a fully known transition function. The second one uses a neural network (NN) as the value function approximator. We set up the NN configuration similarly to a recent work [16]. In detail, we use a shallow two-layer network with 100 hidden units in each layer. Its parameters are optimized by minimizing the squared Bellman error via gradient descent. Since there is no closed-form solution for the expected next-state value when using an NN as the function approximator, we use Monte Carlo sampling to estimate the expected value at the next states. The third method is the grid-based approximation method. It first transforms the continuous MDP into a discrete version, where each grid cell is regarded as a discrete state. Then, it uses vanilla policy iteration to solve the discretized MDP.

The comparison among above methods aims at investigating two important questions:

  1. How does the kernelized value function representation compare to other representations (NN and grid-based) in terms of the final policy performance?

  2. Compared to our method, the direct kernel-based method not only requires a fully known transition function, but also restricts the transition to be a Gaussian distribution. Can our method, using only the mean and variance, obtain performance similar to the direct kernel-based method?

To answer these questions, we choose the transition function to be a Gaussian distribution whose mean is the selected next waypoint, and we set its standard deviation to the same fixed value on both axes during the experiment. The transition probability models the accuracy of the low-level motion controller: a more accurate controller leads to smaller uncertainty. To perform fair comparisons, we use the same supporting states and apply the hyperparameter search to all the methods.

The results for the four methods are shown in Fig. 4 in terms of the average return. The first question is answered by the fact that the kernel-based methods (kernel Taylor-based and direct kernel-based PI) consistently outperform the other two methods. Moreover, our method performs as well as the direct kernel-based method, which, however, requires full distributional information of the transition as a prerequisite. This indicates that our method can be applied to broader applications that do not have full knowledge of the transition functions. In contrast to the grid-based PI, the kernel-based algorithms and the NN can achieve moderate performance even with a small number of supporting states. This indicates that a continuous representation of the value function is crucial when supporting states are sparse. However, increasing the number of states does not improve the performance of the NN.

In Fig. 6, we compare the computational time and the number of iterations to convergence. The computational time of our method is less than that of the grid-based method, as revealed in Fig. 6(a). We notice that there is a negligible computational time difference between our method and the direct method. As a parametric method, the NN has the least computational time, which increases only linearly, but it does not converge, as indicated by Fig. 6(b).

The function values and the final policies are visualized in Fig. 2. All the methods except for the NN obtain reasonable approximations to the optimal value function. Compared to our method, the values generated by the grid-based method are discrete “color blocks,” and thus the obtained policy is non-smooth. The direct kernel-based method obtains a slightly more “aggressive” (dangerous) policy in contrast to our method.

Fig. 6: Computational time comparisons of the four algorithms with changing number of states. (a) The computational time per iteration. (b) Number of iterations to convergence.

V-B Martian Terrain Navigation

In this experiment, we consider an autonomous navigation task on the surface of Mars with a rover. We obtain the Mars terrain data from the High Resolution Imaging Science Experiment (HiRISE) [26]. Since there is no explicitly represented “obstacle,” the robot receives a reward only when it reaches the goal. If the rover attempts to move on a steep slope, it may be damaged and trapped within the same state with a probability proportional to the slope angle. Otherwise, its next state is distributed around the desired waypoint specified by the current action. This indicates that the underlying transition function should be a mixture of these two situations. It is reasonable to assume that the means of the two cases are given by the current state and the next waypoint, respectively. We can similarly obtain estimates of the variances. The mean and the variance of the transition function can then be computed using the laws of total expectation and total variance, respectively.
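The mixture moments described above can be combined as follows. This is a hedged sketch: the trap probability and the per-case means and covariances are illustrative stand-ins, not the terrain model used in the experiment.

```python
# Minimal sketch: combining the "trapped" and "waypoint-following" cases into a single
# transition mean and covariance via the laws of total expectation and total variance.
import numpy as np

def mixture_moments(p_trap, mu_trap, cov_trap, mu_move, cov_move):
    """Mean and covariance of a two-component mixture transition model."""
    mu = p_trap * mu_trap + (1.0 - p_trap) * mu_move                 # law of total expectation
    cov = (p_trap * cov_trap + (1.0 - p_trap) * cov_move             # expected within-case covariance
           + p_trap * np.outer(mu_trap - mu, mu_trap - mu)           # between-case spread
           + (1.0 - p_trap) * np.outer(mu_move - mu, mu_move - mu))  # (law of total variance)
    return mu, cov

# Hypothetical example: a slope angle of 20 degrees mapped to a trap probability.
slope_deg = 20.0
p_trap = min(1.0, slope_deg / 45.0)              # proportional to slope angle (illustrative)
mu_trap = np.zeros(2)                            # trapped: stays at the current state
cov_trap = 1e-4 * np.eye(2)
mu_move = np.array([0.5, 0.0])                   # otherwise: displacement toward the waypoint
cov_move = 0.01 * np.eye(2)
print(mixture_moments(p_trap, mu_trap, cov_trap, mu_move, cov_move))
```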

Due to the complex and unstructured terrestrial features, evenly-spaced supporting states may fail to characterize the underlying value function well. Also, to keep the computational time reasonable while maintaining good performance, we leverage an importance sampling technique to sample supporting states that concentrate around the dangerous regions with steep slopes. This is done by first drawing a large number of states uniformly covering the whole workspace. For each sampled state, we then assign a weight proportional to its slope angle. Finally, we resample the supporting states based on these weights. To guarantee that the goal state has a value, we always place one supporting state at the center of the goal area.
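A minimal sketch of this slope-weighted supporting-state selection is given below; the slope function is an artificial stand-in for the HiRISE-derived terrain model, and the candidate and support counts are arbitrary.

```python
# Minimal sketch of importance sampling-based supporting state selection: draw a large
# uniform candidate set, weight each candidate by its slope angle, resample, and always
# include the goal state.
import numpy as np

def sample_supporting_states(slope_fn, bounds, goal, n_support=100, n_candidates=5000, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0]), np.asarray(bounds[1])
    candidates = rng.uniform(lo, hi, size=(n_candidates, len(lo)))   # uniform cover of workspace
    weights = np.array([slope_fn(c) for c in candidates])
    weights = np.maximum(weights, 1e-6)                              # keep every region reachable
    probs = weights / weights.sum()
    idx = rng.choice(n_candidates, size=n_support - 1, replace=False, p=probs)
    return np.vstack([candidates[idx], np.asarray(goal)])            # goal always included

# Hypothetical terrain: a steep ridge along x = 5.
slope_fn = lambda s: np.exp(-0.5 * (s[0] - 5.0) ** 2)
support = sample_supporting_states(slope_fn, ([0, 0], [10, 10]), goal=[9.0, 9.0])
print(support.shape)
```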

Fig. 5(a) and Fig. 5(c) compare the two methods for supporting state selection. The supporting states given by the importance sampling-based method are dense around the slopes. These supporting states better characterize the potentially high-cost and dangerous areas than the evenly-spaced selection scheme. We selected four starting locations from which the rover needs to plan paths to arrive at a goal location. For each starting location, we conducted multiple trials following the produced optimal policies. The trajectories generated with the importance-sampled states in Fig. 5(d) attempt to approach the goal (green star) with minimum distances and, at the same time, avoid high-elevation terrain. In contrast, the trajectories obtained using the evenly-spaced states in Fig. 5(b) approach the goal in a more aggressive manner, which can be risky in terms of safety. This indicates that a good selection of supporting states can better capture the state value function and thus produce finer solutions. This superior performance is also reflected in Fig. 7(b). The policy obtained with uniformly sampled states shows performance similar to the one generated with the evenly-spaced states, and both yield a smaller average return than the importance-sampled case. A top-down view of the policy is shown in Fig. 7(a), where the background colormap denotes the elevation of the terrain.

Fig. 7: (a) The top-down view of the Mars terrain surface as well as the policy generated by our method with the importance sampling selection. The colormap indicates the height (in meters) of the terrain. (b) The comparison of average return among three supporting state selection methods using the same number of states. Red, green, and blue bars indicate the performance of importance sampling selection, evenly-spaced selection, and uniform distribution sampling selection, respectively.

VI Conclusion

This paper presents an efficient policy iteration algorithm to solve continuous-state Markov Decision Processes by integrating the kernel value function representation and the Taylor-based approximation to the Bellman optimality equation. Our algorithm alleviates the need for exhaustive search in a continuous state space and the need for precisely modeling the state transition function. We have thoroughly evaluated the proposed method through simulations in both simplified and realistic planning scenarios. The experiments comparing against other baseline approaches show that the proposed framework is powerful and flexible, and the performance statistics reveal the superior efficiency and accuracy of our algorithm.

-A Gaussian Kernel for Kernel Taylor-Based Approximate Policy Evaluation

Because Gaussian kernels are widely used in studies of kernel methods, this section presents the necessary derivations to aid the application of Gaussian kernels to our proposed kernel Taylor-based approximate method.

Gaussian kernel functions on states $s$ and $s'$ have the form $k(s, s') = \sigma \exp\big(-\frac{1}{2}(s - s')^{\top} \Sigma_k^{-1} (s - s')\big)$, where $\sigma > 0$ is a constant and $\Sigma_k$ is a covariance matrix (isotropic in our experiments, $\Sigma_k = \ell^{2} I$). Note that $\ell$ is referred to as the lengthscale parameter in our paper. Due to limited space, we only provide the formulas below for the first and second derivatives of the Gaussian kernel function. These formulas are necessary when Gaussian kernels are employed (for example, in Eq. (11)). In fact, we have

$\nabla_{s} k(s, s') = -\Sigma_k^{-1} (s - s')\, k(s, s'), \qquad (13)$

and

$\nabla^{2}_{s} k(s, s') = \big( \Sigma_k^{-1} (s - s')(s - s')^{\top} \Sigma_k^{-1} - \Sigma_k^{-1} \big)\, k(s, s'), \quad \text{so that} \quad \big\langle \Sigma, \nabla^{2}_{s} k(s, s') \big\rangle = \mathrm{tr}\big( \Sigma \big( \Sigma_k^{-1} (s - s')(s - s')^{\top} \Sigma_k^{-1} - \Sigma_k^{-1} \big) \big)\, k(s, s'), \qquad (14)$

where $\mathrm{tr}(\cdot)$ denotes the trace of a matrix.
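For reference, the sketch below implements Eqs. (13) and (14) for an isotropic Gaussian kernel and verifies them against finite differences; the scale and lengthscale values are arbitrary.

```python
# Minimal sketch of the Gaussian kernel and its first/second derivatives (Eqs. (13)-(14)),
# verified against finite differences. sigma and the lengthscale are arbitrary choices.
import numpy as np

SIGMA, ELL = 1.0, 0.7
S_INV = np.eye(2) / ELL ** 2                    # Sigma_k^{-1} for an isotropic kernel

def k(s, sp):
    d = s - sp
    return SIGMA * np.exp(-0.5 * d @ S_INV @ d)

def grad_k(s, sp):                              # Eq. (13): -Sigma_k^{-1} (s - s') k(s, s')
    return -S_INV @ (s - sp) * k(s, sp)

def hess_k(s, sp):                              # Eq. (14): (Sigma_k^{-1} d d^T Sigma_k^{-1} - Sigma_k^{-1}) k
    d = (s - sp)[:, None]
    return (S_INV @ d @ d.T @ S_INV - S_INV) * k(s, sp)

# Finite-difference check at a random point.
rng = np.random.default_rng(0)
s, sp, eps = rng.standard_normal(2), rng.standard_normal(2), 1e-5
num_grad = np.array([(k(s + eps * e, sp) - k(s - eps * e, sp)) / (2 * eps) for e in np.eye(2)])
num_hess = np.array([(grad_k(s + eps * e, sp) - grad_k(s - eps * e, sp)) / (2 * eps) for e in np.eye(2)])
print(np.allclose(num_grad, grad_k(s, sp), atol=1e-6), np.allclose(num_hess, hess_k(s, sp), atol=1e-6))
```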

References

  • [1] W. H. Al-Sabban, L. F. Gonzalez, and R. N. Smith (2013) Wind-energy based path planning for unmanned aerial vehicles using markov decision processes. In 2013 IEEE International Conference on Robotics and Automation (ICRA), pp. 784–789. Cited by: §II.
  • [2] A. Antos, C. Szepesvári, and R. Munos (2008) Learning near-optimal policies with bellman-residual minimization based fitted policy iteration and a single sample path. Machine Learning 71 (1), pp. 89–129. Cited by: §II.
  • [3] S. S. Baek, H. Kwon, J. A. Yoder, and D. Pack (2013) Optimal path planning of a target-following fixed-wing uav using sequential decision processes. In 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2955–2962. Cited by: §II.
  • [4] R. E. Bellman (2015) Adaptive control processes: a guided tour. Vol. 2045, Princeton university press. Cited by: §II.
  • [5] D. P. Bertsekas and J. N. Tsitsiklis (1996) Neuro-dynamic programming. Vol. 5, Athena Scientific Belmont, MA. Cited by: §II.
  • [6] D. P. Bertsekas (1995) Dynamic programming and optimal control. Vol. 1, Athena scientific Belmont, MA. Cited by: §III-B, §IV-D.
  • [7] C. Boutilier, T. Dean, and S. Hanks (1999) Decision-theoretic planning: structural assumptions and computational leverage. Journal of Artificial Intelligence Research 11, pp. 1–94. Cited by: §I, §IV.
  • [8] A. Braverman, I. Gurvich, and J. Huang (2018) On the taylor expansion of value functions. arXiv preprint arXiv:1804.05011. Cited by: 1st item, §IV-A.
  • [9] J. Buchli, F. Farshidian, A. Winkler, T. Sandy, and M. Giftthaler (2017) Hamilton-Jacobi-Bellman Equation. In Optimal and Learning Control for Autonomous Robots, Cited by: 1st item.
  • [10] Y. Engel, S. Mannor, and R. Meir (2003) Bayes meets bellman: the gaussian process approach to temporal difference learning. In Proceedings of the 20th International Conference on Machine Learning (ICML-03), pp. 154–161. Cited by: §II.
  • [11] L. C. Evans (2010) Partial Differential Equations: Second Edition (Graduate Series in Mathematics). American Mathematical Society. Cited by: §IV-B.
  • [12] Y. Fu, X. Yu, and Y. Zhang (2015) Sense and collision avoidance of unmanned aerial vehicles using markov decision process and flatness approach. In 2015 IEEE International Conference on Information and Automation, pp. 714–719. Cited by: §II.
  • [13] X. Glorot and Y. Bengio (2010) Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pp. 249–256. Cited by: §II.
  • [14] G. J. Gordon (1999) Approximate solutions to markov decision processes. Technical report Carnegie-Mellon University School of Computer Science. Cited by: §III-B.
  • [15] A. A. Gorodetsky, S. Karaman, and Y. M. Marzouk (2015) Efficient high-dimensional stochastic optimal motion control using tensor-train decomposition. In Robotics: Science and Systems, Cited by: §II.
  • [16] N. Heess, G. Wayne, D. Silver, T. Lillicrap, T. Erez, and Y. Tassa (2015) Learning continuous control policies by stochastic value gradients. In Advances in Neural Information Processing Systems, pp. 2944–2952. Cited by: §V-A3.
  • [17] T. Hofmann, B. Schölkopf, and A. J. Smola (2008) Kernel methods in machine learning. The annals of statistics, pp. 1171–1220. Cited by: §II, §IV-C.
  • [18] R. Islam, P. Henderson, M. Gomrokchi, and D. Precup (2017) Reproducibility of benchmarked deep reinforcement learning tasks for continuous control. arXiv preprint arXiv:1708.04133. Cited by: §V-A2.
  • [19] J. Kober, J. A. Bagnell, and J. Peters (2013) Reinforcement learning in robotics: a survey. The International Journal of Robotics Research 32 (11), pp. 1238–1274. Cited by: §I.
  • [20] M. Kuss and C. E. Rasmussen (2004) Gaussian processes in reinforcement learning. In Advances in Neural Information Processing Systems, pp. 751–758. Cited by: §II, §V-A3.
  • [21] M. G. Lagoudakis and R. Parr (2003) Least-squares policy iteration. Journal of machine learning research 4 (Dec), pp. 1107–1149. Cited by: §III-B, §III-B.
  • [22] S. M. LaValle (2006) Planning algorithms. Cambridge university press. Cited by: §II.
  • [23] L. Liu and G. S. Sukhatme (2018) A solution to time-varying markov decision processes. IEEE Robotics and Automation Letters 3 (3), pp. 1631–1638. Cited by: §II.
  • [24] J. Martin, J. Wang, and B. Englot (2018) Sparse gaussian process temporal difference learning for marine robot navigation. arXiv preprint arXiv:1810.01217. Cited by: §II.
  • [25] M. Maurette (2003) Mars rover autonomous navigation. Autonomous Robots 14 (2-3), pp. 199–208. Cited by: §V.
  • [26] A. S. McEwen, E. M. Eliason, J. W. Bergstrom, N. T. Bridges, C. J. Hansen, W. A. Delamere, J. A. Grant, V. C. Gulick, K. E. Herkenhoff, L. Keszthelyi, et al. (2007) Mars reconnaissance orbiter’s high resolution imaging science experiment (hirise). Journal of Geophysical Research: Planets 112 (E5). Cited by: §V-B.
  • [27] R. Munos and A. Moore (2002) Variable resolution discretization in optimal control. Machine learning 49 (2-3), pp. 291–323. Cited by: §II.
  • [28] R. Munos and C. Szepesvári (2008) Finite-time bounds for fitted value iteration. Journal of Machine Learning Research 9 (May), pp. 815–857. Cited by: §II.
  • [29] M. Otte, W. Silva, and E. Frew (2016) Any-time path-planning: time-varying wind field+ moving obstacles. In 2016 IEEE International Conference on Robotics and Automation (ICRA), pp. 2575–2582. Cited by: §II.
  • [30] A. A. Pereira, J. Binney, G. A. Hollinger, and G. S. Sukhatme (2013) Risk-aware path planning for autonomous underwater vehicles using predictive ocean models. Journal of Field Robotics 30 (5), pp. 741–762. Cited by: §II.
  • [31] W. B. Powell (2016) Perspectives of approximate dynamic programming. Annals of Operations Research 241 (1-2), pp. 319–356. Cited by: §III-B, §III-B.
  • [32] M. L. Puterman (2014) Markov decision processes: discrete stochastic dynamic programming. John Wiley & Sons. Cited by: §III-B, §IV.
  • [33] J. Shawe-Taylor, N. Cristianini, et al. (2004) Kernel methods for pattern analysis. Cambridge university press. Cited by: §II, §IV-C.
  • [34] J. Si, A. G. Barto, W. B. Powell, and D. Wunsch (2004) Handbook of learning and approximate dynamic programming. Vol. 2, John Wiley & Sons. Cited by: §IV-D.
  • [35] R. S. Sutton and A. G. Barto (2018) Reinforcement learning: an introduction. MIT press. Cited by: §II, §III-B.
  • [36] G. Taylor and R. Parr (2009) Kernelized value function approximation for reinforcement learning. In Proceedings of the 26th Annual International Conference on Machine Learning, pp. 1017–1024. Cited by: §II.
  • [37] S. Thrun, W. Burgard, and D. Fox (2000) Probabilistic robotics. Vol. 1, MIT press Cambridge. Cited by: §I, §I.
  • [38] J. Xu, K. Yin, and L. Liu (2019-06) Reachable space characterization of markov decision processes with time variability. In Proceedings of Robotics: Science and Systems, Freiburg im Breisgau, Germany. External Links: Document Cited by: §II.
  • [39] X. Xu, D. Hu, and X. Lu (2007) Kernel-based least squares policy iteration for reinforcement learning. IEEE Transactions on Neural Networks 18 (4), pp. 973–992. Cited by: §II.