High-dimensional Motion Planning using Latent Variable Models via Approximate Inference

11/22/2017 · by Jung-Su Ha et al. · KAIST (Department of Mathematical Sciences)

In this work, we present an efficient framework to generate a motion trajectory of a robot that has a high degree of freedom (e.g., a humanoid robot). The high dimensionality of the robot configuration space often makes it difficult to utilize the widely-used motion planning algorithms, because the volume of the decision space increases exponentially with the number of dimensions. To handle the complications arising from the large decision space, and to solve the corresponding motion planning problem efficiently, two key concepts were adopted. First, the Gaussian process latent variable model (GP-LVM) was utilized for a low-dimensional representation of the original configuration space. Second, an approximate inference algorithm was used, exploiting the duality between control and estimation, to explore the decision space and to compute a high-quality motion trajectory of the robot. Utilizing the GP-LVM and the duality between control and estimation, we construct a fully probabilistic generative model with which we transform a high-dimensional motion planning problem into a tractable inference problem. Finally, we compute the optimal motion trajectory via an approximate inference algorithm based on a variant of the particle filter. Numerical experiments are presented to demonstrate the applicability and validity of the proposed approach. The resulting motions can be viewed in the supplemental video ( https://youtu.be/DsZ1afN6pTY ).


I Introduction

For robotic motion planning, a trajectory of robot states is designed through a complex configuration space, starting from an initial state, to perform a given task. The planning problem is formulated as an optimal control (OC) problem, where the robot dynamics ensures a feasible motion trajectory and the cost function encodes the task. The optimal trajectory is reconstructed from the optimal cost-to-go function (also called the value function), which is obtained by solving the Bellman equation through dynamic programming procedures. Though these approaches guarantee the global optimality of the solution, they do not scale with the dimensionality of the decision space because of the curse of dimensionality: the size of the decision space increases exponentially with the number of dimensions. Sampling-based algorithms such as RRT* [1] or FMT* [2], and their variants, are also widely used in the motion planning literature. These algorithms simultaneously construct and extend an approximate state space (represented as a graph or a tree structure), and then update the approximate solution. They are applicable to medium-sized problems, but it is almost impossible to extend the graph into an extremely high-dimensional space and obtain a solution without limiting the sampling space. Another option is trajectory optimization based on iterative local optimization [3, 4]. These approaches approximate the problem around the current solution using a first- or second-order Taylor expansion and update the solution iteratively. By its nature, the solution from these approaches is only locally optimal. For high-dimensional problems, a good initial guess is often required for the optimization, which is then painstakingly handcrafted by a user/designer, because the problem may have many poor local optima. In addition, the local approximation over all dimensions can cause the solution procedure to diverge, which forces the use of heuristic techniques.

Given these limitations, one research question that arises is whether the decision space we need to search really must be this large. The dimension of the decision space that actually needs to be considered may be small if decisions have some prior structure [5]. Consider the motion planning process of our own body: once we have decided which type of motion we are going to make, the sequence of poses and the configurations we really need to consider may be limited. For example, we would not consider walking on our hands when the mission is to get from a start position to a goal position. That is, it is reasonable to assume that, although a robot has very high degrees of freedom, the valuable configurations occupy a very small portion of the original space and form some sort of (low-dimensional) manifolds. This is the spirit behind the latent variable model. There have been many studies aimed at finding embeddings of manifolds in high-dimensional space within a low-dimensional latent space. Specifically, the Gaussian process latent variable model (GP-LVM) assumes that there exists a probabilistic mapping from a low-dimensional latent space to the high-dimensional observation space, and then finds the mapping and the corresponding latent space [6]. Moreover, the Gaussian process dynamical model (GPDM) extends the GP-LVM such that dynamics in the observation space can be represented by dynamics in the latent space [7]. Human motions in a 50+ dimensional configuration space have been successfully embedded into 2–3 dimensional latent spaces by the GPDM, and the model has been utilized to generate new motions of a human character [7, 8, 9] and for 3-D tracking of people [10].

Recently, with the successes of deep learning, there have been many attempts to construct latent variable models using deep neural networks; one of the most popular algorithms is the variational autoencoder (VAE) [11]. The idea has also been applied to robotics: [12] exploited VAEs to embed dynamic movement primitives into the latent space; [13] used conditional VAEs to learn a non-uniform sampling methodology for a sampling-based motion planning algorithm; and [14] constructed a latent dynamical system from videos and utilized the learned model to compute an optimal control policy in a model-predictive control scheme. In [15, 16], we presented a variational inference method to train latent models parameterized by neural networks and utilized the learned model for planning in a way similar to this work.

One theoretical concept this work extensively takes advantage of is the duality between optimal control and estimation [17, 18, 19, 20, 21, 22]. The idea is that, if we consider an artificial binary observation whose emission probability is given by the exponential of a negative cost, an OC problem can be reformulated as an equivalent inference problem. In this case, the objective is to find the trajectory or control policy that maximizes the likelihood of the observations along the trajectory. To address the transformed inference problem, several approximate inference techniques have been utilized. If transitions between time steps are approximated as Gaussian [4] under the current control policy, the resulting algorithm becomes (locally) equivalent to the iterative linear quadratic Gaussian method [3]. In path integral control approaches, particles propagated by the current approximate control policy [19], or by a control policy induced by a higher-level path planner [23], are used to approximate the resulting distribution of the inference. A user-designed probability model can also be utilized as a prior to solve the transformed inference problem [21]. In particular, in [22], the planning problem is converted into an inference problem on a factor graph of a continuous motion trajectory represented by a Gaussian process, and an incremental inference method, the Bayes tree, is utilized to perform replanning procedures efficiently. Finding a valuable proposal distribution is as essential for an efficient inference method as finding a valuable search space is for the planning problem.

I-A Overview and Contribution

Fig. 1: (a) The GPDM compresses high-dimensional sequential data, $Y$, into a low-dimensional dynamical system over $X$, where $\bar\alpha$ and $\bar\beta$ are hyperparameters of the model. (b) The probabilistic model for optimal control; the optimal motion plan can be computed by inferring the posterior trajectory, $x_{0:T}$, given artificial observations, $o_{1:T} = 1$, encoding the cost function of the control problem. (c, d) The fully-probabilistic model for optimal motion planning with the GPDM (hyperparameters are omitted for visualization's sake).

This work addresses the motion planning problem of a robot with a high-dimensional configuration space. The objective of the planning problem is to generate a sequence of configurations that achieves a given task and satisfies certain constraints, while being smooth in terms of the robot dynamics. Rather than solving the planning problem in the original configuration space, we construct a latent variable model with dynamics of much lower dimension and then solve the problem utilizing it. The latent model with dynamics can be learned from experts' demonstration data or from the robot's own experiences; this work particularly considers the case where demonstration data are available. The learned latent model shown in Fig. 1(a) represents a probabilistic mapping from the latent space to the configuration space, and the stochastic dynamics in the latent space. A detailed description of latent variable models is given in Section II.

The model is combined with the probabilistic model for optimal control (shown in Fig. 1(b)) that is built using the duality between optimal control and Bayesian estimation. The intuition behind the duality is that the likelihood of a trajectory for an optimally controlled system is equivalent to its posterior probability when a cost-related artificial observation is observed. (More details are given in Section III-A.) The combined fully-probabilistic model is shown in Figs. 1(c) and 1(d). With this, we can convert the original motion planning problem into an inference problem, where the objective is to find the maximum a posteriori (MAP) trajectory given the artificial observations. This allows for an efficient solution, because every piece of dynamic information is encoded in the low-dimensional latent space. Descriptions of the combined probabilistic model and the proposed solution method are given in Sections III-B and III-C, respectively. Finally, a multiscale acceleration method that increases the algorithm's efficiency based on path integral control is introduced in Section III-D. The proposed framework has several advantages: (a) it does not require knowledge of the system dynamics, because this is learned from demonstration data; (b) rather than exploring the original configuration space [21], our framework uses the stochasticity of the latent space for exploration, which provides a valuable search space for the motion planning problem; (c) when computing the most likely trajectory, our framework utilizes the Markov property of the resulting probability model, which provides a more efficient solution method than naive optimization approaches, where the trajectory itself along the whole time horizon is considered a (huge) optimization variable [8, 7].

II Constructing a Latent Variable Model

II-A Gaussian Process Latent Variable Model with Dynamics

The GP-LVM [6] is a generative model that represents a probabilistic mapping from a latent space $\mathcal{X} \subset \mathbb{R}^d$ to the observation space $\mathcal{Y} \subset \mathbb{R}^D$. Let $Y = [y_1, \dots, y_N]^\top$ be the observation matrix, where each row of the matrix represents a single high-dimensional observation of the training sequential data, and let $X = [x_1, \dots, x_N]^\top$ be the matrix whose rows represent the corresponding latent coordinates of the observations. In the GP-LVM, the mapping is assumed to be a Gaussian process with a covariance function $k_Y(\cdot,\cdot)$, and the likelihood of the observation data is then given by:

$$p(Y \,|\, X, \bar\beta) = \frac{1}{\sqrt{(2\pi)^{ND} |K_Y|^{D}}} \exp\!\left(-\frac{1}{2}\,\mathrm{tr}\!\left(K_Y^{-1} Y Y^{\top}\right)\right), \quad (1)$$

where $K_Y$ is a kernel matrix whose components are computed as $(K_Y)_{ij} = k_Y(x_i, x_j)$ with kernel hyperparameters $\bar\beta$.
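As a concrete illustration, the following is a minimal sketch (not the authors' implementation) of evaluating the negative log-likelihood corresponding to (1) with an RBF-plus-noise kernel; the function names and default hyperparameter values are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, beta=(1.0, 1.0, 1e-2)):
    """RBF kernel matrix with a white-noise term; beta are hyperparameters."""
    b1, b2, b3 = beta
    sq = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2.0 * X @ X.T
    return b1 * np.exp(-0.5 * b2 * sq) + b3 * np.eye(len(X))

def gplvm_neg_log_lik(X, Y, beta=(1.0, 1.0, 1e-2)):
    """-log p(Y | X, beta) as in Eq. (1):
    D/2 log|K_Y| + 1/2 tr(K_Y^{-1} Y Y^T) + (ND/2) log(2 pi)."""
    N, D = Y.shape
    K = rbf_kernel(X, beta)
    _, logdet = np.linalg.slogdet(K)
    return (0.5 * D * logdet
            + 0.5 * np.trace(np.linalg.solve(K, Y @ Y.T))
            + 0.5 * N * D * np.log(2.0 * np.pi))
```

In practice, this objective is minimized with respect to X (and the hyperparameters) by a gradient-based optimizer.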

If the observations are assumed to be obtained from a dynamical system, it is also possible to construct a probabilistic dynamics model in the latent space [7]. With this model, the likelihood of the latent variable sequence is given by:

$$p(X \,|\, \bar\alpha) = \frac{p(x_1)}{\sqrt{(2\pi)^{(N-1)d} |K_X|^{d}}} \exp\!\left(-\frac{1}{2}\,\mathrm{tr}\!\left(K_X^{-1} X_{2:N} X_{2:N}^{\top}\right)\right), \quad (2)$$

where $X_{2:N} = [x_2, \dots, x_N]^\top$, the kernel matrix $K_X$ is constructed as $(K_X)_{ij} = k_X(x_i, x_j)$ over the latent points $x_1, \dots, x_{N-1}$ with kernel hyperparameters $\bar\alpha$, and $x_1$ is assumed to have a Gaussian prior. Given the priors on the hyperparameters, $p(\bar\alpha)$ and $p(\bar\beta)$, a latent variable model with stochastic dynamics is constructed by maximizing the posterior, which is proportional to the following joint probability:

$$p(Y, X, \bar\alpha, \bar\beta) = p(Y \,|\, X, \bar\beta)\, p(X \,|\, \bar\alpha)\, p(\bar\alpha)\, p(\bar\beta). \quad (3)$$

Note that the GP-LVM is a generative model that ensures a smooth mapping from the latent space to the observation space, i.e., the mapping from $X$ to $Y$ is continuous and differentiable. Therefore, the learning process optimizes the model such that two points that are close together in the latent space are mapped to points that are close in the observation space [24]. Equivalently, two points that are far apart in the observation space cannot have close latent coordinates, a property called dissimilarity preservation; the reverse (similarity preservation) is guaranteed only when the mapping is linear. Preserving similarity is often considered more important in dimensionality reduction, because observations lie sparsely in the high-dimensional space, so similar observations may carry the more valuable information. In order to give the GP-LVM the property of similarity preservation, back-constraints were introduced [24], which enforce a smooth mapping from the observation space to the latent space. For example, a kernel-based regression model can be used as the smooth mapping, $x_n = \sum_j a_j\, k_{BC}(y_n, y_j)$, where closeness in the observation space is measured by the kernel function $k_{BC}(\cdot,\cdot)$. Any differentiable mapping (e.g., a neural network) could be utilized here; the optimization with respect to the latent coordinates then simply turns into an optimization with respect to the mapping parameters via the chain rule. It is also possible to inject prior knowledge into the latent space by restricting the structure of the back-constraints [8].

II-B Latent Space Dynamical System

With the constructed latent variable model, we have the following stochastic dynamics in the latent space:

$$x_{t+1} = \mu_X(x_t) + \sigma_X(x_t)\,\epsilon_t, \quad (4)$$

where the mean $\mu_X(\cdot)$ and the variance $\sigma_X^2(\cdot)$ are given by the posterior of the Gaussian process:

$$\mu_X(x) = X_{2:N}^{\top} K_X^{-1} k_X(x), \qquad \sigma_X^2(x) = k_X(x, x) - k_X(x)^{\top} K_X^{-1} k_X(x), \quad (5)$$

and $\epsilon_t$ is a $d$-dimensional standard Gaussian random noise. Here, $k_X(x)$ is a vector of which the $j$th element represents $k_X(x, x_j)$. Moreover, the corresponding pose is also normally distributed:

$$y_t = \mu_Y(x_t) + \sigma_Y(x_t)\,w_t, \quad (6)$$

where the mean and the variance are given by

$$\mu_Y(x) = Y^{\top} K_Y^{-1} k_Y(x), \qquad \sigma_Y^2(x) = k_Y(x, x) - k_Y(x)^{\top} K_Y^{-1} k_Y(x), \quad (7)$$

and $k_Y(x)$ is defined similarly to the above. Combining (4) and (6), the graphical representation of the learned generative model is shown in Fig. 1(a).
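For concreteness, a minimal sketch of evaluating the GP posterior in (5) (and analogously (7)) at a single query point is given below; the helper names and the precomputed inverse kernel matrix are assumptions.

```python
import numpy as np

def gp_posterior(x, X_in, X_out, K_inv, kern):
    """Posterior mean and variance, Eq. (5): the GP is trained on input
    points X_in (rows) mapped to output points X_out (rows)."""
    k_vec = np.array([kern(x, xi) for xi in X_in])   # j-th entry: k(x, x_j)
    mu = X_out.T @ (K_inv @ k_vec)                   # mu(x)
    var = kern(x, x) - k_vec @ K_inv @ k_vec         # sigma^2(x)
    return mu, max(float(var), 0.0)

# One-step latent propagation as in Eq. (4):
#   mu, var = gp_posterior(x_t, X[:-1], X[1:], K_X_inv, k_X)
#   x_next  = mu + np.sqrt(var) * np.random.randn(mu.shape[0])
```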

III Optimal Motion Planning in High Dimension via Approximate Inference

III-A Optimal Control via Inference

Consider passive and controlled stochastic dynamics,

$$x_{t+1} \sim p(x_{t+1} \,|\, x_t), \quad (8)$$
$$x_{t+1} \sim \pi(x_{t+1} \,|\, x_t), \quad (9)$$

respectively, and the cost rate

$$\ell(x_t, \pi) = q(x_t) + D_{KL}\big(\pi(\cdot \,|\, x_t)\,\|\,p(\cdot \,|\, x_t)\big), \quad (10)$$

where $q(x_t)$ is an instantaneous state cost rate that encodes a given task and the Kullback–Leibler (KL) divergence penalizes the deviation of the controlled dynamics from the passive one. Then, a stochastic OC problem is formulated with the total cost:

$$C(\pi) = \mathbb{E}\left[q_f(x_T) + \sum_{t=0}^{T-1} \ell(x_t, \pi)\right]. \quad (11)$$

The above OC problem can be solved by defining the value function, $v_t(x)$, and solving the Bellman equation on it. In particular, it is known that the solution of the above OC problem satisfies a linear Bellman equation in the exponentiated value function, $z_t(x) = \exp(-v_t(x))$, called the desirability function:

$$z_t(x) = \exp(-q(x))\, G[z_{t+1}](x), \quad (12)$$

where the linear operator $G[z](x) = \mathbb{E}_{x' \sim p(\cdot|x)}[z(x')]$ denotes the average value of the function at the next time step [25]. The optimally controlled dynamics is also obtained as:

$$\pi^*(x_{t+1} \,|\, x_t) = \frac{p(x_{t+1} \,|\, x_t)\, z_{t+1}(x_{t+1})}{G[z_{t+1}](x_t)}. \quad (13)$$

Then, the probability of a trajectory when the state evolves with the optimal transition dynamics from the initial state $x_0$ is given as:

$$p^*(x_{1:T} \,|\, x_0) \propto \exp(-q_f(x_T)) \prod_{t=0}^{T-1} p(x_{t+1} \,|\, x_t)\, \exp(-q(x_t)). \quad (14)$$

For the detailed derivation, we refer readers to [25, 20] and the references therein.

The OC problem above can be transformed into a Bayesian inference problem. Suppose we have an artificial binary observation $o_t$ whose emission probability is given by the exponential of the negative cost, i.e.,

$$p(o_t = 1 \,|\, x_t) = \exp(-q(x_t)). \quad (15)$$

The corresponding graphical model is shown in Fig. 1(b); the transition of the state is governed by $p(x_{t+1} \,|\, x_t)$, and the state emits the observation $o_t$ with (15). Then, the probability of the trajectory given the initial state and the observations is given as:

$$p(x_{1:T} \,|\, x_0, o_{1:T} = 1) \propto \prod_{t=0}^{T-1} p(x_{t+1} \,|\, x_t)\, \exp(-q(x_{t+1})). \quad (16)$$

From (14) and (16), we observe that the OC problem (8)–(11) is closely related to a Bayesian estimation problem that infers the state trajectory when the observations and the initial state are given (see Fig. 1(b)) [18, 20]. The trajectory (or state) distribution induced by the optimal control policy is equivalent to the posterior distribution of the trajectory in the inference problem. Once the inference problem is formulated, any approximate inference method, such as expectation propagation [4], particle belief propagation [21], or importance sampling [19, 23], can be utilized to solve the OC problem efficiently.
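To make the correspondence concrete, here is a minimal sketch of the cost-to-likelihood conversion in (15), used to reweight sampled particles toward the posterior in (16) (an importance-sampling view); the function names are assumptions.

```python
import numpy as np

def emission_log_prob(cost):
    """log p(o_t = 1 | x_t) = -q(x_t), Eq. (15)."""
    return -cost

def reweight(log_w, costs):
    """Multiply particle weights by the artificial-observation likelihood
    exp(-q) and renormalize in log space."""
    log_w = log_w + emission_log_prob(costs)
    log_w = log_w - np.max(log_w)          # numerical stabilization
    w = np.exp(log_w)
    return np.log(w / np.sum(w))
```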

III-B Motion Planning as MAP Trajectory Estimation

The framework we propose in this work combines ideas of the latent variable model and the Bayesian interpretation of an OC problem to solve motion planning problems for high-dimensional systems. The graphical representation of the combined fully-probabilistic model for high-dimensional motion planning problems is shown in Fig. 1(c). When considering locomotion, the motion data used to learn the latent variable model do not include the global position and orientation of an agent, but only record their variations, e.g., (angular) velocities, because the learned model needs to encode the dynamics of the agent's poses; once the sequence of poses is realized, the global position and orientation can be computed by integrating their variations. Let $s_t$ be the vector of the horizontal position, $(p_x, p_y)$, and heading angle, $\theta$, of the robot, and let $\bar{y}_t$ denote the components of the pose $y_t$ corresponding to the forward/lateral velocities and yaw rate of the robot ($s_t$ trivially becomes an empty vector when the problem is not about locomotion). Then, $s_t$ also has stochastic dynamics:

$$s_{t+1} = s_t + R_z(\theta_t)\,\bar{y}_t + \sigma_s\,\epsilon_t, \quad (17)$$

where $R_z(\theta)$ is a rotation matrix about the vertical axis and $\epsilon_t$ follows the standard normal distribution. Together with the dynamics in the latent space, we define the augmented latent variable as $z_t = (x_t, s_t)$; the arrows between the topmost nodes in Fig. 1(c) then represent the temporal structure of the motion planning problem. Moreover, the latent state at each time step induces the robot pose, $y_t$, as in (6). In the planning problem, a cost function, $q(y_t, s_t)$, takes the robot pose as well as its global position and orientation as arguments and encodes the specifications of a given task, e.g., collision with obstacles or distance from a goal region. Therefore, $q$ induces the emission probability of an artificial observation $o_t$ in the proposed probabilistic model in Fig. 1(d) as:

$$p(o_t = 1 \,|\, y_t, s_t) = \exp(-q(y_t, s_t)). \quad (18)$$

In this work, we define the optimal motion plan as the most likely trajectory when the robot's (stochastic) dynamics is optimally controlled. From the close relationship between (14) and (16), the problem of finding the most likely trajectory under the optimally controlled dynamics can be converted into a Bayesian estimation problem. For the transformed estimation problem, the objective is to compute the MAP trajectory given the initial latent and global coordinates, $z_0 = (x_0, s_0)$, and the artificial observations, $o_{1:T} = 1$:

$$z^*_{1:T} = \arg\max_{z_{1:T}} \; p(z_{1:T} \,|\, z_0, o_{1:T} = 1). \quad (19)$$
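As a small illustration of (17), a sketch of the planar update of the global coordinates is given below; the state layout and the noise scale sigma_s are assumptions.

```python
import numpy as np

def propagate_global(s, y_bar, sigma_s=0.0):
    """Global-coordinate update, Eq. (17). s = (p_x, p_y, theta); y_bar holds
    the forward/lateral velocities and the yaw rate taken from the pose."""
    theta = s[2]
    Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
    return s + Rz @ y_bar + sigma_s * np.random.randn(3)
```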

III-C Dynamic Programming for Computing MAP Trajectory using Particle Filter

The estimation problem in (19) includes two sources of difficulty: a long time horizon and a continuous space. The first difficulty can be addressed by exploiting the Markov property of the probability model. The posterior probability of the trajectory then factorizes along the time axis as:

$$p(z_{1:T} \,|\, z_0, o_{1:T} = 1) \propto \prod_{t=1}^{T} p(z_t \,|\, z_{t-1})\, p(o_t = 1 \,|\, z_t). \quad (20)$$

If the state space were a discrete set, the estimation problem above could be solved by a simple dynamic programming (DP) procedure called the Viterbi algorithm [26]. However, the observation space, as well as the latent space in our problem, is continuous, so we need a scheme for approximating distributions over the continuous space. In this work, the approximate inference algorithm introduced in [27] is adopted to address this difficulty. The algorithm discretizes the continuous state space by the particles obtained from the particle filter (PF) procedure, which are expected to span the valid regions of the state space due to the resampling procedure. While the PF procedure builds the approximate discrete state space, the Viterbi algorithm recursively computes the MAP trajectory over the approximated discrete space.

1: z_0^(i) ← (x_0, s_0), δ_0(i) ← 0, w_0^(i) ← 1/N for i = 1,…,N ▹ Initialization
2: for t = 1,…,T do ▹ Recursion
3:     for i = 1,…,N do
4:         ε_t^(i) ∼ N(0, I_d) ▹ sample exploration noise
5:         s_t^(i) ∼ N(s_{t−1}^(i) + R_z(θ_{t−1}^(i)) ȳ_{t−1}^(i), Σ_s) ▹ global coordinates, (17)
6:         x_t^(i) ← μ_X(x_{t−1}^(i)) + σ_X(x_{t−1}^(i)) ε_t^(i) ▹ latent dynamics, (4)
7:         y_t^(i) ← μ_Y(x_t^(i)) ▹ pose realization, (6) with w_t = 0
8:         ψ_t(i) ← argmax_j [δ_{t−1}(j) + log p(z_t^(i) | z_{t−1}^(j))] ▹ best parent node
9:         δ_t(i) ← δ_{t−1}(ψ_t(i)) + log p(z_t^(i) | z_{t−1}^(ψ_t(i))) + log p(o_t = 1 | z_t^(i))
10:        w_t^(i) ∝ w_{t−1}^(i) p(o_t = 1 | z_t^(i)) ▹ PF weight update
11:    end for
12:    Normalize {w_t^(i)} and compute N_eff = 1 / Σ_i (w_t^(i))²
13:    Resample if N_eff < N_threshold
14: end for
15: i* ← argmax_i δ_T(i) ▹ Termination
16: ẑ_T ← z_T^(i*)
17: for t = T−1,…,0 do ▹ Backtracking
18:    i* ← ψ_{t+1}(i*)
19:    ẑ_t ← z_t^(i*)
20: end for
21: return ẑ_{0:T}
Algorithm 1 Viterbi algorithm with particle filtering

The proposed algorithm, shown in Algorithm 1, is based on a DP procedure like the Viterbi algorithm and consists of a forward recursion and backtracking. In the forward recursion, the algorithm constructs the discrete state space by propagating particles of the PF and computes the optimal (partial) trajectories using the DP recursion up to the current time step. When the final time is reached, it finds the most likely final state, and the backtracking procedure constructs the optimal trajectory from the chosen state in the backward direction. In detail, through the forward recursion, the augmented latent variables are propagated as in (4) and (17), and the corresponding pose is realized (lines 4–7). We restrict the noise in the GP realization (6) to zero, i.e., $y_t = \mu_Y(x_t)$, because this noise only makes the resulting motions rougher; this restriction can easily be removed if more complex poses are necessary for the planning. Taking the log of (20) simplifies the posterior further into a summation:

$$\log p(z_{1:T} \,|\, z_0, o_{1:T} = 1) = \sum_{t=1}^{T} \left[\log p(z_t \,|\, z_{t-1}) + \log p(o_t = 1 \,|\, z_t)\right] + \mathrm{const}, \quad (21)$$

with the constraint that each $z_t$ is restricted to the particle set $\{z_t^{(i)}\}_{i=1}^{N}$. Then the parent nodes that maximize the log-posteriors up to the current state are determined, and the log-posteriors of the partial trajectories are computed (lines 8–9). Note that in line 8, $\log p(z_t^{(i)} \,|\, z_{t-1}^{(j)})$ has a simple closed form because the dynamics of $x_t$ and $s_t$ follow the Gaussian distributions in (4) and (17), respectively. By computing the weights of the particles in line 10, the algorithm can perform the resampling procedure, which helps the discrete approximation of the space span valid regions of the original continuous space; states having low posterior probabilities (or, equivalently, high costs in the planning problem) up to the current time step tend not to be resampled (lines 12–13). After the forward recursion, the algorithm picks the most likely final state (lines 15–16), and then constructs the whole trajectory by backtracking through its ancestry (lines 17–20).
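The forward recursion and backtracking can be summarized compactly in code. Below is a minimal, self-contained sketch of the particle-based Viterbi recursion (not the authors' implementation); the helper callables are assumptions, and resampling is omitted for brevity.

```python
import numpy as np

def particle_viterbi(z0, T, N, propagate, trans_logpdf, obs_logpdf):
    """MAP trajectory over a particle-filter discretization (cf. Algorithm 1).
      propagate(z)        -> sampled next augmented state z'
      trans_logpdf(z2,z1) -> log p(z2 | z1), closed form via (4) and (17)
      obs_logpdf(z)       -> log p(o = 1 | z) = -q(z)
    """
    Z = [np.array([z0] * N)]          # particle sets per time step
    delta = np.zeros(N)               # log-posterior of best partial paths
    parents = []                      # back-pointers
    for t in range(1, T + 1):
        z_new = np.array([propagate(z) for z in Z[-1]])
        # DP recursion: score every (new particle, parent) pair, O(N^2)
        trans = np.array([[trans_logpdf(zi, zj) for zj in Z[-1]]
                          for zi in z_new])
        scores = delta[None, :] + trans
        psi = scores.argmax(axis=1)
        delta = scores[np.arange(N), psi] + \
            np.array([obs_logpdf(z) for z in z_new])
        parents.append(psi)
        Z.append(z_new)
    # Termination: most likely final particle; then backtrack its ancestry.
    i = int(delta.argmax())
    path = [Z[T][i]]
    for t in range(T - 1, -1, -1):
        i = int(parents[t][i])
        path.append(Z[t][i])
    return path[::-1]
```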

III-D Multiscale Acceleration with Path Integral Control

The proposed algorithm approximates the continuous state space at time $t$ by using the PF up to that time. If only a small number of particles is used, they cannot expand the valid state space, as shown in Fig. 5(a), where all particles failed to reach the goal region. Consequently, when a problem has a complex cost function (induced by environmental or task complexity), we cannot help but use a large number of particles to span the space effectively. Using a small number of particles, however, is crucial because the computational complexities of the PF and the DP procedures are $O(N)$ and $O(N^2)$, respectively. If particles are guided into better regions by a useful heuristic, better sample efficiency can be achieved. This is closely related to the idea of the auxiliary PF [28], where the particles are propagated considering the future measurement. Trivially, the best sample efficiency would be achieved by propagating particles using the optimal control policy of the original OC problem [29], but this is, of course, unrealistic because we do not have the optimal control policy before solving the problem.

1: Input: approximate solution of the level-(ℓ+1) problem, ū^{ℓ+1}.
2: Create ū by up-sampling ū^{ℓ+1} by a factor of m_{ℓ+1}/m_ℓ.
3: for k = 1,…,T_ℓ do
4:     for i = 1,…,N_ℓ do
5:         ε_k^(i) ∼ N(0, I_d)
6:         x_k^(i) ← μ_X(x_{k−1}^(i)) + Σ_X^{1/2}(x_{k−1}^(i)) (ū_k Δt_ℓ + √(Δt_ℓ) ε_k^(i)) ▹ guided dynamics, (24)
7:         s_k^(i) ∼ N(s_{k−1}^(i) + R_z(θ_{k−1}^(i)) ȳ_{k−1}^(i), Σ_s) ▹ (17)
8:         Append ε_k^(i) into E^(i) ▹ store the noise trajectory
9:         y_k^(i) ← μ_Y(x_k^(i)) ▹ pose realization
10:        w_k^(i) ∝ w_{k−1}^(i) p(o_k = 1 | z_k^(i)) ▹ weight update
11:    end for
12:    Normalize {w_k^(i)} and compute N_eff
13:    Resample {(x_k^(i), s_k^(i), E^(i), w_k^(i))} if N_eff < N_threshold
14: end for
15: ū^ℓ_k ← ū_k + (1/√(Δt_ℓ)) Σ_i w^(i) ε_k^(i), ∀k ▹ Equation (25)
16: return ū^ℓ
Algorithm 2 PI control with PF at level ℓ

In this section, we introduce an efficient multiscale method that guides particles into better regions. With the discrete time-step $\Delta t$, $t_k = k\Delta t$, and, with a slight abuse of notation, $x_k = x(t_k)$, $u_k = u(t_k)$, the OC problem in (8)–(11) can be viewed as the discretized version of the following continuous-time OC problem:

$$dx = f(x)\,dt + \sigma(x)\,(u\,dt + dw), \quad (22)$$
$$C(u) = \mathbb{E}\left[q_f(x(T)) + \int_0^T \left(q(x(t)) + \tfrac{1}{2}\|u(t)\|^2\right) dt\right], \quad (23)$$

because the KL-divergence term in (11) can be interpreted as a quadratic control cost, i.e., $D_{KL}\big(\pi(\cdot|x_k)\,\|\,p(\cdot|x_k)\big) = \tfrac{1}{2}\|u_k\|^2 \Delta t$. (Note that the KL divergence between Gaussian distributions having different means but the same covariance $\Sigma$ is given by $\tfrac{1}{2}(\mu_1 - \mu_2)^\top \Sigma^{-1} (\mu_1 - \mu_2)$; in the case of (22), $\mu_1 - \mu_2 = \sigma(x) u \Delta t$ and $\Sigma = \sigma(x)\sigma(x)^\top \Delta t$, which results in this interpretation.) The idea of the proposed method in this section is to sequentially solve approximate discretized problems with different time-steps in the coarse-to-fine direction, where the solution of a coarser approximate problem is utilized as a prior for the finer approximate problem. Because the discrete time-length is shorter in the coarser approximate problem, a larger number of particles can be used while maintaining the computational complexity. We adopt the path integral control method to obtain the optimal control, which can also be solved efficiently within the "optimal control via inference" perspective.
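Spelling out the identity in the note above (a standard property of Gaussians; we additionally assume $\sigma(x)$ is invertible):

```latex
\begin{aligned}
D_{\mathrm{KL}}\!\left(\mathcal{N}(\mu_1,\Sigma)\,\middle\|\,\mathcal{N}(\mu_2,\Sigma)\right)
  &= \tfrac{1}{2}\,(\mu_1-\mu_2)^{\top}\Sigma^{-1}(\mu_1-\mu_2) \\
  &= \tfrac{1}{2}\,(\sigma u\,\Delta t)^{\top}\left(\sigma\sigma^{\top}\Delta t\right)^{-1}(\sigma u\,\Delta t)
     \qquad (\mu_1-\mu_2=\sigma u\,\Delta t,\ \ \Sigma=\sigma\sigma^{\top}\Delta t)\\
  &= \tfrac{1}{2}\,\|u\|^{2}\,\Delta t .
\end{aligned}
```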

Let the original OC problem be level 0 and $m_\ell$ be the number of time steps aggregated at level $\ell$. That is, the level-$\ell$ OC problem is the discretized version of the continuous-time OC problem (22)–(23) with time-step $\Delta t_\ell = m_\ell \Delta t$. The level-$\ell$ dynamics is guided by the level-$(\ell+1)$ optimal control $\bar{u}$ and is therefore given by

$$x_{k+1} = \mu_X(x_k) + \Sigma_X^{1/2}(x_k)\left(\bar{u}_k \Delta t_\ell + \sqrt{\Delta t_\ell}\,\epsilon_k\right), \quad \epsilon_k \sim \mathcal{N}(0, I), \quad (24)$$

where $T_\ell = T/m_\ell$ is the discrete time-length. Suppose a set of simulated noise trajectories $\{\epsilon^{(i)}_{1:T_\ell}\}$ and weights $\{w^{(i)}\}$ is obtained from the PF, which approximates the optimally controlled trajectory distribution. The path integral control algorithm computes the optimal control solution that minimizes the KL divergence between the controlled trajectory distribution and the optimal one via moment matching [30], and the optimal control is given as:

$$\bar{u}^*_k = \bar{u}_k + \frac{1}{\sqrt{\Delta t_\ell}} \sum_i w^{(i)}\,\epsilon_k^{(i)}. \quad (25)$$

Such a procedure at level $\ell$ is summarized in Algorithm 2. First, the approximate solution from the upper level is up-sampled by a factor of $m_{\ell+1}/m_\ell$ (line 2), where the nominal control is zero at the coarsest level. Guided by this, the algorithm propagates particles (lines 5–7), realizes the poses of the robot (line 9), and evaluates the particles' weights (lines 10, 12). The algorithm stores the particles' coordinates and the noises that propagated the particles in the form of a trajectory, and resamples them when necessary (lines 8 and 13, respectively). After the filtering procedure, the approximate optimal control is computed as in (25), stretched back by a factor of $m_\ell$, and returned (lines 15–16). With this procedure, the optimal control is sequentially computed from the coarsest level, $\ell = L$, to the finest level, $\ell = 1$. Finally, the finest solution is utilized to guide the particle propagation of Algorithm 1; that is, in line 6 of Algorithm 1, the mean of the particle propagation is shifted by the guidance term as in (24). With this guidance, only a small number of particles is expected to be enough to expand the valid state space.
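A minimal sketch of the update in (25), computed from the stored noise trajectories and particle weights, is given below; the function names and the temperature parameter lam are assumptions, with the exponentiated-cost weighting following the path-integral form.

```python
import numpy as np

def path_integral_update(u_bar, eps, costs, dt, lam=1.0):
    """Path-integral control update, cf. Eq. (25).
    u_bar: (T, d) nominal control; eps: (M, T, d) stored noise trajectories;
    costs: (M,) trajectory costs; dt: time step of the current level."""
    w = np.exp(-(costs - costs.min()) / lam)   # exponentiated negative costs
    w = w / w.sum()                            # normalized weights
    # weighted average of the exploration noises shifts the nominal control
    return u_bar + np.einsum('m,mtd->td', w, eps) / np.sqrt(dt)
```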

Fig. 2: Resulting trajectories for experiment 1: the MAP trajectory with the cost function (a) without and (b) with collision checking.
Fig. 3: Resulting trajectories for experiment 2 from various initial configurations. The goal region is marked by the black circle. The poses are drawn every (a–c) 30 and (d) 15 steps, i.e., every 1 and 0.5 seconds, respectively.
Fig. 4: Representation of various motions in the 4-D latent space, projected onto a subspace: (a, c) initial and (b, d) learned models. The points are colored by (a, b) the yaw rate and by (c, d) the forward velocity of the Mocap data.

IV Example: Humanoid Locomotion Planning

IV-A Training Data and Hyperparameter Settings

As an illustrative example, we specifically consider a humanoid robot with a 56-dimensional configuration space. The Carnegie Mellon University motion capture (CMU mocap) database is used to learn the latent variable model. The 56-dimensional configurations consist of the angles of all joints, the roll and pitch angles and vertical position of the center of the spine (the root), the yaw rate of the root, and the horizontal velocity of the root. The original data were recorded at 120 Hz, but we down-sampled them to 30 Hz to decrease the size of the observation set while maintaining enough quality of motion to capture the valid temporal information.

For the hyperparameters, we utilized the commonly-used radial basis function (RBF) kernels with white-noise terms for $k_X$ and $k_Y$, i.e., kernels of the form $k(x, x') = \theta_1 \exp\!\left(-\tfrac{\theta_2}{2}\|x - x'\|^2\right) + \theta_3\,\delta_{x,x'}$, where $\delta$ is a Kronecker delta function. As is widely used, we used the uninformative prior on the kernel hyperparameters, $p(\theta) \propto \prod_i \theta_i^{-1}$, which has shown effective regularization [6, 7]. We also used the back-constraints introduced in [8]: because locomotion has some periodicity, the corresponding latent variable model does as well. First, we extracted the phase $\phi$ of the motion and augmented it to the observation. Then, the back-constraints were used such that the last two latent dimensions had periodic structures, i.e., proportional to $\cos\phi$ and $\sin\phi$, respectively. Moreover, the standard RBF back-constraints were used in the other dimensions. The GPmat toolbox [31] is utilized to learn the latent variable models.
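A minimal sketch of such a back-constraint map under these assumptions is given below; the kernel width, weight shapes, and function names are illustrative.

```python
import numpy as np

def rbf(y1, y2, gamma=1.0):
    """RBF similarity measured in the observation space."""
    return np.exp(-0.5 * gamma * np.sum((y1 - y2) ** 2))

def back_constraint(Y, A, phase, gamma=1.0):
    """Map observations to latent coordinates: kernel regression
    x_n = sum_j A[j] * k(y_n, y_j) for the non-periodic dimensions, plus
    (cos, sin) of the extracted gait phase for the last two dimensions."""
    K = np.array([[rbf(yn, yj, gamma) for yj in Y] for yn in Y])  # (N, N)
    x_free = K @ A                            # (N, d-2) kernel regression
    x_per = np.column_stack([np.cos(phase), np.sin(phase)])       # (N, 2)
    return np.column_stack([x_free, x_per])   # (N, d)
```

The learning then optimizes the regression weights A (and the model hyperparameters) instead of the latent coordinates directly, via the chain rule as described in Section II-A.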

IV-B Results

For the first numerical experiment, we trained the model using the data sets for walking, fast walking, and jogging. The latent space was set to be 3-D, where the first and the last two dimensions were initialized as being associated with the forward velocity of the root and the phases, respectively. Note that the optimization problem of GP-LVMs is non-convex and the learning algorithm is based on gradient descent [6], so proper initialization is crucial. The task for the first experiment was to move forward without stepping on the red lines shown in Fig. 2. We set the cost function to penalize deviations from a desired heading angle, a desired position, and a desired forward velocity, and to penalize the foot touching the red lines. The foot positions were computed through forward kinematics for the foot joints, given the robot pose and global coordinates. 500 particles were used in Algorithm 1 with time horizon $T$. In addition, we ignored the randomness in the last two dimensions of the latent space (which are for the phase), i.e., only the learned mean dynamics was used for the phase, because we found that the noise in those dimensions made the resulting motion too jerky and was not helpful for the planning. Figs. 2(a) and (b) show the MAP trajectory without and with collision checking, respectively. The proposed algorithm could find natural movements that step over the forbidden regions, while the motion from the passive dynamics steps on the red lines. In addition, even though our training data consisted of around 3–4 cycles of each motion, the learned generative model continuously produced natural, longer movements.

For the second experiment, we added the data sets for left and right turns so that the trajectory could make detours around obstacles. The latent space was set to be 4-D here, where the first two dimensions were initialized as corresponding to the yaw rate and forward velocity of the root, while the last two dimensions were for the phases, as in the first example. The cost function penalized the situations in which the robot collided with any obstacle, left the domain, or reached positions too distant from the goal region: the collision and out-of-domain terms take a constant penalty value when violated and are 0 otherwise, and the goal term is the square of the distance to the goal region, computed using the FMT* algorithm [2]. The domain and the goal region are as shown in Fig. 3, with the goal region marked by the black circle. 1000 particles were used in Algorithm 1 with time horizon $T$ for the walking and running tasks. We ran our algorithm from various initial positions and orientations, and the resulting motion trajectories are shown in Fig. 3. The proposed algorithm was able to generate smooth and natural motion sequences toward the goal region without collision. Fig. 4 shows the learned 4-D latent space constructed from the various motions. It is observed that, while deviating from the initial guess (Figs. 4(a) and (c)), the motions are well organized in the latent space (Figs. 4(b) and (d)). We observed that the learned model separated the walking and running motions into two clusters and embedded the transition motion from standing to running as a small "bridge" between them. As a result, the learned generative model hardly produced transitions between walking and running; we expect that a greater variety of transition motions is necessary to make the model more flexible. It can also be seen in the supplementary video that the initialized latent variable model cannot generate proper natural motions. We refer readers to the supplementary video for further visualization.

Fig. 5: The particles from (a) the naive PF approach with 50 particles and from the recursive multiscale procedure at (b)–(e) successively finer levels.
Case          PF     DP      CPU (sec)   Env. 1   Env. 2   Env. 3
N = 50        ×1     ×1      13.25       100%     23%      9%
N = 500       ×10    ×100    140.16      100%     82%      77%
N = 1000      ×20    ×400    262.46      100%     98%      95%
Multi-Scale   –      ×1      87.17       100%     85%      85%
TABLE I: Computational complexity and success rate

Finally, we tested the multiscale acceleration method. The multiscale aggregation factors $m_\ell$ and the numbers of particles at each level were chosen such that 50 particles are used at the original scale. These multiscale parameters should be chosen carefully, because the approximate solution (25) may fail to guide the finer-level dynamics properly if the gap between scales is too large. Figs. 5(b)–(e) depict the particles from the proposed recursive procedure. It is observed that, as the approximation level decreases, a smaller number of particles is well-guided toward the goal region; the larger number of particles successfully expanded the state space at the higher levels, and then only 50 particles sufficed to be propagated toward the goal at the original scale. We also compared the performance of Algorithm 1 with various numbers of particles in environments having different obstacles: environment 3 is the same as that of Fig. 3(a), environment 2 had only 2 obstacles close to the goal region, and environment 1 had no obstacles; the initial state and the goal region were the same as those of Fig. 3(a). The comparison result is shown in Table I. The second, third, and fourth columns denote the relative computational complexities of the PF, i.e., $O(N)$, of the DP, i.e., $O(N^2)$, and the computation time of the whole planning procedure, respectively. Note that, for the algorithm with multiscale acceleration, the computational complexity of the DP is the same as in the 50-particle case because the same number of particles is used at the original scale, while the complexity of the PF is a few times larger because more particles are used at the coarser levels. We found that most of the computational budget was spent on GP realizations and on forward kinematics for collision checking, both of which are proportional to the number of training data, the number of particles, and the planning horizon. Therefore, the planning algorithm could be accelerated further by using the ideas of sparse GP-LVMs [6] and more efficient forward kinematics and collision-checking methods; we leave this as future work. The last three columns show the rate at which at least one particle of each algorithm reached the goal region. Table I shows that, although a larger number of particles must be used as the environment becomes complex, which results in huge computational complexity, the proposed multiscale method maintains performance while keeping the computational complexity moderate.

V Conclusions

In this work, we proposed an efficient framework for generating the motion trajectory of a robot with high degrees of freedom. The framework includes a probabilistic generative model of the motion sequence, learned from demonstration data using the GP-LVM method. The constructed low-dimensional model was then combined with the probabilistic model of the OC problem. Finally, we proposed an efficient approximate MAP trajectory estimation algorithm that utilizes a dynamic programming procedure, together with a multiscale path integral control method that increases the sample efficiency.

References

  • [1] S. Karaman and E. Frazzoli, “Sampling-based algorithms for optimal motion planning,” The International Journal of Robotics Research, vol. 30, no. 7, pp. 846–894, 2011.
  • [2] L. Janson, E. Schmerling, A. Clark, and M. Pavone, “Fast marching tree: A fast marching sampling-based method for optimal motion planning in many dimensions,” The International Journal of Robotics Research, pp. 883–921, 2015.
  • [3] E. Todorov and W. Li, “A generalized iterative LQG method for locally-optimal feedback control of constrained nonlinear stochastic systems,” in American Control Conference, 2005. Proceedings of the 2005.   IEEE, 2005, pp. 300–306.
  • [4] M. Toussaint, “Robot trajectory optimization using approximate inference,” in Proceedings of the 26th Annual International Conference on Machine Learning.   ACM, 2009, pp. 1049–1056.
  • [5] P. Vernaza and D. D. Lee, “Learning and exploiting low-dimensional structure for efficient holonomic motion planning in high-dimensional spaces,” The International Journal of Robotics Research, vol. 31, no. 14, pp. 1739–1760, 2012.
  • [6] N. Lawrence, “Probabilistic non-linear principal component analysis with Gaussian process latent variable models,” Journal of Machine Learning Research, vol. 6, pp. 1783–1816, 2005.
  • [7] J. M. Wang, D. J. Fleet, and A. Hertzmann, “Gaussian process dynamical models for human motion,” IEEE transactions on pattern analysis and machine intelligence, vol. 30, no. 2, pp. 283–298, 2008.
  • [8] R. Urtasun, D. J. Fleet, A. Geiger, J. Popović, T. J. Darrell, and N. D. Lawrence, “Topologically-constrained latent variable models,” in Proceedings of the 25th international conference on Machine learning.   ACM, 2008, pp. 1080–1087.
  • [9] S. Levine, J. M. Wang, A. Haraux, Z. Popović, and V. Koltun, “Continuous character control with low-dimensional embeddings,” ACM Transactions on Graphics (TOG), vol. 31, no. 4, p. 28, 2012.
  • [10] R. Urtasun, D. J. Fleet, and P. Fua, “3D people tracking with gaussian process dynamical models,” in Computer Vision and Pattern Recognition, 2006 IEEE Computer Society Conference on, vol. 1.   IEEE, 2006, pp. 238–245.
  • [11] D. P. Kingma and M. Welling, “Auto-encoding variational bayes,” International Conference on Learning Representations, 2014.
  • [12] N. Chen, M. Karl, and P. van der Smagt, “Dynamic movement primitives in latent space of time-dependent variational autoencoders,” in Humanoid Robots (Humanoids), 2016 IEEE-RAS 16th International Conference on.   IEEE, 2016, pp. 629–636.
  • [13] B. Ichter, J. Harrison, and M. Pavone, “Learning sampling distributions for robot motion planning,” in 2018 IEEE International Conference on Robotics and Automation (ICRA), 2018.
  • [14] C. Finn and S. Levine, “Deep visual foresight for planning robot motion,” in Robotics and Automation (ICRA), 2017 IEEE International Conference on.   IEEE, 2017, pp. 2786–2793.
  • [15] J.-S. Ha, Y.-J. Park, H.-J. Chae, S.-S. Park, and H.-L. Choi, “Adaptive path-integral approach for representation learning and planning,” International Conference on Learning Representations Workshop, 2018.
  • [16] ——, “Adaptive path-integral approach to representation learning and planning for dynamical systems,” arXiv preprint arXiv:1807.02128, 2018.
  • [17] E. Todorov, “General duality between optimal control and estimation,” in Decision and Control, 2008. CDC 2008. 47th IEEE Conference on.   IEEE, 2008, pp. 4286–4292.
  • [18] K. Rawlik, M. Toussaint, and S. Vijayakumar, “On stochastic optimal control and reinforcement learning by approximate inference,” in Robotics: Science and Systems, 2012.
  • [19] E. Theodorou, J. Buchli, and S. Schaal, “A generalized path integral control approach to reinforcement learning,” The Journal of Machine Learning Research, vol. 11, pp. 3137–3181, 2010.
  • [20] E. Todorov, “Finding the most likely trajectories of optimally-controlled stochastic systems,” IFAC Proceedings Volumes, vol. 44, no. 1, pp. 4728–4734, 2011.
  • [21] P. Hämäläinen, J. Rajamäki, and C. K. Liu, “Online control of simulated humanoids using particle belief propagation,” ACM Transactions on Graphics (TOG), vol. 34, no. 4, p. 81, 2015.
  • [22] M. Mukadam, J. Dong, X. Yan, F. Dellaert, and B. Boots, “Continuous-time gaussian process motion planning via probabilistic inference,” arXiv preprint arXiv:1707.07383, 2017.
  • [23] J.-S. Ha and H.-L. Choi, “A topology-guided path integral approach for stochastic optimal control,” in IEEE International Conference on Robotics and Automation (ICRA), 2016.
  • [24] N. D. Lawrence and J. Quiñonero-Candela, “Local distance preservation in the GP-LVM through back constraints,” in Proceedings of the 23rd international conference on Machine learning.   ACM, 2006.
  • [25] E. Todorov, “Efficient computation of optimal actions,” Proceedings of the national academy of sciences, vol. 106, no. 28, 2009.
  • [26] D. Koller and N. Friedman, Probabilistic graphical models: principles and techniques.   MIT press, 2009.
  • [27] S. Godsill, A. Doucet, and M. West, “Maximum a posteriori sequence estimation using monte carlo particle filters,” Annals of the Institute of Statistical Mathematics, vol. 53, no. 1, pp. 82–96, 2001.
  • [28] M. K. Pitt and N. Shephard, “Filtering via simulation: Auxiliary particle filters,” Journal of the American statistical association, vol. 94, no. 446, pp. 590–599, 1999.
  • [29] S. Thijssen and H. Kappen, “Path integral control and state-dependent feedback,” Physical Review E, vol. 91, no. 3, p. 032104, 2015.
  • [30] G. Williams, P. Drews, B. Goldfain, J. M. Rehg, and E. A. Theodorou, “Aggressive driving with model predictive path integral control,” in 2016 IEEE International Conference on Robotics and Automation (ICRA), 2016, pp. 1433–1440.
  • [31] N. D. Lawrence, M. Alvarez, D. Luengo, A. Honkela, J. Peltonen, C. H. Ek, M. Titsias, N. J. King, A. C. Damianou, P. Gao, M. Rattray, T. de Campos, A. Baisero, R. Floyrac, P. Sauer, A. J. Moore, A. A. Kalaitzis, A. Damianou, and T. P. Centeno, “GPmat.” [Online]. Available: https://github.com/SheffieldML/GPmat