Efficient Hierarchical Robot Motion Planning Under Uncertainty and Hybrid Dynamics

02/12/2018 · by Ajinkya Jain, et al. · The University of Texas at Austin

Noisy observations coupled with nonlinear dynamics pose one of the biggest challenges in robot motion planning. By decomposing the nonlinear dynamics into a discrete set of local dynamics models, hybrid dynamics provide a natural way to model nonlinear dynamics, especially in systems with sudden "jumps" in the dynamics, due to factors such as contacts. We propose a hierarchical POMDP planner that develops locally optimal motion plans for hybrid dynamics models. The hierarchical planner first develops a high-level motion plan to sequence the local dynamics models to be visited. The high-level plan is then converted into a detailed cost-optimized continuous state plan. This hierarchical planning approach results in a decomposition of the POMDP planning problem into smaller sub-parts that can be solved with significantly lower computational costs. The ability to sequence the visitation of local dynamics models also provides a powerful way to leverage the hybrid dynamics to reduce state uncertainty. We evaluate the proposed planner for two navigation and localization tasks in simulated domains, as well as an assembly task with a real robotic manipulator.


1 Introduction

One of the biggest challenges in robot motion planning is to develop feasible motion plans for systems having highly nonlinear dynamics in the presence of partial or noisy observations. Often, these nonlinearities are caused by sudden transitions or discontinuities in the dynamics (for example, due to contacts in a robot manipulation task). When task dynamics change suddenly in state space, even small state estimation errors can lead to large deviations and plan failure. Therefore, reasoning about uncertainty over states becomes crucial in order to develop robust motion plans. Planning problems under uncertainty are often represented as a partially observable Markov decision process (POMDP) [1]. POMDP problems have been shown in the literature to be PSPACE-complete [2], making exact planning intractable.

To make planning tractable, POMDP planners typically leverage various types of approximations [3, 4, 5, 6, 7, 8, 9] or structural assumptions [8, 10, 11, 12, 13] that simplify the problem. In this work, we propose to leverage a natural, simplifying assumption that the nonlinear dynamics of robot motion planning tasks can be decomposed into a discrete set of simpler local dynamics models, of which only one is active at any given time (e.g. a change in dynamics due to contact). Note that these local dynamics models may be approximate, especially when they are learned from data or are a simplification of a complex underlying model. A complete dynamics model can then be defined as a hybrid dynamics model having hybrid states comprised of the continuous states of the system along with a discrete state denoting the active local dynamics model.

The primary contribution of this work is a novel POMDP planner that plans in a hybrid belief space, allowing for efficient information gathering and planning under uncertainty with hybrid dynamics. We define a hybrid belief to represent uncertainties over the robot state and the active dynamics model and formulate hybrid belief propagation equations. Using the hybrid belief representation, a hierarchical POMDP motion planner is presented that solves the POMDP problem by dividing it into two levels: at the higher level, discrete state plans are generated to find a sequence of local models that should be visited during the task, and at the lower level, these discrete state plans are converted into cost-optimized continuous state belief-space plans.

The biggest advantage of dividing the planning problem into two levels is that it breaks long-horizon planning problems into multiple smaller segments that can be sequenced to find a complete solution. Since POMDP planning becomes exponentially more difficult with longer horizons [2], a hierarchical approach breaks the problem into chunks that can be solved with significantly less effort. Another major benefit of discrete state planning is that the planner can choose to leverage a specific local dynamics model in order to improve the effectiveness of the generated plans. For example, if it is known a priori that in the $i$-th local dynamics model motion is allowed only along a particular vector (e.g., due to the presence of a wall), the planner can use this model to reduce state uncertainty along the dimensions orthogonal to the allowed motion vector. This indirect feedback for uncertainty reduction is critical for tasks in which observations are highly noisy or even entirely unavailable (for example, due to occlusions).

1.1 Related Work

Broadly, POMDP solving approaches can be divided into two categories based on whether their state, action and observation spaces are discrete or continuous. Discrete space POMDP solvers, in general, either approximate the value function using point-based methods [6, 7] or use Monte-Carlo sampling in the belief space [14, 15] to make the POMDP problem tractable. Continuous space POMDP solvers often approximate the belief over states as a distribution having finite parameters (typically Gaussian) and either solve the problem analytically using gradients [11, 12, 13] or use random sampling in the belief space [8, 9]. Other approaches have also extended point-based methods to continuous domains [7].

Discrete space POMDP solvers have been shown to successfully plan for large discrete domains. However, continuous space domains are infinite-dimensional, and discrete space solvers often fail to find feasible solutions for planning horizons longer than a few steps [15]. Among continuous space POMDP solvers, Agha-Mohammadi et al. [8] and Hollinger and Sukhatme [9] have proposed sampling-based methods that can find effective solutions even in complex domains. However, most sampling-based methods can return sub-optimal solutions and are, at best, probabilistically optimal [16]. POMDP solvers for hybrid domains, such as the one discussed in this work, have been previously discussed by Brunskill et al. [4], Sreenath et al. [10] and Agha-mohammadi et al. [17]. In the work most closely related to ours, Brunskill et al. [4] proposed a point-based POMDP planning algorithm, the SM-POMDP planner, which approximates the nonlinear system dynamics using a multi-modal hybrid dynamics model. However, unlike our POMDP planner, the SM-POMDP planner plans only in the continuous domain, and the discrete states are obtained “passively” through the switching conditions.

Pineau et al. [3] and Toussaint et al. [5] have also previously proposed hierarchical POMDP planners. The planner developed by Pineau et al. [3] leverages a human-designed task hierarchy to reduce problem complexity, while Toussaint et al. [5] emphasize automatic discovery of hierarchy in state space using a dynamic Bayesian network. Although such approaches can work well for some robot control tasks, we believe that a more natural hierarchy of subtasks emerges automatically if a hybrid dynamics model is used to represent tasks with nonlinear dynamics, such as robot manipulation tasks involving contacts.

As hybrid dynamics models are very effective at modeling nonlinearities caused by sudden transitions in the dynamics, a natural application for the proposed POMDP solver is contact-rich robotic manipulation. One current approach to robot manipulation planning is to search for an optimal sequence of parameterized manipulation actions or primitives to perform the task [18, 19]. Kroemer et al. [19] have proposed representing primitives for the different phases (modes) of a multi-phase manipulation task using dynamic movement primitives (DMPs) and learning a library of such manipulation skills that can be sequenced to perform a task. Unfortunately, the lack of a task dynamics model prevents these methods from generalizing to novel manipulation tasks, e.g., tasks with different cost functions, even when the same objects are involved.

2 Background

POMDPs: Partially Observable Markov Decision Processes (POMDPs) provide a mathematical framework for the problem of sequential decision making under uncertainty [1]. Let $\mathcal{X}$ be the space of all possible states $x$ of the robot, $\mathcal{U}$ be the space of all possible control inputs $u$ and $\mathcal{Z}$ be the space of all possible sensor measurements $z$ the robot may receive. To account for state uncertainty, a distribution over the state of the robot given all past control inputs and sensor measurements is defined as the belief $b_t$, given as:

$b_t(x_t) = p(x_t \mid u_{0:t-1}, z_{1:t})$   (1)

where $x_t$, $u_t$ and $z_t$ are the robot's state, control input and received measurement at time step $t$, respectively, and $b_t \in \mathbb{B}$, with $\mathbb{B}$ the space of all possible beliefs. In the general case, we consider a stochastic dynamics and observation model for the process:

$x_{t+1} \sim p(x_{t+1} \mid x_t, u_t), \qquad z_t \sim p(z_t \mid x_t)$   (2)

For a given control input $u_t$ and a measurement $z_{t+1}$, belief propagation using Bayesian filtering can be written as:

$b_{t+1}(x_{t+1}) = \eta \, p(z_{t+1} \mid x_{t+1}) \int_{\mathcal{X}} p(x_{t+1} \mid x_t, u_t) \, b_t(x_t) \, dx_t$   (3)

where $\eta$ is a normalizing constant independent of $x_{t+1}$.
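To make Eq. (3) concrete, the following is a minimal Python sketch, not taken from the paper's implementation, of the Bayesian filtering update on a discretized one-dimensional state space; the shift dynamics and Gaussian sensor model are assumptions chosen purely for illustration.

```python
import numpy as np

# A minimal sketch of Eq. (3) on a discretized 1D state space: the belief is
# propagated through a transition kernel and reweighted by the observation
# likelihood. The kernel and likelihood below are illustrative assumptions.
def bayes_filter_update(b, trans, likelihood):
    """b: belief over grid cells; trans[i, j] = p(x'=j | x=i, u);
    likelihood[j] = p(z | x'=j). Returns the posterior belief."""
    prior = trans.T @ b        # prediction: integrate the dynamics
    post = likelihood * prior  # correction: weight by p(z | x')
    return post / post.sum()   # eta: normalize

n = 50
b = np.full(n, 1.0 / n)                # uniform initial belief
trans = np.roll(np.eye(n), 1, axis=1)  # deterministic shift-right dynamics
likelihood = np.exp(-0.5 * ((np.arange(n) - 30) / 3.0) ** 2)  # noisy sensor at cell 30
b = bayes_filter_update(b, trans, likelihood)
print(b.argmax())  # the belief concentrates near the observed cell
```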

Hybrid Dynamics: A hybrid dynamics model of a system is a dynamics model in which the states of the system evolve with time over both continuous space and a finite set of discrete states [20]. Each discrete state of the system corresponds to a separate dynamics model that governs the evolution of continuous states. These types of dynamical models are sometimes referred to as switched dynamical systems in the literature [21].

In a hybrid model, discrete state transitions of the system can be represented as a directed graph, with each possible discrete state corresponding to a node and edges $e_{ij} \in E$ marking possible transitions between the nodes. These transitions are conditioned on the continuous states: a transition from discrete state $s_i$ to another state $s_j$ happens if the continuous states lie in the guard set $G(e_{ij})$ of the edge $e_{ij}$, where $G: E \to 2^{\mathcal{X}}$ and $2^{\mathcal{X}}$ is the power set of $\mathcal{X}$. Thus, for each discrete state $s_i$ in a hybrid dynamics model we can define:

$x_{t+1} = f_i(x_t, u_t), \qquad z_t = h_i(x_t)$   (4)

where $x$, $u$, $z$, $f_i$ and $h_i$ are the continuous state, control input, observation variables, state dynamics and observation functions, respectively. Evolution of the discrete state of the system can be modeled by a finite state Markov chain. Defining the state transition matrix $T$ with entries $T_{ij} = p(s_{t+1} = s_j \mid s_t = s_i)$, the discrete state evolution can be given as:

$p(s_{t+1} = s_j) = \sum_i T_{ij} \, p(s_t = s_i)$   (5)
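As an illustration of these definitions, the sketch below encodes a toy hybrid model in Python; the mode dynamics, guard predicates, and transition matrix are invented for the example and are not from the paper.

```python
import numpy as np

# A minimal sketch of a hybrid dynamics model: each discrete state carries
# its own local dynamics f_i and observation function h_i, and each edge
# (i, j) carries a guard-set membership test.
class HybridModel:
    def __init__(self, modes, guards, T):
        self.modes = modes    # list of (f_i, h_i) pairs
        self.guards = guards  # dict: (i, j) -> predicate over continuous state x
        self.T = T            # discrete transition matrix, T[i, j] = p(s_j | s_i)

    def step(self, i, x, u):
        """Propagate the continuous state under mode i's local dynamics."""
        f, _ = self.modes[i]
        return f(x, u)

    def discrete_step(self, p_s):
        """Eq. (5): propagate a distribution over discrete states."""
        return self.T.T @ p_s

# Example: 1D free motion (mode 0) vs. motion blocked by a wall at x = 1 (mode 1).
modes = [
    (lambda x, u: x + u, lambda x: x),                    # free space
    (lambda x, u: np.minimum(x + u, 1.0), lambda x: x),   # wall constraint
]
guards = {(0, 1): lambda x: x >= 1.0, (1, 0): lambda x: x < 1.0}
T = np.array([[0.9, 0.1], [0.1, 0.9]])
model = HybridModel(modes, guards, T)
print(model.discrete_step(np.array([1.0, 0.0])))  # -> [0.9, 0.1]
```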

3 Hierarchical POMDP Planner

We propose to solve the problem of motion planning under uncertainty for tasks governed by highly nonlinear dynamics as a POMDP problem defined on a hybrid dynamics model. The different local dynamics models constituting the task dynamics are represented as distinct discrete states of the hybrid model. Under uncertainty over the robot state, a separate discrete distribution needs to be maintained to represent our confidence over the active local dynamics model at each time step. Jointly, a hybrid belief over the hybrid state of the system can be defined, with a continuous part representing uncertainty over the robot state and a discrete part representing uncertainty in the active local dynamics model. In this work, we assume that the continuous part of the hybrid belief is represented by a mixture of Gaussian distributions, each component having a mixing weight $w_k$, given as:

$b^x_t(x) = \sum_{k=1}^{K} w_{k,t} \, \mathcal{N}(x; \mu_{k,t}, \Sigma_{k,t}), \qquad \sum_{k=1}^{K} w_{k,t} = 1$   (6)
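A hybrid belief of this form can be represented compactly in code. The following sketch stores the Gaussian mixture of Eq. (6) together with the discrete-mode distribution; the class and field names are our own illustrative choices, not the paper's.

```python
import numpy as np
from dataclasses import dataclass

# A minimal sketch of the hybrid belief: a Gaussian mixture over the
# continuous state plus a categorical distribution over discrete modes.
@dataclass
class HybridBelief:
    weights: np.ndarray     # mixing weights w_k, sum to 1
    means: np.ndarray       # component means, shape (K, dim)
    covs: np.ndarray        # component covariances, shape (K, dim, dim)
    mode_probs: np.ndarray  # b^s: confidence over discrete dynamics models

    def pdf(self, x):
        """Evaluate the continuous belief b^x(x) as a weighted sum of Gaussians."""
        dim = self.means.shape[1]
        val = 0.0
        for w, mu, S in zip(self.weights, self.means, self.covs):
            d = x - mu
            norm = np.sqrt((2 * np.pi) ** dim * np.linalg.det(S))
            val += w * np.exp(-0.5 * d @ np.linalg.solve(S, d)) / norm
        return val

b = HybridBelief(weights=np.array([0.6, 0.4]),
                 means=np.array([[0.0, 0.0], [1.0, 1.0]]),
                 covs=np.stack([np.eye(2), 0.5 * np.eye(2)]),
                 mode_probs=np.array([0.7, 0.3]))
print(b.pdf(np.zeros(2)))
```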

3.1 Belief Propagation under Hybrid Dynamics

A hybrid belief is defined as $b_t = (b^x_t, b^s_t)$, where $b^x_t$ and $b^s_t$ correspond to the belief over the continuous robot state, $x_t$, and the discrete states, $s_t$, respectively. Propagation of hybrid beliefs using Bayesian filtering can be separated into two steps: making a prediction using the dynamics model to obtain a belief prior, and updating it based on the received observation to compute the belief posterior.

3.1.1 Belief Prior

We extend the system dynamics, $f_i$, for uncertainty propagation and represent the extended belief-space dynamics as $F_i$. At each time step $t$, we can propagate the current belief through the system dynamics of each discrete state individually and then take a weighted sum of the propagated belief set to obtain a belief prior for the next time step $t+1$:

$\bar{b}^x_{t+1} = \sum_i b^s_t[i] \; F_i(b^x_t, u_t)$   (7)

where $b^s_t[i]$ is the $i$-th component of $b^s_t$, and $x_t$, $s_t$ and $u_t$ represent the continuous states, discrete state, and continuous control input to the system at time $t$. Note that $f_i$ represents a general dynamics function and can be stochastic. Under stochastic continuous state dynamics, the definition of the discrete state transition matrix given in Equation 5 needs to be extended. Assuming the transitions of discrete states are given by a directed graph with self-loops, we can define the extended discrete state transition matrix at time $t$ as $\hat{T}_t$, where

$\hat{T}_{ij,t} = \eta \left( \mathbb{E}_{x \sim b^x_t}\!\left[ \mathbb{1}_{G(e_{ij})}(x) \right] + \epsilon \right)$   (8)

where $\mathbb{1}_{G(e_{ij})}$ is an indicator function defined as:

$\mathbb{1}_{G(e_{ij})}(x) = \begin{cases} 1 & \text{if } x \in G(e_{ij}) \\ 0 & \text{otherwise} \end{cases}$   (9)

where $\eta$ is a normalization constant, and $\epsilon$ is a small probability to handle cases when received observations do not correspond to any legal discrete transition. Calculating the extended discrete state transition matrix $\hat{T}_t$ at each time step using Eq. 8 can be computationally expensive. An approximation of $\hat{T}_t$ can be obtained by sampling random points from the belief over continuous states and, for each discrete state $s_i$, calculating the ratio of points lying in the guard set $G(e_{ij})$ to the total number of sampled points.
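This sampling approximation can be implemented in a few lines. The sketch below estimates $\hat{T}_t$ from points drawn from the continuous belief; the guard predicates, sampler, and $\epsilon$ value are illustrative assumptions rather than the paper's code.

```python
import numpy as np

# A minimal sketch of the sampling approximation: estimate T_hat[i, j] by the
# fraction of points drawn from the continuous belief b^x_t that fall in the
# guard set G(e_ij), with a small eps for otherwise-illegal transitions.
def estimate_transition_matrix(sample_belief, guards, n_states, n_samples=1000, eps=1e-3):
    """sample_belief() draws one point from b^x_t; guards[(i, j)] tests
    membership in G(e_ij). Returns a row-normalized matrix T_hat."""
    pts = np.array([sample_belief() for _ in range(n_samples)])
    T_hat = np.full((n_states, n_states), eps)
    for (i, j), in_guard in guards.items():
        T_hat[i, j] += np.mean([in_guard(x) for x in pts])
    for i in range(n_states):  # self-loops absorb the leftover probability mass
        T_hat[i, i] += max(0.0, 1.0 - T_hat[i].sum())
    return T_hat / T_hat.sum(axis=1, keepdims=True)  # eta: normalize each row

rng = np.random.default_rng(0)
guards = {(0, 1): lambda x: x[0] >= 1.0}  # illustrative guard: cross x = 1
T_hat = estimate_transition_matrix(lambda: rng.normal([0.8, 0.0], 0.2), guards, 2)
print(T_hat)
```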

3.1.2 Belief Posterior

We use a hybrid estimation algorithm based on Bayesian filtering to reduce the uncertainty over states using noisy continuous state observations. The proposed algorithm consists of two layers of filters: the first estimates the continuous states of the system and the second estimates the discrete states of the system. Upon receiving observation $z_{t+1}$, the continuous state prior is updated by taking a weighted sum over a bank of extended Kalman filters running independently, with each discrete mode having an individual filter. The weights for the sum are determined using the prior over the discrete modes, $\bar{b}^s_{t+1}$. The complete update step for the continuous states can be written as:

$b^x_{t+1}(x) = \sum_i \bar{b}^s_{t+1}[i] \; \mathcal{N}\big(x;\; \bar{\mu}_{i,t+1} + K_{i,t+1}\,(z_{t+1} - h_i(\bar{\mu}_{i,t+1})),\;\; (I - K_{i,t+1} H_{i,t+1})\,\bar{\Sigma}_{i,t+1}\big)$   (10)

where $K_{i,t+1}$ is the Kalman gain for discrete state $s_i$ at time $t+1$, $H_{i,t+1}$ is the Jacobian of $h_i$ at the predicted mean, and $\bar{b}^s_{t+1}[i]$ is the $i$-th component of $\bar{b}^s_{t+1}$. The update for the discrete state can be obtained by using a Bayesian filter update given as:

$b^s_{t+1} = \eta \, \big( \bar{b}^s_{t+1} \odot L_{t+1} \big)$   (11)

where $\bar{b}^s_{t+1} = \hat{T}_t^\top b^s_t$, $\odot$ is the element-wise multiplication operator, $\eta$ is a normalization constant and

$L_{t+1}[i] = p\big(z_{t+1} \mid s_{t+1} = s_i\big)$   (12)

where $h_i$, the observation function for state $s_i$, is used to evaluate this likelihood. Mixing weights for the mixture of Gaussians are also updated based on the received observations as

$w_{k,t+1} = \eta \; w_{k,t} \; p\big(z_{t+1} \mid \mu_{k,t+1}, \Sigma_{k,t+1}\big)$   (13)

A new mixture of Gaussians is then chosen to represent the continuous belief at the next step.
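The following Python sketch illustrates this two-layer update, using linear observation models $H_i$ for brevity; all matrices, dimensions, and the helper name are assumptions made for the example rather than the paper's implementation.

```python
import numpy as np

# A minimal sketch of the posterior update: a bank of Kalman updates (one per
# discrete mode) followed by the discrete-mode update of Eq. (11).
def hybrid_posterior(mu_bar, S_bar, bs_bar, H, R, z):
    """mu_bar[i], S_bar[i]: per-mode predicted mean/covariance; bs_bar: mode
    prior; H[i], R[i]: observation model/noise per mode; z: measurement."""
    n = len(bs_bar)
    mu, S, L = [], [], np.zeros(n)
    for i in range(n):
        innov = z - H[i] @ mu_bar[i]
        S_inn = H[i] @ S_bar[i] @ H[i].T + R[i]       # innovation covariance
        K = S_bar[i] @ H[i].T @ np.linalg.inv(S_inn)  # Kalman gain, Eq. (10)
        mu.append(mu_bar[i] + K @ innov)
        S.append((np.eye(len(mu_bar[i])) - K @ H[i]) @ S_bar[i])
        d = len(z)                                    # Eq. (12): likelihood of z under mode i
        L[i] = np.exp(-0.5 * innov @ np.linalg.solve(S_inn, innov)) / \
               np.sqrt((2 * np.pi) ** d * np.linalg.det(S_inn))
    bs = bs_bar * L                                   # Eq. (11): element-wise product
    return mu, S, bs / bs.sum()

mu, S, bs = hybrid_posterior([np.zeros(2), np.ones(2)],
                             [np.eye(2), np.eye(2)],
                             np.array([0.5, 0.5]),
                             [np.eye(2), np.eye(2)],
                             [0.1 * np.eye(2), 0.1 * np.eye(2)],
                             np.array([0.9, 1.1]))
print(bs)  # the mode whose prediction matches z becomes more likely
```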

3.2 Direct Planning

With the hybrid belief propagation equations defined, we can now use trajectory optimization techniques to solve the POMDP. Following Platt et al. [22], we assume maximum likely observations (MLO), obtained by propagating the current belief over continuous states through the system dynamics (Eqn. 7), as the true observations when developing locally optimal motion plans. In this work, the nonlinear optimization problem set up for trajectory optimization is posed as a sequential quadratic programming (SQP) problem and solved using the SNOPT software package [23, 24]. We denote this approach as the direct planning approach.
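As a rough illustration of the direct planning idea, the sketch below optimizes a control sequence under the MLO assumption, using SciPy's SLSQP solver as a stand-in for SNOPT; the single-integrator dynamics, horizon, and cost weights are assumptions for the example.

```python
import numpy as np
from scipy.optimize import minimize

# A minimal sketch of direct planning: decision variables are the controls,
# the dynamics are rolled out noise-free (the MLO assumption collapses the
# observation terms), and the cost penalizes goal distance plus effort.
def plan_direct(x0, goal, horizon, f, Q=1.0, R=0.1):
    def rollout(u_flat):
        u = u_flat.reshape(horizon, -1)
        x, traj = x0, []
        for k in range(horizon):
            x = f(x, u[k])  # maximum-likely (noise-free) propagation
            traj.append(x)
        return np.array(traj), u

    def cost(u_flat):
        traj, u = rollout(u_flat)
        return Q * np.sum((traj[-1] - goal) ** 2) + R * np.sum(u ** 2)

    u0 = np.zeros(horizon * len(x0))
    res = minimize(cost, u0, method="SLSQP")
    return rollout(res.x)[0]

f = lambda x, u: x + 0.1 * u  # illustrative single-integrator dynamics
traj = plan_direct(np.zeros(2), np.array([1.0, 1.0]), 10, f)
print(traj[-1])               # endpoint of the locally optimal trajectory
```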

1 Function HighLevelPlanToContinuousGoals(high-level plan)
2     goals ← empty list
3     for each discrete state s in the high-level plan do
4         Define the corresponding full-confidence (one-hot) vector c_s
5         Sample random points X = {x_1, ..., x_N} from the continuous domain
6         for each x_j ∈ X do
7             Estimate the confidence distribution over discrete states at x_j:
8                 sample a random set of points around x_j and record, for each
9                 sampled point, which discrete state's guard set contains it
10            Compute the cost of divergence: d_j = D_H(c_s, conf(x_j))
11        Define a cost map on the complete domain: c(x) = Interpolate({(x_j, d_j)})
12        Find the best representative point in continuous state: x* = argmin_x c(x)
13        Append x* to goals
14    return goals
Algorithm 1: High-Level Plan → Continuous State Goals

3.3 Hierarchical Planner

Although the direct planning approach can be used to solve the POMDP, planning over longer horizons in complex tasks, such as contact-rich manipulation tasks, can incur prohibitively high computational costs [2]. To tackle this challenge, we propose a hierarchical planner that decomposes the POMDP problem into smaller subproblems which can be solved with significantly less effort.

The proposed hierarchical planner has two levels: a higher level that finds the best sequence of local dynamics models to be visited along the path (by visiting corresponding regions in continuous state space) and a lower level that is similar to the aforementioned direct planning approach. The higher level planner generates a set of candidate high-level plans consisting of all feasible permutations (without repetitions) of the discrete states of the task.¹ A transition between two discrete states is deemed infeasible if the regions of the continuous state space corresponding to the two discrete states form a pair of positively-separated sets.

¹The feasibility check also helps in keeping the POMDP tractable. Gulyás et al. [25] have shown that the average path length for a connected graph decreases as its connectivity increases. If the graph of discrete states, from which the set of feasible high-level plans is derived, is not sparse enough to solve the POMDP tractably, a simple heuristic can be defined that penalizes plans with longer path lengths. Preferring shorter plans results in fewer calls to the lower level planner and reduces computation time.

We define the term confidence to denote the probability of a continuous state belief being in a particular discrete state. The spatial distribution of confidence across the continuous domain for a particular discrete state is defined as the confidence map associated with that state. A confidence map for a particular discrete state can be converted into a cost map by calculating a cost of divergence between a full-confidence vector (i.e., a one-hot vector with the probability of being in that particular state equal to one) and the confidences at randomly sampled points across the domain. A high-level plan can then be converted into a sequence of continuous state goals by finding the global minimum of the cost map associated with each discrete state in the plan (see Algorithm 1, and the enumeration sketch below). The lower level planner is then called for each of these continuous state goals, and a complete continuous state path for the high-level plan is generated by combining the outputs of the lower level planner. An additional discrete state is appended to each high-level plan to represent the desired goal of the task; it is considered active within an $\epsilon$-neighbourhood of the actual task goal. High-level plans are then ranked by calculating a divergence cost between the distribution of the planner's confidence over the active discrete state at the final point of the plan and the desired confidence distribution (all probability mass within the $\epsilon$-neighbourhood of the goal). The continuous state plan corresponding to the high-level plan with the minimum cost is chosen for execution.
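The candidate-generation step at the higher level can be illustrated with a short sketch; the state names and feasibility rule below are invented for the example, not taken from the paper.

```python
from itertools import permutations

# A minimal sketch of high-level candidate generation: enumerate permutations
# (without repetition) of discrete states and keep only plans whose
# consecutive pairs are feasible transitions.
def candidate_plans(states, goal_state, feasible):
    plans = []
    for r in range(1, len(states) + 1):
        for perm in permutations(states, r):
            seq = list(perm) + [goal_state]  # goal state appended to every plan
            if all(feasible(a, b) for a, b in zip(seq, seq[1:])):
                plans.append(seq)
    return plans

states = ["free", "wall_x", "wall_y"]
feasible = lambda a, b: not (a == "wall_x" and b == "wall_y")  # illustrative rule
for p in candidate_plans(states, "goal", feasible):
    print(p)
```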

In this work, we have used the Hellinger distance to calculate the divergence cost [26] between the discrete distributions, as it forms a symmetric, bounded metric with values between 0 and 1, and was found to be more numerically stable than the Bhattacharyya distance, KL-divergence, and Jensen–Shannon divergence on the tested application domains. Radial basis functions were used to interpolate the divergence costs throughout the domain, and the differential evolution method [27] was used to find approximately globally optimal solutions of the generated cost map.
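A minimal sketch of this pipeline follows, using SciPy's RBF interpolation and differential evolution; the confidence function below is an invented stand-in for the guard-set-based confidence maps, and all bounds and sample counts are assumptions.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import differential_evolution

# A minimal sketch of the cost-map construction: Hellinger distance between
# the full-confidence vector and sampled confidences, RBF interpolation of
# those costs over the domain, and differential evolution to find the
# (approximate) global minimum of the interpolated cost map.
def hellinger(p, q):
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def best_goal_point(confidence_at, target_onehot, bounds, n_samples=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds)[:, 0], np.array(bounds)[:, 1]
    pts = rng.uniform(lo, hi, size=(n_samples, len(bounds)))
    costs = np.array([hellinger(target_onehot, confidence_at(x)) for x in pts])
    cost_map = RBFInterpolator(pts, costs)  # interpolated divergence-cost map
    res = differential_evolution(lambda x: cost_map(x.reshape(1, -1))[0], bounds)
    return res.x

# Illustrative 2D domain: mode 1 is "active" near the line x = 1.
def confidence_at(x):
    p1 = np.exp(-5.0 * abs(x[0] - 1.0))
    return np.array([1.0 - p1, p1])

goal = best_goal_point(confidence_at, np.array([0.0, 1.0]), [(0, 2), (0, 2)])
print(goal)  # lands near the x = 1 region, where mode 1 has full confidence
```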

3.4 Trajectory Stabilization

With the MLO assumption, it is very likely that during execution the belief over the robot state will diverge from the planned nominal trajectory. To ensure that the execution-phase belief follows the plan, a belief space LQR (B-LQR) controller can be defined around the nominal trajectory. B-LQR controllers were introduced by Platt et al. [22] and can be seen as a belief-space extension of the linear-quadratic regulator (LQR). For systems modelled as linear-Gaussian processes, a B-LQR controller is optimal and equivalent to a linear-quadratic-Gaussian (LQG) controller. In B-LQR, each point on the nominal trajectory is defined as a set point, and quadratic costs are defined on the distance from it and on the control effort required to converge to it. Closed-form solutions exist to ensure convergence to the set point within a finite time horizon. While stabilizing the trajectory, the most likely active discrete state is taken to define the governing dynamics of the system. However, the B-LQR controller may be unable to stabilize the execution-phase (actual) belief around the nominal trajectory. If the planned belief for the next step deviates from the actual belief after the observation update by more than a pre-defined threshold, a replanning call to the planner is triggered.
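For intuition, the sketch below implements the standard finite-horizon LQR backward recursion of the kind underlying B-LQR; the linearized matrices are assumptions, and the full B-LQR of Platt et al. [22] additionally operates on the belief (mean and covariance) rather than the state alone.

```python
import numpy as np

# A minimal sketch of finite-horizon LQR stabilization around a nominal
# trajectory: a backward Riccati recursion yields time-varying gains that
# drive deviations from the plan back toward zero.
def lqr_gains(A, B, Q, R, horizon):
    P, gains = Q, []
    for _ in range(horizon):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]  # gains ordered forward in time

A, B = np.eye(2), 0.1 * np.eye(2)  # illustrative linearized dynamics
gains = lqr_gains(A, B, np.eye(2), 0.01 * np.eye(2), horizon=20)
dev = np.array([0.5, -0.3])        # deviation from the nominal trajectory
u_corr = -gains[0] @ dev           # corrective control at the first step
print(u_corr)
```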

4 Experiments

4.1 Domain-I: Walled Domain

The first task is an autonomous navigation task in a 2D domain with extremely noisy observations (zero-mean Gaussian noise with large covariance). The domain consists of two perpendicular walls, parallel to the x and y axes respectively. As motion along a wall is constrained to be parallel to the wall, the robot can use a wall to efficiently localize itself in the direction orthogonal to it. We compare the performance of the hierarchical planner with the direct planning approach. Note that the direct planning approach is similar in principle to the SM-POMDP planner proposed by Brunskill et al. [4] and hence provides a comparison of the proposed hierarchical planner with a flat, single-level planning approach. The hybrid dynamics model consists of a free-space motion model and one constrained motion model per wall, in which motion is restricted to be parallel to the respective wall; the observation function was defined as a direct measurement of the robot position corrupted by the zero-mean Gaussian observation noise.

Sample trajectories planned by the direct planning and the hierarchical planner are shown in Figure 1. It is evident from the figures that the hierarchical planner plans to selectively visit the two discrete states representing the walls, in contrast to the direct method. Also, the hierarchical planner is able to converge to the goal faster and with a much lower uncertainty than the direct planning approach. As the direct planner does not leverage the knowledge of local dynamics models in a structured way, it needs to plan longer trajectories to gather more information. However, due to high noise in the observations, it still fails to converge to the goal with high accuracy.

Additional statistical analysis comparing the two approaches in terms of total planning time, final error and final belief uncertainty is presented in Table 1. It can be seen from Table 1 that, for comparable final error and final belief uncertainty, the hierarchical planner finds a solution approximately five times faster than the direct planning approach.

(a) Direct Planning
(b) Hierarchical Planning
Figure 1: A comparison of planned and actual trajectories using the direct planning and hierarchical planning approaches on the walled domain. Both cases use the same initial belief; true start position: (3.5, 2.0). Gray circles represent belief covariance.
Metric                                     Direct    Hierarchical
Average total time (seconds)               51.908    10.695
Average final error
Average final maximum belief uncertainty   0.696     0.625
Table 1: Comparison of direct and hierarchical planning. Values are averaged over 5 runs. Planning horizon: 20 steps. Belief start and actual start as in Figure 1. Termination condition: maximum likelihood estimate of the belief converged within a fixed-radius ball around the goal with maximum covariance below a set threshold.

4.2 Domain-II: Airplane assembly

We experimentally demonstrate that the hierarchical POMDP planner can be used to tractably solve a real-world manipulation task: the partial assembly of a toy airplane from the YCB dataset [28]. We considered the first step, inserting the landing gear into the wing, as a test case for our planner. The task requires high precision, with a tight maximum tolerance, while feedback on the location of the airplane in the world frame was noisy, with a significant average estimation error. This experiment demonstrates two important features of the proposed planner: first, the planner can scale to real-world manipulation planning under uncertainty; and second, due to the hierarchical planning approach, the planner enables the robot to plan to “feel around” to localize itself when observations are noisy, similar to what a human might do.

In a robot manipulation task involving contacts, depending on the type of contact between the bodies, the number of state-dependent local dynamics models can be large, or even infinite. We simplify the problem by assuming an approximate hybrid dynamics model, in which the local dynamics models correspond to possible motion constraints that the robot can encounter while executing the task. For example, the task of placing a cup on a table can be considered to be approximately made of two local dynamics models: one when the two objects are not in contact and the other when the cup is in contact with the table plane. The second dynamics model represents the motion constraint placed on the cup by the table, restricting its motion to be only along the table plane and not penetrating it. This approximation yields a succinct and effective representation of the task dynamics: under this approximation, for a specific set of inputs, the relative motion between the two objects in contact will always be the same, independent of the type of contact between them. In this case, the specific set of inputs would be the set of all inputs that do not move the cup away from the table plane, which would break the contact between them.

In this experiment, we consider the domain to be made up of four distinct local dynamics models: two corresponding to linear motions along the wing plane edges, one corresponding to the corner of the plane, and one representing free-body motion elsewhere in the domain. At the highest level, the planning problem can be broken down into two steps: first, localizing the gear at a point in a plane parallel to the wing and, second, inserting the gear into the hole. A hybrid dynamics model in a plane parallel to the wing can be given, analogously to Eq. (4), as

$x_{t+1} = f_i(x_t, u_t) + w_t, \qquad i \in \{\text{free}, \text{edge}_1, \text{edge}_2, \text{corner}\}$   (14)

where $w_t$ is process noise, modeled as zero-mean Gaussian noise; in the edge models, motion is constrained to lie along the respective wing edge, and in the corner model, motion is blocked along both edges. The observation function returns the gear position corrupted by zero-mean Gaussian observation noise. Planning was performed on an Intel® Core™ i7-6700 CPU @ 3.40GHz with 16 GB RAM. The left panel of Figure 2 shows snapshots of the trajectory executed by the robot during the task from two perpendicular angles. The right panel of Figure 2 shows the trajectory planned by the hierarchical planner and the actual trajectory taken by the robot in a plane parallel to the wing. It can be seen from Fig. 2 that the planner plans to activate the motion constraint parallel to the wing in order to reduce its uncertainty. Once localized in the plane parallel to the wing, the robot changes planes to move to a point directly above the hole and then proceeds to insert the landing gear into the wing.

(a)
(b)
Figure 2: Left panel: snapshots of the robot assembling the toy airplane. Right panel: planned and actual trajectories for the airplane assembly task in a plane parallel to the wing plane. Bold black lines represent the edges of the airplane wing. 1 unit = 10 cm.

5 Conclusion

Nonlinear task dynamics, especially due to sudden changes in dynamics, can be effectively modelled using a hybrid dynamics model. A hybrid dynamics model consists of a set of local dynamics models, of which only one is active at a time. In this work, we propose a hierarchical POMDP planner for hybrid dynamics which can develop locally optimal motion plans for tasks involving nonlinear dynamics under noisy observations. The proposed planner generates hierarchical motion plans at two levels: first, a high-level motion plan that sequences the local dynamics models to be visited and second, based on the best high-level plan, a detailed continuous state motion plan to be followed by the robot. The hierarchical planning approach breaks the large POMDP problem into multiple smaller segments with shorter planning horizons, which significantly increases the computational efficiency of the planner. High-level planning also enables the robot to leverage task dynamics to improve its performance—for example, reducing uncertainty using the task motion constraints in order to develop motion plans which are more robust to state uncertainty.

In the present work, a hybrid model of task dynamics needs to be provided to the planner by an expert. Hence, a natural extension of this work is to autonomously learn a task hybrid dynamics model. For example, Niekum et al. have proposed methods [29, 30] to learn articulated motion models encountered while manipulating an object. In the future, the proposed POMDP planner may be combined with these methods to develop an end-to-end approach for learning hybrid dynamics models for manipulation tasks and using them to generate plans that are robust to state uncertainty.

This work has taken place in the Personal Autonomous Robotics Lab (PeARL) at The University of Texas at Austin. PeARL research is supported in part by the NSF (IIS-1724157, IIS-1638107, IIS-1617639, IIS-1749204).

References

  • Thrun [2002] S. Thrun. Probabilistic robotics. Communications of the ACM, 45(3):52–57, 2002.
  • Papadimitriou and Tsitsiklis [1987] C. H. Papadimitriou and J. N. Tsitsiklis. The complexity of Markov decision processes. Mathematics of Operations Research, 12(3):441–450, 1987.
  • Pineau et al. [2002] J. Pineau, G. Gordon, and S. Thrun. Policy-contingent abstraction for robust robot control. In Proceedings of the Nineteenth Conference on Uncertainty in Artificial Intelligence, pages 477–484. Morgan Kaufmann Publishers Inc., 2002.
  • Brunskill et al. [2008] E. Brunskill, L. Kaelbling, T. Lozano-Perez, and N. Roy. Continuous-State POMDPs with Hybrid Dynamics. Symposium on Artificial Intelligence and Mathematics, pages 13–18, 2008.
  • Toussaint et al. [2008] M. Toussaint, L. Charlin, and P. Poupart. Hierarchical POMDP controller optimization by likelihood maximization. In UAI, volume 24, pages 562–570, 2008.
  • Kurniawati et al. [2008] H. Kurniawati, D. Hsu, and W. S. Lee. SARSOP: Efficient point-based POMDP planning by approximating optimally reachable belief spaces. In Robotics: Science and Systems, Zurich, Switzerland, 2008.
  • Shani et al. [2013] G. Shani, J. Pineau, and R. Kaplow. A survey of point-based POMDP solvers. Autonomous Agents and Multi-Agent Systems, 27(1):1–51, 2013.
  • Agha-Mohammadi et al. [2014] A.-A. Agha-Mohammadi, S. Chakravorty, and N. M. Amato. Firm: Sampling-based feedback motion-planning under motion uncertainty and imperfect measurements. The International Journal of Robotics Research, 33(2):268–304, 2014.
  • Hollinger and Sukhatme [2014] G. A. Hollinger and G. S. Sukhatme. Sampling-based robotic information gathering algorithms. The International Journal of Robotics Research, 33(9):1271–1287, 2014.
  • Sreenath et al. [2013] K. Sreenath, C. R. Hill Jr, and V. Kumar. A partially observable hybrid system model for bipedal locomotion for adapting to terrain variations. In Proceedings of the 16th international conference on Hybrid systems: computation and control, pages 137–142. ACM, 2013.
  • Van Den Berg et al. [2012] J. Van Den Berg, S. Patil, and R. Alterovitz. Motion planning under uncertainty using iterative local optimization in belief space. The International Journal of Robotics Research, 31(11):1263–1278, 2012.
  • Indelman et al. [2015] V. Indelman, L. Carlone, and F. Dellaert. Planning in the continuous domain: A generalized belief space approach for autonomous navigation in unknown environments. The International Journal of Robotics Research, 34(7):849–882, 2015.
  • Majumdar and Tedrake [2017] A. Majumdar and R. Tedrake. Funnel libraries for real-time robust feedback motion planning. The International Journal of Robotics Research, 36(8):947–982, 2017.
  • Silver and Veness [2010] D. Silver and J. Veness. Monte-Carlo planning in large POMDPs. In Advances in Neural Information Processing Systems, pages 2164–2172, 2010.
  • Kurniawati and Yadav [2016] H. Kurniawati and V. Yadav. An online POMDP solver for uncertainty planning in dynamic environments. In Robotics Research, pages 611–629. Springer, 2016.
  • Elbanhawi and Simic [2014] M. Elbanhawi and M. Simic. Sampling-based robot motion planning: A review. IEEE Access, 2:56–77, 2014.
  • Agha-mohammadi et al. [2014] A.-a. Agha-mohammadi, N. K. Ure, J. P. How, and J. Vian. Health aware stochastic planning for persistent package delivery missions using quadrotors. In Intelligent Robots and Systems (IROS 2014), 2014 IEEE/RSJ International Conference on, pages 3389–3396. IEEE, 2014.
  • Dogar and Srinivasa [2012] M. R. Dogar and S. S. Srinivasa. A planning framework for non-prehensile manipulation under clutter and uncertainty. Autonomous Robots, 33(3):217–236, 2012.
  • Kroemer et al. [2015] O. Kroemer, C. Daniel, G. Neumann, H. Van Hoof, and J. Peters. Towards learning hierarchical skills for multi-phase manipulation tasks. In Robotics and Automation (ICRA), 2015 IEEE International Conference on, pages 1503–1510. IEEE, 2015.
  • Lygeros et al. [2012] J. Lygeros, S. Sastry, and C. Tomlin. Hybrid systems: Foundations, advanced topics and applications. under copyright to be published by Springer Verlag, 2012.
  • Ghahramani and Hinton [2000] Z. Ghahramani and G. E. Hinton. Variational learning for switching state-space models. Neural computation, 12(4):831–864, 2000.
  • Platt et al. [2010] R. Platt Jr., R. Tedrake, L. Kaelbling, and T. Lozano-Perez. Belief space planning assuming maximum likelihood observations. Robotics: Science and Systems, 2010. ISSN 2330765X.
  • Gill et al. [2005] P. E. Gill, W. Murray, and M. A. Saunders. SNOPT: An SQP algorithm for large-scale constrained optimization. SIAM Rev., 47:99–131, 2005.
  • Gill et al. [2017] P. E. Gill, W. Murray, M. A. Saunders, and E. Wong. User’s guide for SNOPT 7.6: Software for large-scale nonlinear programming. Center for Computational Mathematics Report CCoM 17-1, Department of Mathematics, University of California, San Diego, La Jolla, CA, 2017.
  • Gulyás et al. [2011] L. Gulyás, G. Horváth, T. Cséri, and G. Kampis. An estimation of the shortest and largest average path length in graphs of given density. arXiv preprint arXiv:1101.2549, 2011.
  • Cha [2007] S.-H. Cha. Comprehensive survey on distance/similarity measures between probability density functions. City, 1(2):1, 2007.
  • Storn and Price [1997] R. Storn and K. Price. Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces. Journal of global optimization, 11(4):341–359, 1997.
  • Calli et al. [2017] B. Calli, A. Singh, J. Bruce, A. Walsman, K. Konolige, S. Srinivasa, P. Abbeel, and A. M. Dollar. Yale-cmu-berkeley dataset for robotic manipulation research. The International Journal of Robotics Research, 36(3):261–268, 2017.
  • Niekum et al. [2015] S. Niekum, S. Osentoski, C. G. Atkeson, and A. G. Barto. Online bayesian changepoint detection for articulated motion models. In 2015 IEEE International Conference on Robotics and Automation (ICRA), pages 1468–1475. IEEE, 2015.
  • Hausman et al. [2015] K. Hausman, S. Niekum, S. Osentoski, and G. Sukhatme. Active articulation model estimation through interactive perception. In IEEE International Conference on Robotics and Automation, 2015.

Appendix A Preliminaries

A.1 Trajectory Optimization using Direct Transcription

Direct transcription is a trajectory optimization method in which a constrained nonlinear optimization problem is set up with a user-defined objective function over a set of knot points $(x_k, u_k)$ chosen to discretize the continuous trajectory into a set of decision variables. The system dynamics are imposed as constraints on the optimization problem. For discrete-time systems, these knot points can be taken as the system state and the control input at each time step. However, planning for longer horizons would then require specifying a large number of knot points, which can result in very high computational costs. This can be resolved by approximately parameterizing the space of possible trajectories by a series of segments and solving the optimization problem with knot points only at the start and end points of the segments. The intermediate points on the trajectory can be obtained via numerical integration. Let $\{x_k\}$ and $\{u_k\}$ be the sets of state and action variables that parameterize the trajectory in terms of $N$ segments. The $k$-th segment can be assumed to start at time $t_k$ and end at time $t_{k+1}$, where $t_N = T$ for a time horizon $T$.

A general objective function for trajectory optimization can be given as

$J = \sum_{k=0}^{N-1} \left( x_k^\top Q \, x_k + u_k^\top R \, u_k \right)$   (15)

where $Q$ and $R$ represent the cost matrices associated with the state and the input, respectively. The system dynamics incorporated as constraints can be defined as:

$x_{k+1} = g(x_k, u_k)$   (16)

where the function $g$ can be seen as performing numerical integration of the current state variable $x_k$ until the next state variable $x_{k+1}$. The function $g$ is given as

$g(x_k, u_k) = x_k + \int_{t_k}^{t_{k+1}} f\big(x(t), u_k\big) \, dt$   (17)

where $f$ represents the system dynamics.
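The following sketch shows a deterministic direct transcription of Eqs. (15)-(17) using SciPy's SLSQP as a stand-in for SNOPT; the dynamics, segment count, and cost weights are illustrative assumptions, and the belief-space variant of Eqs. (18)-(20) follows analogously with $(\mu, \mathrm{vec}(\Sigma))$ as decision variables.

```python
import numpy as np
from scipy.optimize import minimize

# A minimal sketch of direct transcription with segments: knot points (states
# and controls) are decision variables, and the dynamics of Eqs. (16)-(17)
# are imposed as equality constraints via (Euler) numerical integration.
def transcribe(x0, goal, n_seg, dim, f, steps_per_seg=5, dt=0.1):
    nx, nu = n_seg * dim, n_seg * dim

    def unpack(v):
        return v[:nx].reshape(n_seg, dim), v[nx:].reshape(n_seg, dim)

    def integrate(x, u):
        for _ in range(steps_per_seg):  # Eq. (17): numerical integration
            x = x + dt * f(x, u)
        return x

    def defects(v):                     # Eq. (16): x_{k+1} = g(x_k, u_k)
        xs, us = unpack(v)
        prev, d = x0, []
        for k in range(n_seg):
            d.append(xs[k] - integrate(prev, us[k]))
            prev = xs[k]
        return np.concatenate(d)

    def cost(v):                        # Eq. (15): quadratic state/input cost
        xs, us = unpack(v)
        return np.sum((xs - goal) ** 2) + 0.1 * np.sum(us ** 2)

    res = minimize(cost, np.zeros(nx + nu), method="SLSQP",
                   constraints={"type": "eq", "fun": defects})
    return unpack(res.x)

f = lambda x, u: u  # illustrative single-integrator dynamics
xs, us = transcribe(np.zeros(2), np.array([1.0, 1.0]), n_seg=4, dim=2, f=f)
print(xs[-1])       # final knot point lands near the goal
```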

Trajectory optimization using direct transcription can be extended to belief space planning by assuming Gaussian noise over the continuous states [22]. If the belief over continuous states is defined as $b_k = (\mu_k, \Sigma_k)$, trajectory optimization can be formulated as an optimization problem over the variables $\mu_k$ and $\mathrm{vec}(\Sigma_k)$, where $\mu_k$ represents the mean of the belief state and $\mathrm{vec}(\Sigma_k)$ is a vector composed of the columns of $\Sigma_k$. Analogous to the deterministic case, the problem is constrained to follow the belief space dynamics. The corresponding objective function can be given as

$J = \sum_{k=0}^{N-1} \left( \mu_k^\top Q \, \mu_k + u_k^\top R \, u_k \right) + \mathrm{tr}\big( \Lambda \, \Sigma_N \big)$   (18)

where $Q$, $R$ and $\Lambda$ represent the cost matrices associated with the belief mean, the control input, and the belief covariance at the final discrete time step, respectively. The belief dynamics can be incorporated in the formulation as the constraints:

$b_{k+1} = G(b_k, u_k)$   (19)

where the function $G$ is given as

$G(b_k, u_k) = b_k + \int_{t_k}^{t_{k+1}} F\big(b(t), u_k\big) \, dt$   (20)

where $F$ represents the extended system dynamics. Propagation of the belief through the system dynamics is discussed in further detail by Platt et al. [22].

Appendix B Further Experimental Details

B.1 Domain-I

Quadratic cost matrices were defined over the error in states, the control input, the additional final state error, and the final belief covariance. The continuous belief was modelled with a fixed number of Gaussian components.

B.2 Domain-II

Feedback on the location of the airplane in the world frame was obtained by online color-based object cluster extraction, using multi-plane segmentation from the Point Cloud Library (PCL) on point cloud data from a Microsoft Kinect v2 sensor. Quadratic cost matrices were defined over the error in states, the control input, the additional final state error, and the final belief covariance; the continuous belief was modelled with a fixed number of Gaussian components.