# Hit-and-Run for Sampling and Planning in Non-Convex Spaces

We propose the Hit-and-Run algorithm for planning and sampling problems in non-convex spaces. For sampling, we show the first analysis of the Hit-and-Run algorithm in non-convex spaces and show that it mixes fast as long as certain smoothness conditions are satisfied. In particular, our analysis reveals an intriguing connection between fast mixing and the existence of smooth measure-preserving mappings from a convex space to the non-convex space. For planning, we show advantages of Hit-and-Run compared to state-of-the-art planning methods such as Rapidly-Exploring Random Trees.


## 1 Introduction

Rapidly-Exploring Random Trees (RRT) (LaValle-1998, LaValle-Kuffner-2001) is one of the most popular planning algorithms, especially when the search space is high-dimensional and finding the optimal path is computationally expensive. RRT performs well on many problems where classical dynamic programming based algorithms, such as A*, perform poorly. RRT is essentially an exploration algorithm, and in the most basic implementation, the algorithm even ignores the goal information, which seems to be a major reason for its success. Planning problems, especially those in robotics, often feature narrow pathways connecting large explorable regions; combined with high dimensionality, this means that finding the optimal path is usually intractable. However, RRT often provides a feasible path quickly.

Although many attempts have been made to improve the basic algorithm (AbbasiYadkori-Modayil-Szepesvari-2010, Karaman-Frazzoli-2010, Karaman-Frazzoli-2011), RRT has proven difficult to improve upon. In fact, given extra computation, repeatedly running RRT often produces competitive solutions. In this paper, we show that a simple alternative greatly improves upon RRT. We propose using the Hit-and-Run algorithm for feasible path search. Arguably simpler than RRT, Hit-and-Run is a rapidly mixing MCMC sampling algorithm for producing a point uniformly at random from a convex space (Smith-1984). Not only does Hit-and-Run find a feasible path faster than RRT, it is also more robust with respect to the geometry of the space.

Before giving more details, we define the planning and sampling problems that we consider. Let Σ be a bounded connected subset of ℝⁿ. For points u, v ∈ Σ, we use [u, v] to denote their (one-dimensional) convex hull. Given a starting point x₀ ∈ Σ and a goal region Σ_goal ⊂ Σ, the planning problem is to find a sequence of points x₁, …, x_T for some T such that all points are in Σ, x_T is in Σ_goal, and for t = 1, …, T, [x_{t−1}, x_t] ⊂ Σ.

The sampling problem is to generate points uniformly at random from Σ. Sampling is often difficult, but Markov Chain Monte Carlo (MCMC) algorithms have seen empirical and theoretical success (lovasz2007geometry). MCMC algorithms, such as Hit-and-Run and Ball-Walk (VEMPALA-2005), sample a Markov chain on Σ that has a stationary distribution equal to the uniform distribution on Σ; then, if we run the Markov chain long enough, the marginal distribution of the sample is guaranteed to come from a distribution exponentially close to the target distribution. Solving the sampling problem yields a solution to the planning problem; one can generate samples and terminate when a sample hits Σ_goal.

Let us define Hit-and-Run and the RRT algorithms (see also Figure 1 for an illustration). Hit-and-Run defines a Markov chain on Σ where the transition dynamics are as follows. From the current point x_t, a direction d is chosen uniformly at random, and x_{t+1} is chosen uniformly from the largest chord in this direction that passes through x_t and is contained in Σ. This Markov chain has a uniform stationary distribution on Σ (Smith-1984). As a planning algorithm, this chain continues until it hits the goal region. Let T be the stopping time. The solution path is (x₀, x₁, …, x_T).
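The transition just described can be sketched in a few lines. The numeric membership test and the doubling-plus-bisection chord search below are illustrative assumptions (the paper instead assumes the chord oracle of Section 2, and on non-convex sets this search only approximates the visible chord):

```python
import numpy as np

def hit_and_run_step(x, membership, rng, t_max=1e6):
    """One Hit-and-Run transition: pick a uniform random direction, locate the
    chord through x by doubling then bisecting on the membership test, and
    sample the next point uniformly from that chord."""
    n = x.shape[0]
    d = rng.normal(size=n)
    d /= np.linalg.norm(d)               # uniform random direction on the sphere

    def extent(sign):
        lo, hi = 0.0, 1.0
        while membership(x + sign * hi * d) and hi < t_max:
            lo, hi = hi, 2.0 * hi        # grow until we leave the body
        for _ in range(50):              # bisect the boundary crossing
            mid = 0.5 * (lo + hi)
            if membership(x + sign * mid * d):
                lo = mid
            else:
                hi = mid
        return lo                        # lo is always strictly inside

    t = rng.uniform(-extent(-1.0), extent(+1.0))
    return x + t * d
```

On a convex body such as the unit square, every step stays inside the body and the chain converges to the uniform distribution.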

On the other hand, the RRT algorithm iteratively builds a tree with x₀ as its root, nodes labeled by points of Σ, and edges (x, x′) that satisfy [x, x′] ⊂ Σ. To add a point to the tree, a point x is sampled uniformly from Σ and its nearest neighbor x′ in the tree is computed. If [x′, x] ⊂ Σ, then node x and edge (x′, x) are added to the tree. Otherwise, we search for the point x″ on [x′, x] farthest from x′ such that [x′, x″] ⊂ Σ; then x″ and (x′, x″) are added to the tree. This process continues until we add an edge terminating in Σ_goal, and the sequence of points on that branch is returned as the solution path. In the presence of dynamic constraints, a different version of RRT that makes only small local steps is used. These versions will be discussed in the experiments section.
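For comparison, the RRT loop above can be sketched as follows. The helper names `sample_free` and `steer` are hypothetical placeholders for the uniform-sampling and collision-checked extension steps just described:

```python
import numpy as np

def rrt(start, in_goal, sample_free, steer, max_iters=10000):
    """Basic RRT sketch. sample_free() draws a uniform random point from the
    space; steer(q_near, q_rand) returns the farthest point toward q_rand
    whose segment from q_near stays inside the space."""
    nodes = [np.asarray(start, dtype=float)]
    parent = {0: None}
    for _ in range(max_iters):
        q_rand = np.asarray(sample_free(), dtype=float)
        # nearest neighbor in the tree (linear scan for simplicity)
        i_near = min(range(len(nodes)),
                     key=lambda i: np.linalg.norm(nodes[i] - q_rand))
        q_new = np.asarray(steer(nodes[i_near], q_rand), dtype=float)
        nodes.append(q_new)
        parent[len(nodes) - 1] = i_near
        if in_goal(q_new):
            path, i = [], len(nodes) - 1
            while i is not None:          # walk back up to the root
                path.append(nodes[i])
                i = parent[i]
            return path[::-1]
    return None
```

In a convex free space (e.g. the unit square) the segment to any sample is collision-free, so `steer` can simply return the sample itself.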

There are two main contributions in this paper. First, we analyze the Hit-and-Run algorithm in a non-convex space and show that the mixing time is polynomial in the dimensionality as long as certain smoothness conditions are satisfied. The mixing time of Hit-and-Run for convex spaces is known to be polynomial (Lovasz-1999). However, to accommodate planning problems, we focus on non-convex spaces. Our analysis reveals an intriguing connection between fast mixing and the existence of smooth measure-preserving mappings. The only existing analysis of random walk algorithms in non-convex spaces is due to Chandrasekaran-Dadush-Vempala-2010, who analyzed Ball-Walk in star-shaped bodies.¹ Second, we propose Hit-and-Run for planning problems as an alternative to RRT and show that it finds a feasible path quickly. From the mixing rate, we obtain a bound on the expected length of the solution path in the planning problem. Such performance guarantees are not available for RRT.

¹ We say Σ is star-shaped if the kernel of Σ, defined by {x ∈ Σ : [x, y] ⊂ Σ for all y ∈ Σ}, is nonempty.

The current proof techniques in the analysis of Hit-and-Run heavily rely on the convexity of the space. It turns out that non-convexity is especially troubling when points are close to the boundary. We overcome these difficulties as follows. First, Lovasz-Vempala-2006-corner show a tight isoperimetric inequality in terms of average distances instead of minimum distances. This enables us to ignore points that are sufficiently close to the boundary. Next, we show that as long as points are sufficiently far from the boundary, the cross-ratio distances in the convex and non-convex spaces are closely related. Finally, we show that, given a curvature assumption, if two points are geometrically close and sufficiently far from the boundary, then their proposal distributions must be close as well.

Hit-and-Run has a number of advantages compared to RRT; it does not require random points sampled from the space (which is itself a hard problem), and it is guaranteed to reach the goal region with high probability in a polynomial number of rounds. In contrast, there are cases where RRT growth can be very slow (see the experiments section for a discussion). Moreover, Hit-and-Run provides safer solutions, as its paths are more likely to stay away from the boundary. In contrast, a common issue with RRT solutions is that they tend to be close to the boundary; because of this, further post-processing steps are needed to smooth the path.


### 1.1 Notation

For a set K ⊂ ℝⁿ, we will denote the n-dimensional volume by vol(K), the (n−1)-dimensional surface volume by vol_{n−1}(K), and the boundary by ∂K. The diameter of K is D_K = sup_{x,y∈K} |x − y|, where |·| will be used for both absolute value and Euclidean norm, and the distance between sets A and B is defined as d(A, B) = inf_{x∈A, y∈B} |x − y|. Similarly, d(x, A) = inf_{y∈A} |x − y|. For a set A ⊂ Σ, we use A^c to denote Σ∖A. Finally, for distributions P and Q, we use |P − Q| to denote the total variation distance between P and Q.

We will also need some geometric quantities. We will denote lines (i.e., 1-dimensional affine spaces) by ℓ. For u, v ∈ ℝⁿ, we denote their convex hull, that is, the line segment between them, by [u, v], and the line that passes through u and v (which contains [u, v]) by ℓ(u, v). We also write w ∈ ℓ(u, v) to denote that u, v, w are collinear.

We also use ℓ_Σ(u, v) to denote the longest connected chord through u and v contained in Σ, and |ℓ_Σ(u, v)| its length. We use a and b to denote the endpoints of ℓ_Σ(u, v) that are closer to u and v, respectively, so that a, u, v, b appear in this order. The Euclidean ball of unit radius centered at the origin has volume π_n. We use x_{1:T} to denote the sequence (x₁, …, x_T). Finally, we use a ∧ b to denote min(a, b).

## 2 Sampling from Non-Convex Spaces

Most of the known results for the mixing times of Hit-and-Run exist for convex sets only. We will think of Σ as the image of some convex set Ω under a measure-preserving, bilipschitz function g. The goal is to understand the relevant geometric quantities of Σ through properties of g and geometric properties of Ω. We emphasize that the existence of the map g and its properties are needed only for the analysis; the actual algorithm does not need to know g. We formalize this assumption below, describe how we interact with Σ, and present a few more technical assumptions required for our analysis. We then present our main result, followed by some conductance results, before moving on to the proof of the theorem in the next section.

###### Assumption 1 (Oracle Access).

Given a point x ∈ ℝⁿ and a line ℓ that passes through x, the oracle returns whether x ∈ Σ and, if so, the largest connected interval of ℓ ∩ Σ containing x.
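A numeric stand-in for this oracle can be obtained by scanning along the line; the resolution parameter and the L-shaped test region used below are illustrative assumptions, not part of the paper:

```python
import numpy as np

def line_oracle(x, d, membership, step=1e-3, t_max=10.0):
    """Approximate the Assumption 1 oracle: given x and a unit direction d,
    return the endpoints of the largest connected interval of the line
    through x (in direction d) that lies in the set and contains x.
    Resolution is limited by `step`, so the endpoints are only approximate."""
    x = np.asarray(x, dtype=float)
    d = np.asarray(d, dtype=float)
    if not membership(x):
        return None                      # x is outside the set
    def reach(sign):
        t = 0.0
        while t < t_max and membership(x + sign * (t + step) * d):
            t += step
        return t
    return x - reach(-1.0) * d, x + reach(+1.0) * d
```

For example, on the non-convex L-shaped set ([0,2]×[0,1]) ∪ ([0,1]×[0,2]), the horizontal chord through (0.5, 0.5) is recovered as approximately [(0, 0.5), (2, 0.5)].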

###### Assumption 2 (Bilipschitz Measure-Preserving Embeddings).

There exist a convex set Ω ⊂ ℝⁿ and a bilipschitz, measure-preserving map g such that Σ is the image of Ω under g. That is, there exists a function g : Ω → Σ with |det Dg| = 1 (i.e. the Jacobian has unit determinant) and constants L_Ω and L_Σ such that, for any x, y ∈ Ω,

$$\frac{1}{L_\Omega}\,|x-y| \;\le\; |g(x)-g(y)| \;\le\; L_\Sigma\,|x-y|.$$

In words, g is measure-preserving, g is L_Σ-Lipschitz, and g⁻¹ is L_Ω-Lipschitz.

As an example, Fonseca-Parry-1992 show that for any star-shaped space, a smooth measure-preserving embedding exists. One interesting consequence of Assumption 2 is that, because the mapping is measure-preserving, there must exist a pair x, y ∈ Ω such that |g(x) − g(y)| ≥ |x − y|; otherwise, we would have vol(Σ) < vol(Ω), a contradiction. Similarly, there must exist a pair x, y ∈ Ω such that |g(x) − g(y)| ≤ |x − y|. Thus,

$$L_\Omega,\, L_\Sigma \ge 1. \tag{1}$$

To simplify the analysis, we will assume that Ω is a ball with radius r. In what follows, we use x and y to denote points in Ω, and u and v to denote points in Σ. We will also assume that Σ has no sharp corners and has a smooth boundary:

###### Assumption 3 (Low Curvature).

For any two-dimensional plane P, let κ_P be the curvature of the boundary of Σ ∩ P and ρ_P be the perimeter of Σ ∩ P. We assume that Σ has low curvature, i.e. that κ = sup_P κ_P ρ_P is finite.

Assumption 2 does not imply low curvature, as there exist smooth measure-preserving mappings from the unit ball to a cube (Griepentrog-Hoppner-Kaiser-Rehberg-2008).

###### Assumption 4.

We assume that the volume of Σ is equal to one. We also assume that Σ contains a Euclidean ball of radius one.

Note that the unit ball has volume less than 1 only for n ≥ 13, so for small-dimensional problems, we will need to relax this assumption.

We motivate the forthcoming technical machinery by demonstrating what it can accomplish. The following theorem is the main result of the paper, and the proof makes up most of Section 3.

###### Theorem 5.

Consider the Hit-and-Run algorithm. Let σ₀ be the distribution of the initial point given to Hit-and-Run, σ_t be the distribution after t steps of Hit-and-Run, and σ be the stationary distribution (which is uniform). Let M = sup_{A⊂Σ} σ₀(A)/σ(A). Let ε be a positive scalar. After

$$t \ge C' n^6 \log\frac{M}{\epsilon}$$

steps, we have |σ_t − σ| ≤ ε. Here C′ is a low-order polynomial of the remaining problem parameters.

## 3 Analysis

This section proves Theorem 5. We begin by stating a number of useful geometrical results, which allow us to prove the two main components: an isoperimetric inequality in Section 3.2 and a total variation inequality in Section 3.3. We then combine everything in Section 3.4.

### 3.1 Fast Mixing Markov Chains

We rely on the notion of conductance as our main technical tool. This section recalls the relevant results.

We say that points u, v ∈ Σ see each other if [u, v] ⊂ Σ. We use view(u) to denote the set of all points in Σ visible from u. Let ℓ_Σ(u, v) denote the chord through u and v inside Σ, and |ℓ_Σ(u, v)| its length. Let P_u be the probability measure of the point obtained after one step of Hit-and-Run from u, and f_u its density function. By an argument similar to the argument in Lemma 3 of Lovasz-1999, we can show that

$$f_u(v) = \frac{2\cdot\mathbf{1}\{v \in \mathrm{view}(u)\}}{n\,\pi_n\,|\ell_\Sigma(u,v)|\cdot|u-v|^{n-1}}. \tag{2}$$

The conductance of the Markov process is defined as

$$\Phi = \inf_{A \subset \Sigma} \frac{\int_A P_u(\Sigma\setminus A)\,du}{\min\big(\mathrm{vol}(A),\, \mathrm{vol}(\Sigma\setminus A)\big)}.$$

We begin with a useful conductance result that applies to general Markov Chains.

###### Lemma 6 (Corollary 1.5 of Lovasz-Simonovits-1993).

Let M = sup_{A⊂Σ} σ₀(A)/σ(A). Then for every A ⊂ Σ,

$$|\sigma_t(A) - \sigma(A)| \le \sqrt{M}\left(1 - \frac{\Phi^2}{2}\right)^{t}.$$

Proving a lower bound on the conductance is therefore a key step in the mixing time analysis. Previous literature has shown such lower bounds for convex spaces. Our objective in the following is to obtain such bounds for more general non-convex spaces that satisfy the bilipschitz measure-preserving embedding and low curvature assumptions.

As in previous literature, we shall find that the following cross-ratio distance is very useful in deriving an isoperimetric inequality and a total variation inequality.

###### Definition 1.

Let a, u, v, b ∈ Σ be collinear points appearing in this order, such that [a, b] ⊂ Σ. Define

$$d_\Sigma(u,v) = \frac{|a-b|\,|u-v|}{|a-u|\,|v-b|}.$$

It is easy to see that d_Σ(u, v) ≥ |u − v|/D_Σ. We define the following distance measure for non-convex spaces.
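As an illustration, the cross-ratio distance of Definition 1 can be computed directly from the four points; that the endpoints are ordered a, u, v, b along the chord is an assumption on the caller:

```python
import numpy as np

def cross_ratio_distance(u, v, a, b):
    """Cross-ratio distance from Definition 1: a and b are the endpoints of
    the chord through u and v (a closer to u, b closer to v), all collinear."""
    u, v, a, b = (np.asarray(p, dtype=float) for p in (u, v, a, b))
    return (np.linalg.norm(a - b) * np.linalg.norm(u - v)) / (
        np.linalg.norm(a - u) * np.linalg.norm(v - b))
```

On the unit segment with a = 0, b = 1, u = 0.25, v = 0.75, the distance is (1 · 0.5)/(0.25 · 0.25) = 8; the distance blows up as u or v approaches an endpoint of the chord.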

###### Definition 2.

A set Σ will be called τ-best if, for any u, v ∈ Σ, there exist points z₁, …, z_{τ−1} ∈ Σ such that the segments [u, z₁], [z_i, z_{i+1}] for i = 1, …, τ−2, and [z_{τ−1}, v] are all in Σ; i.e., any two points in Σ can be connected by τ line segments that are all inside Σ. We define the distance

$$\tilde d_\Sigma(u,v) = \inf_{z_{1:\tau-1} \in \Sigma}\big(d_\Sigma(u,z_1) + d_\Sigma(z_1,z_2) + \cdots + d_\Sigma(z_{\tau-1},v)\big),$$

and, by extension, the distance between two subsets A, B ⊂ Σ as d̃_Σ(A, B) = inf_{u∈A, v∈B} d̃_Σ(u, v).

The analysis of the conductance is often derived via an isoperimetric inequality.

###### Theorem 7 (Theorem 4.5 of Vempala-2005).

Let Ω be a convex body in ℝⁿ. Let h : Ω → ℝ₊ be an arbitrary function. Let Ω₁, Ω₂, Ω₃ be any partition of Ω into measurable sets. Suppose that for any pair of points u ∈ Ω₁ and v ∈ Ω₂ and any point x on the chord of Ω through u and v, h(x) ≤ d_Ω(u, v). Then

$$\mathrm{vol}(\Omega_3) \ge \mathbb{E}_\Omega(h)\,\min\big(\mathrm{vol}(\Omega_1),\, \mathrm{vol}(\Omega_2)\big),$$

where the expectation is defined with respect to the uniform distribution on Ω.

Given an isoperimetric inequality, a mixing time analysis typically uses a total variation inequality to lower bound cross-ratio distances and, in turn, the conductance. Our approach is similar. We use the embedding assumption to derive an isoperimetric inequality in the non-convex space Σ. Then we relate the cross-ratio distance d_Σ to the distance d_Ω. This approximation is good when the points are sufficiently far from the boundary. We incur a small error in the mixing bound by ignoring points that are too close to the boundary. Finally, we use the curvature condition to derive a total variation inequality and to lower bound the conductance.

### 3.2 Cross-Ratio Distances

The first step is to show the relationship between cross-ratio distances in the convex and non-convex spaces. We show that these distances are close as long as points are far from the boundary. These results will be used in the proof of the main theorem in Section 3.4 to obtain an isoperimetric inequality in the non-convex space. First we define a useful quantity.

###### Definition 3.

Consider a convex set with some subset and collinear points with , , and . Let be a point on . Let . We use to denote the maximum of over all such points. We use to denote .

The following lemma is the main technical lemma, and we use it to express d̃_Σ in terms of d_Ω.

###### Lemma 8.

Let ε be a positive scalar. Let a, x₁, x₂, b be collinear points appearing in this order, such that a and b are on the boundary of Ω. Let c and d be two points on the boundary of Σ on the line ℓ(x₁, x₂). Then

$$\frac{|b-a|}{|a-x_1|}\cdot\frac{|x_1-c|}{|c-d|}\cdot\frac{|d-x_2|}{|x_2-b|} \;\ge\; \frac{1}{4R_\epsilon(1+2R_\epsilon)}.$$
###### Proof.

Let

$$A = \frac{|b-a|\,|x_1-x_2|}{|a-x_1|\,|x_2-b|}, \qquad B = \frac{|c-d|\,|x_1-x_2|}{|c-x_1|\,|x_2-d|}.$$

We prove the claim by proving that A/B ≥ 1/(4R_ε(1 + 2R_ε)).
Case 1, x₁, x₂ ∈ [c, d]: In this case, x₁ and x₂ are both on the line segment [c, d]. We consider two cases.
Case 1.1: We have that

$$|c-d| = |c-x_1| + |x_1-x_2| + |x_2-d| \le |c-x_1| + |x_1-b| + |x_2-b|. \tag{3}$$

Because |x₁ − b| ≤ R_ε|c − x₁| by the assumption of the lemma, and because x₂ lies between x₁ and b in Case 1, we have

$$|x_2-b| \le |x_1-b| \le R_\epsilon\,|c-x_1|. \tag{4}$$

By (3) and (4),

$$\frac{|c-d|}{|c-x_1|} \le 1 + \frac{|x_1-b|}{|c-x_1|} + \frac{|x_2-b|}{|c-x_1|} \le 1 + R_\epsilon + \frac{|x_2-b|}{|c-x_1|} \le 1 + 2R_\epsilon.$$

We also use that |x₂ − b| ≤ R_ε|x₂ − d| by the definition of R_ε. This and the previous result let us bound

$$B \le (1+2R_\epsilon)\,\frac{|x_1-x_2|}{|x_2-d|}, \qquad A \ge \frac{|b-a|\,|x_1-x_2|}{R_\epsilon\,|a-x_1|\,|x_2-d|} \ge \frac{|x_1-x_2|}{R_\epsilon\,|x_2-d|},$$

and conclude

$$\frac{A}{B} \ge \frac{1}{R_\epsilon(1+2R_\epsilon)} \ge \frac{1}{4R_\epsilon(1+2R_\epsilon)}.$$

Case 1.2:
Case 1.2.1: We have that |x₂ − b| ≤ R_ε|x₁ − c|. Thus,

$$\begin{aligned}
\frac{A}{B} &\ge \frac{|a-b|\,|x_1-x_2|}{B\,R_\epsilon\,|a-x_1|\,|x_1-c|} = \frac{|a-b|}{R_\epsilon\,|a-x_1|}\cdot\frac{|d-x_2|}{|c-d|} \ge \frac{|d-x_2|}{R_\epsilon\,|c-d|} \\
&\ge \frac{|d-x_2|}{R_\epsilon\big(|d-x_2|+|x_2-x_1|+|x_1-c|\big)} \ge \frac{|d-x_2|}{R_\epsilon\big(|d-x_2|+(1+R_\epsilon)|x_1-c|\big)} \\
&= \frac{1}{R_\epsilon\left(1+(1+R_\epsilon)\frac{|x_1-c|}{|d-x_2|}\right)} \ge \frac{1}{R_\epsilon(2+R_\epsilon)} \ge \frac{1}{4R_\epsilon(1+2R_\epsilon)}.
\end{aligned}$$

Case 1.2.2: As before, we bound A and B separately:

$$B \le \frac{|c-d|\,|x_1-x_2|}{|c-x_1|\,|x_2-b|} \le \frac{|x_1-x_2|}{|x_2-b|}\cdot\frac{|c-x_1|+R_\epsilon|c-x_1|+|x_2-d|}{|c-x_1|} \le (2+R_\epsilon)\,\frac{|x_1-x_2|}{|x_2-b|},$$

and

$$A = \frac{|b-a|\,|x_1-x_2|}{|a-x_1|\,|x_2-b|} \ge \frac{|x_1-x_2|}{|x_2-b|}.$$

Putting these together,

$$\frac{A}{B} \ge \frac{1}{2+R_\epsilon} \ge \frac{1}{4R_\epsilon(1+2R_\epsilon)},$$

where the second inequality holds because R_ε ≥ 1.
Case 2, x₁ ∉ [c, d] and x₂ ∈ [c, d]: In this case, x₁ and x₂ are on opposite sides of the point c. Let M be a positive constant. We will choose M later.
Case 2.1: We bound

$$B \le M\,\frac{|x_1-x_2|}{|x_2-d|} \le M R_\epsilon\,\frac{|x_1-x_2|}{|x_2-b|}$$

and conclude

$$\frac{A}{B} \ge \frac{|a-b|}{M R_\epsilon\,|a-x_1|} \ge \frac{1}{M R_\epsilon} \ge \frac{1}{4R_\epsilon(1+2R_\epsilon)}.$$

Case 2.2:
Case 2.2.1:
We have that

$$\frac{A}{B} \ge \frac{1}{M R_\epsilon^2} \ge \frac{1}{4R_\epsilon(1+2R_\epsilon)}.$$

Case 2.2.2: Let x₀ be a point on the line segment [x₁, x₂]. Let β₁ be the angle between the line segments [x₁, x₀] and [x₁, c]. We write

$$\begin{aligned}
|c-x_0|^2 &= |x_1-x_0|^2 + |x_1-c|^2 - 2|x_1-c|\,|x_1-x_0|\cos\beta_1 \\
&\le \frac{1}{M^2}|c-d|^2 + \frac{1}{M^2}|c-d|^2 + \frac{2}{M^2}|c-d|^2 = \frac{4}{M^2}|c-d|^2.
\end{aligned}$$

By the triangle inequality,

$$|d-x_0| \ge |c-d| - |c-x_0| \ge \left(1 - \frac{2}{M}\right)|c-d|.$$

Let β₂ be the angle between the line segments [x₂, x₀] and [x₂, d]. Let w = 1 − 2/M. We write

$$\begin{aligned}
w^2|c-d|^2 \le |d-x_0|^2 &= |x_2-x_0|^2 + |x_2-d|^2 - 2|x_2-d|\,|x_2-x_0|\cos\beta_2 \\
&\le \frac{1}{M^2}|c-d|^2 + |x_2-d|^2 + \frac{2}{M}|d-x_2|\,|c-d|.
\end{aligned}$$

Thus,

$$|x_2-d|^2 + \frac{2}{M}|d-x_2|\,|c-d| + \left(\frac{4}{M} - \frac{3}{M^2} - 1\right)|c-d|^2 \ge 0,$$

which is a quadratic inequality in |x₂ − d|. Thus it holds that

$$|x_2-d| \ge \left(-\frac{1}{M} + \left|\frac{2}{M}-1\right|\right)|c-d|.$$

If we choose M = 4, then |x₂ − d| ≥ |c − d|/4 and

$$B \le \frac{4\,|x_1-x_2|}{|x_1-c|} \le \frac{4R_\epsilon\,|x_1-x_2|}{|x_1-a|},$$

yielding

$$\frac{A}{B} \ge \frac{|a-b|}{4R_\epsilon\,|b-x_2|} \ge \frac{1}{4R_\epsilon} \ge \frac{1}{4R_\epsilon(1+2R_\epsilon)}.$$

Finally, observe that Case 3 follows by symmetry from Case 1. ∎

The following lemma states that the distance d_Ω does not increase by adding more steps.

###### Lemma 9.

Let y₁, …, y_m be points in the convex body Ω such that y₁, …, y_m are collinear and appear in this order. We have that

$$d_\Omega(y_1,y_2) + \cdots + d_\Omega(y_{m-1},y_m) \le d_\Omega(y_1,y_m).$$
###### Proof.

Let a and b be the endpoints of the chord of Ω through y₁, …, y_m, with a closer to y₁. We write

$$\begin{aligned}
d_\Omega(y_1,y_m) &= \frac{|a-b|\,|y_1-y_m|}{|a-y_1|\,|y_m-b|} \\
&= \frac{|a-b|\,|y_1-y_2|}{|a-y_1|\,|y_m-b|} + \frac{|a-b|\,|y_2-y_3|}{|a-y_1|\,|y_m-b|} + \cdots + \frac{|a-b|\,|y_{m-1}-y_m|}{|a-y_1|\,|y_m-b|} \\
&\ge \frac{|a-b|\,|y_1-y_2|}{|a-y_1|\,|y_2-b|} + \frac{|a-b|\,|y_2-y_3|}{|a-y_2|\,|y_3-b|} + \cdots + \frac{|a-b|\,|y_{m-1}-y_m|}{|a-y_{m-1}|\,|y_m-b|} \\
&= d_\Omega(y_1,y_2) + \cdots + d_\Omega(y_{m-1},y_m). \qquad \Box
\end{aligned}$$

The next lemma upper bounds d̃_Σ in terms of d_Ω.

###### Lemma 10.

Let x₁, x₂ ∈ Ω. We have that

$$\tilde d_\Sigma(g(x_1), g(x_2)) \le 4 L_\Sigma^2 L_\Omega^2 R_\epsilon(1+2R_\epsilon)\, d_\Omega(x_1, x_2).$$
###### Proof.

First we prove the inequality for the case that [g(x₁), g(x₂)] ⊂ Σ. Let a, b ∈ ∂Ω be such that the points a, x₁, x₂, b are collinear. Let c, d be points such that g(c), g(x₁), g(x₂), g(d) are collinear, g(c) and g(d) are on the boundary of Σ, and the line segment connecting g(c) and g(d) is inside Σ. By the Lipschitz properties of g and g⁻¹ and Lemma 8,

$$\begin{aligned}
\tilde d_\Sigma(g(x_1), g(x_2)) &= \frac{|g(c)-g(d)|\,|g(x_1)-g(x_2)|}{|g(c)-g(x_1)|\,|g(x_2)-g(d)|} \\
&\le L_\Sigma^2 L_\Omega^2\, \frac{|c-d|\,|x_1-x_2|}{|c-x_1|\,|x_2-d|} \\
&\le L_\Sigma^2 L_\Omega^2\, 4R_\epsilon(1+2R_\epsilon)\, \frac{|a-b|\,|x_1-x_2|}{|a-x_1|\,|x_2-b|} \\
&= 4L_\Sigma^2 L_\Omega^2 R_\epsilon(1+2R_\epsilon)\, d_\Omega(x_1,x_2). 
\end{aligned} \tag{5}$$

Now consider the more general case where [g(x₁), g(x₂)] ⊄ Σ. Find a set of points y₁, …, y_{τ−1} such that the line segments [g(x₁), g(y₁)], [g(y₁), g(y₂)], …, [g(y_{τ−1}), g(x₂)] are all inside Σ. By the definition of d̃_Σ, (5), and Lemma 9, d̃_Σ(g(x₁), g(x₂)) can be upper bounded by

$$\begin{aligned}
\inf_{u_{1:\tau-1} \in \Sigma}\; & d_\Sigma(g(x_1), u_1) + d_\Sigma(u_1, u_2) + \cdots + d_\Sigma(u_{\tau-1}, g(x_2)) \\
&\le d_\Sigma(g(x_1), g(y_1)) + d_\Sigma(g(y_1), g(y_2)) + \cdots + d_\Sigma(g(y_{\tau-1}), g(x_2)) \\
&\le 4L_\Sigma^2 L_\Omega^2 R_\epsilon(1+2R_\epsilon)\big(d_\Omega(x_1,y_1) + d_\Omega(y_1,y_2) + \cdots + d_\Omega(y_{\tau-1},x_2)\big) \\
&\le 4L_\Sigma^2 L_\Omega^2 R_\epsilon(1+2R_\epsilon)\, d_\Omega(x_1,x_2). \qquad \Box
\end{aligned}$$

### 3.3 Total Variation Inequality

In this section, we show that if two points u and v are close to each other, then P_u and P_v are also close. First we show that if the two points are close to each other, then they have similar views.

###### Lemma 11 (Overlapping Views).

Given the curvature κ defined in Assumption 3, for any u, v ∈ Σ such that |u − v| ≤ ε′ and d(u, ∂Σ) ≥ ε,

$$P_u(\{x : x \notin \mathrm{view}(v)\}) \le \max\left(\frac{4}{\pi}, \frac{\kappa}{\sin(\pi/8)}\right)\frac{\epsilon'}{\epsilon}.$$

The proof is in Appendix A. Next we define some notation and show some useful inequalities. For u ∈ Σ, let u′ be a random point obtained by making one step of Hit-and-Run from u. Define F by F(u) = P(|u − u′| ≤ h). If d(u, ∂Σ) ≥ h, a segment of length at least 2h of any chord passing through u is inside the ball of radius h around u. Thus F(u) is at least 2h divided by the chord length, which implies

$$F(u) \ge \frac{h}{16}. \tag{6}$$

Intuitively, the total variation inequality implies that if and are close geometrically, then their proposal distributions must be close as well.

###### Lemma 12.

Let u, v ∈ Σ be two points that see each other. Let ε = min(d(u, ∂Σ), d(v, ∂Σ)). Suppose that

$$d_\Sigma(u,v) < \frac{\epsilon}{24\,D_\Sigma}$$

and that |u − v| is sufficiently small.

Then,

$$|P_u - P_v| < 1 - \frac{\epsilon}{8e^4 D_\Sigma}.$$

The proof is in Appendix A. It uses ideas from the proof of Lemma 9 of Lovasz-1999, which heavily relies on the convexity of the space; this does not hold in our case. We overcome the difficulties using the low curvature assumption and the fact that u and v are sufficiently far from the boundary.

### 3.4 Putting Everything Together

Next we bound the conductance of Hit-and-Run.

###### Lemma 13.

Let

$$\delta = \frac{9r}{320 e^4 n L_\Omega D_\Sigma}, \qquad G = 16\min\left(\frac{\pi}{4}, \frac{\sin(\pi/8)}{\kappa}\right), \qquad \epsilon' = \frac{9r}{20n}, \qquad N = \frac{9r}{80 n L_\Sigma^2 L_\Omega^3 R_{\epsilon'}(1+2R_{\epsilon'})},$$

where r is the radius of the ball Ω (so r ≥ 1). The conductance of Hit-and-Run is at least

$$\frac{\delta}{4}\left(\frac{2}{5 n D_\Omega} \wedge N\left(\frac{1}{24 D_\Sigma} \wedge \frac{2}{\sqrt{n}}\left(\frac{1}{8\sqrt{n}} \wedge G\right)\right)\right).$$

The proof is in Appendix A. In proving this lemma, the non-convexity of Σ is especially troubling when points are close to the boundary. We overcome this difficulty by using the isoperimetric inequality shown in Theorem 7, which is in terms of average distances instead of minimum distances. This enables us to ignore points that are very close to the boundary.

If we treat L_Ω, L_Σ, κ, r, D_Ω, and D_Σ as constants and collect all constants in C, we obtain a lower bound of C/n³ for the conductance. Now we are ready to prove the main theorem.

###### Proof of Theorem 5.

Using Lemma 6 and Lemma 13, |σ_t(A) − σ(A)| ≤ √M (1 − C²/(2n⁶))^t for every A ⊂ Σ, which gives the final bound after rearrangement. ∎

## 4 Planning

This section makes an empirical argument for the use of Hit-and-Run in trajectory planning. In the first of two experiments, the state space is a position vector constrained to the map illustrated by the bottom plots of Figure 2. The second experiment also includes two dimensions of velocity in the state, limits state transitions to those that respect the map as well as kinematics, and requires the planner to control the system explicitly (by specifying an acceleration vector for every time step). We will show that Hit-and-Run outperforms RRT in both cases by requiring fewer transitions to reach the goal state across a wide variety of map difficulties.

### 4.1 Position only

The state starts at the bottom left of the spiral and the goal is the top right. Both algorithms are implemented as described in the introduction. The number of transitions needed to reach the goal is plotted for both algorithms as a function of the width of the spiral arms; the larger the width, the easier the problem.

The results are presented in Figure 2. The top plot shows the number of transitions needed by both algorithms as the width of the arms changes, averaged over 500 independent runs. We see that Hit-and-Run outperforms RRT on all but the hardest problems, usually by a large margin. The two lower plots show the sample points produced by one run with width equal to 1.2; we see that RRT has more uniform coverage, but that Hit-and-Run takes large steps along straight sections, which explains its faster exploration.

RRT is slow on this problem because in many rounds the tree does not grow in the right direction. For example, at the beginning the tree needs to grow upwards, but most random samples bias the growth to the right. Because Hit-and-Run only considers the space that is visible from the current point, it is less sensitive to the geometry of the free space. We can make this problem arbitrarily hard for RRT by making the middle part of the spiral fatter; Hit-and-Run, on the other hand, is insensitive to such changes. Additionally, the growth of the RRT tree can become very slow towards the end. This is because the rest of the tree absorbs most samples, and the tree grows toward the goal only if the random point falls in the vicinity of the goal.

### 4.2 Kinematic Planning

In this set of simulations, we constrain the state transitions to adhere to the laws of physics: the state propagates forward under kinematics until it exits the permissible map, in which case it stops inelastically at the boundary. The position map is the two-turn corridor illustrated in the bottom plots of Figure 3. Both algorithms propose points in a manner analogous to the previous section (where a desired speed is sampled in addition to a desired position); then, the best acceleration vector in the unit ball is calculated and the sample is propagated forward by the kinematics. If the sample point encounters the boundary, the velocity is zeroed. Both RRT and Hit-and-Run are constrained to use the same controller, and the only difference is which points are proposed.
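The transition model described above (propagate under kinematics, stop inelastically at the boundary) can be sketched as follows; the time step, substep count, and membership test are illustrative assumptions rather than the paper's exact simulator:

```python
import numpy as np

def propagate(pos, vel, accel, membership, dt=0.1, substeps=10):
    """Integrate the state forward under the chosen acceleration; if the
    position exits the permissible map, stop inelastically (zero velocity)."""
    pos = np.asarray(pos, dtype=float).copy()
    vel = np.asarray(vel, dtype=float).copy()
    h = dt / substeps
    for _ in range(substeps):
        new_pos = pos + vel * h
        if membership(new_pos):
            pos = new_pos
            vel = vel + accel * h
        else:
            vel = np.zeros_like(vel)   # inelastic stop at the boundary
            break
    return pos, vel
```

A state that drifts inside the map keeps its velocity, while one that reaches the boundary has its velocity zeroed, matching the "stops inelastically" rule.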

We see that Hit-and-Run again outperforms RRT across a large gamut of path widths by as much as a factor of three. The bottom two plots are of a typical sample path, and we see that Hit-and-Run has two advantages: it accelerates down straight hallways, and it samples more uniformly from the state space. In contrast, RRT wastes many more samples along the boundaries.

## 5 Conclusions and Future Work

This paper has two main contributions. First, we use a measure-preserving bilipschitz map to extend the analysis of the Hit-and-Run random walk to non-convex sets. Mixing time bounds for non-convex sets open up many applications, for example non-convex optimization via simulated annealing and similar methods. The second contribution of this paper has been to study one such application: the planning problem.

In contrast to RRT, using Hit-and-Run for planning has stronger guarantees on the number of samples needed and faster convergence in some cases. It also avoids the need for a sampling oracle for Σ, since it combines the search with an approximate sampling oracle. One drawback is that the sample paths of Hit-and-Run are not pruned and are therefore longer than the RRT paths. Hybrid approaches that yield short paths but also explore quickly are a promising future direction.

## Appendix A Proofs

###### Proof of Lemma 11.

We say a line segment L is not fully visible from a point v if there exists a point on the line segment that is not visible from v; we denote this event by L ∉ view(v). Let L be the line segment chosen by Hit-and-Run from u, so that, as the next point in the Markov chain, Hit-and-Run chooses a point uniformly at random from L. We know that

$$P_u(\{x : x \notin \mathrm{view}(v)\}) \le P_u(\{L : L \notin \mathrm{view}(v)\}),$$

so it suffices to show

$$P_u(\{L : L \notin \mathrm{view}(v)\}) \le \max\left(\frac{4}{\pi}, \frac{\kappa}{\sin(\pi/8)}\right)\frac{\epsilon'}{\epsilon}. \tag{7}$$

To sample the line segment L, we first sample a random two-dimensional plane containing u and v, and then sample the line segment inside this plane. To prove (7), we show that in any two-dimensional plane containing u and v, the ratio of the invisible to the visible region is bounded by the right-hand side of (7).

Consider the geometry shown in Figure 4(a). Let H be the intersection of Σ and a two-dimensional plane containing u and v. For a line ℓ and points q and u, we write [q, ℓ, u] to denote that u and a small neighborhood of q on ∂H are on opposite sides of ℓ. For example, in Figure 4(a), we have [q, ℓ(v, q), u]. Define a subset

$$Q = \{q \in H : \ell(v,q) \text{ is tangent to } H \text{ at } q \text{ and } [q, \ell(v,q), u]\}.$$

Any line such that