Rectangular Pyramid Partitioning using Integrated Depth Sensors (RAPPIDS): A Fast Planner for Multicopter Navigation

03/02/2020 ∙ Nathan Bucki, et al. ∙ UC Berkeley

We present a novel multicopter trajectory planning algorithm (RAPPIDS) that is capable of quickly finding local collision-free trajectories given a single depth image from an onboard camera. The algorithm leverages a new pyramid-based spatial partitioning method that enables rapid collision detection between candidate trajectories and the environment. Due to its efficiency, the algorithm can be run at high rates on computationally constrained hardware, evaluating thousands of candidate trajectories in milliseconds. The performance of the algorithm is compared to existing collision checking methods in simulation, showing our method to be capable of evaluating orders of magnitude more trajectories per second. Experimental results are presented showing a quadcopter quickly navigating a previously unseen cluttered environment by running the algorithm on an ODROID-XU4 at 30 Hz.


I Introduction

The ability to perform high-speed flight in cluttered, unknown environments can enable a number of useful tasks, such as the navigation of a vehicle through previously unseen areas and rapid mapping of new environments. Many existing planning algorithms for navigation in unknown environments have been developed for multicopters, and can generally be classified as map-based algorithms, memoryless algorithms, or a mixture of the two.

Map-based algorithms first fuse sensor data into a map of the surrounding environment, and then perform path planning and collision checking using the stored map. For example, [1] uses a local map to solve a nonconvex optimization problem that returns a smooth trajectory which remains far from obstacles. Similarly, [2], [3], [4], and [5] each find a series of convex regions in the free-space of a dynamically updated map, and then use optimization-based methods to find a series of trajectories through the convex regions. Although these methods are generally able to avoid getting stuck in local minima (e.g. a dead end at the end of a hallway) due to the use of a reasonably large map, they generally require long computation times to fuse recent sensor data into the map.

Memoryless algorithms, however, only use the latest sensor measurements for planning. For example, [6] and [7] both use depth images to perform local planning by organizing the most recently received depth data into a k-d tree, using the k-d tree to perform collision checking on a number of candidate trajectories, and then choosing the optimal collision-free candidate trajectory to track. A different memoryless algorithm is presented in [8] which does not first convert the depth image into a point cloud, but instead inflates obstacles in the image based on the size of the vehicle and plans directly in image space. In [9], a significant portion of computation is performed offline in order to speed up online collision checking. The space around the vehicle is first split into a grid, a finite set of candidate trajectories are generated, and the grid cells with which each trajectory collides are then computed. Thus, when flying the vehicle, if an obstacle is detected in a given grid cell, the corresponding trajectories can be quickly determined to be in collision. However, such offline methods have the disadvantage of constraining the vehicle to a less expressive set of possible candidate trajectories, e.g. forcing the vehicle to only travel at a single speed.

Several algorithms also leverage previously gathered data while handling the latest sensor measurements separately, allowing for more collision-free trajectories to be found than when using memoryless methods while maintaining a fast planning rate. In [10] a number of recent depth images are used to find the minimum-uncertainty view of a queried point in space, essentially giving the vehicle a wider field of view for planning. In [11] the most recent depth image is both organized into a k-d tree and fused into a local map, allowing for rapid local planning in conjunction with slower global planning.

Although the previously discussed planning algorithms have been shown to perform well in complex environments, they typically require the use of an onboard computer with processing power roughly equivalent to a modern laptop computer. This requirement can significantly increase the cost, weight, and power consumption of a vehicle compared to one with more limited computational resources. We address this problem by introducing a novel planning algorithm which uses a new spatial partitioning and collision checking method to find collision-free trajectories through the environment at low computational cost, enabling rapid path planning on vehicles with significantly lower computational resources than previously developed systems.

The algorithm, classified as a memoryless algorithm, takes the latest vehicle state estimate and a single depth image taken from onboard the vehicle as input. The depth image is used to generate a number of rectangular pyramids that approximate the free space in the environment. As described in later sections, the use of pyramids in conjunction with a continuous-time representation of the vehicle trajectory allows for any given trajectory to be efficiently labeled as either remaining in the free space, i.e. inside the generated pyramids, or as in collision with the environment. Thus, a large number of candidate trajectories can be quickly generated and checked for collisions, allowing for the lowest cost trajectory, as specified by a user provided cost function, to be chosen for tracking until the next depth image is captured. Furthermore, by choosing a continuous-time representation of the candidate trajectories, each trajectory can be quickly checked for input feasibility using existing methods.

The use of pyramids to approximate the free space is advantageous because they can be created efficiently by exploiting the structure of a depth image, they can be generated on an as-needed basis (avoiding the up-front computation cost associated with other spatial partitioning data structures such as k-d trees), and because they inherently prevent occluded/unknown space from being marked as free space. Additionally, because our method is a memoryless method rather than a map-based method, it is robust to common mapping errors resulting from poor state estimation (e.g. incorrect loop closures).

II System Model and Relevant Properties

In this section we describe the form of the trajectories used for planning and several of their relevant properties. These properties are later exploited to perform efficient collision checking between the trajectories and the environment.

We assume the vehicle is equipped with sensors capable of producing depth images that can be modeled using the standard pinhole camera model with focal length $f$. Let the depth-camera-fixed frame be located at the focal point of the image with z-axis perpendicular to the image plane. A point at position $(x, y, z)$ written in the depth-camera-fixed frame is then located $fx/z$ pixels horizontally and $fy/z$ pixels vertically from the image center, with depth value $z$.
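For concreteness, this projection can be sketched as follows in C++ (the language of our implementation); the struct and member names are illustrative and are not taken from our released code.

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Pinhole depth-camera model with focal length f (in pixels) and principal
// point (cx, cy).
struct PinholeCamera {
  double f, cx, cy;

  // Projects a point expressed in the depth-camera-fixed frame to pixel
  // coordinates (u, v) and a depth value. Fails for points at or behind
  // the focal point.
  bool project(const Vec3& p, double& u, double& v, double& depth) const {
    if (p.z <= 0.0) return false;
    u = cx + f * p.x / p.z;  // f*x/z pixels horizontally from image center
    v = cy + f * p.y / p.z;  // f*y/z pixels vertically from image center
    depth = p.z;             // depth value stored at that pixel
    return true;
  }
};
```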

II-A Trajectory and Collision Model

We follow [12] and [13] in modeling the desired position trajectory of the center of mass of the vehicle as a fifth order polynomial in time, which corresponds to the minimum average jerk trajectory between two states of the vehicle. Let $\mathbf{s}(t)$, $\dot{\mathbf{s}}(t)$, and $\ddot{\mathbf{s}}(t)$ denote the position, velocity, and acceleration of the center of mass of the vehicle relative to a fixed point in an inertial frame. The desired position trajectory is then defined by the initial state of the vehicle, given by $\mathbf{s}_0$, $\dot{\mathbf{s}}_0$, and $\ddot{\mathbf{s}}_0$, the duration of the trajectory $T$, and the desired state of the vehicle at the end of the trajectory, given by $\mathbf{s}(T)$, $\dot{\mathbf{s}}(T)$, and $\ddot{\mathbf{s}}(T)$. A desired position trajectory of the vehicle can then be written as

$$\mathbf{s}(t) = \frac{\boldsymbol{\alpha}}{120}t^5 + \frac{\boldsymbol{\beta}}{24}t^4 + \frac{\boldsymbol{\gamma}}{6}t^3 + \frac{\ddot{\mathbf{s}}_0}{2}t^2 + \dot{\mathbf{s}}_0 t + \mathbf{s}_0 \tag{1}$$

where $\boldsymbol{\alpha}$, $\boldsymbol{\beta}$, and $\boldsymbol{\gamma}$ are constants that depend only on the initial state, the desired end state, and $T$. Note that the thrust direction of the vehicle (and thus its roll and pitch) is defined by its acceleration $\ddot{\mathbf{s}}(t)$. We refer the reader to [12] for details regarding this relation, as well as methods for quickly checking whether constraints on the minimum and maximum thrust and the magnitude of the angular velocity of the multicopter are satisfied throughout the duration of the trajectory. We define $\mathbf{s}(t)$ as a collision-free trajectory if a sphere $S$ centered at $\mathbf{s}(t)$ that contains the vehicle does not intersect with any obstacles for the duration of the trajectory.
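A minimal sketch of evaluating (1) along a single axis is given below; the coefficient names mirror (1), but the code is illustrative rather than our flight implementation.

```cpp
// One axis of the quintic motion primitive (1). alpha, beta, gamma are the
// constants determined by the boundary conditions and duration T (see [12]);
// p0, v0, a0 are the initial position, velocity, and acceleration.
struct QuinticAxis {
  double alpha, beta, gamma;
  double p0, v0, a0;

  // s(t) = alpha/120 t^5 + beta/24 t^4 + gamma/6 t^3 + a0/2 t^2 + v0 t + p0,
  // evaluated in Horner form.
  double position(double t) const {
    return ((((alpha / 120.0 * t + beta / 24.0) * t + gamma / 6.0) * t
             + a0 / 2.0) * t + v0) * t + p0;
  }

  // Time derivative of position(t), a fourth order polynomial.
  double velocity(double t) const {
    return (((alpha / 24.0 * t + beta / 6.0) * t + gamma / 2.0) * t + a0) * t
           + v0;
  }
};
```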

We additionally define a similar trajectory $\bar{\mathbf{s}}(t)$ whose initial position coincides with the origin of the depth-camera-fixed frame, such that

$$\bar{\mathbf{s}}(t) = \mathbf{s}(t) - \mathbf{s}_0, \qquad \bar{\mathbf{s}}(0) = \mathbf{0} \tag{2}$$

written in the depth-camera-fixed frame. The trajectory $\bar{\mathbf{s}}(t)$ is used for collision checking rather than directly using the trajectory of the center of mass because $\bar{\mathbf{s}}(t)$ originates at the focal point of the depth image, allowing for the use of the advantageous properties of $\bar{\mathbf{s}}(t)$ described in the following subsections. Let $\bar{S}$ be a sphere centered at $\bar{\mathbf{s}}(t)$ with radius $r$ that contains the sphere $S$. If the larger sphere $\bar{S}$ does not intersect with any obstacles for the duration of the trajectory, then the sphere $S$ containing the vehicle is also verified to remain collision-free. Thus, we can use $\bar{\mathbf{s}}(t)$ and its advantageous properties to check whether $\mathbf{s}(t)$ is collision-free, at the expense of a small amount of conservativeness related to the difference in size between the outer sphere $\bar{S}$ and the inner sphere $S$.

II-B Trajectory Sections with Monotonically Changing Depth

We split a given trajectory, e.g. $\bar{\mathbf{s}}(t)$, into sections with monotonically increasing or decreasing distance along the z-axis of the depth-camera-fixed frame (i.e. into the depth image). First, we compute the rate of change of $\bar{\mathbf{s}}(t)$ along the camera z-axis $\hat{\mathbf{e}}_z$ as $\dot{\bar{\mathbf{s}}}(t)^\top \hat{\mathbf{e}}_z$. Then, by solving for the times at which this rate of change equals zero, we find the points at which $\bar{\mathbf{s}}(t)$ might change direction along $\hat{\mathbf{e}}_z$, defined as

$$\mathcal{T}_z = \left\{ t \in [0, T] : \dot{\bar{\mathbf{s}}}(t)^\top \hat{\mathbf{e}}_z = 0 \right\} \tag{3}$$

Note that $\mathcal{T}_z$ can be computed in closed form because it only requires finding the roots of the fourth order polynomial $\dot{\bar{\mathbf{s}}}(t)^\top \hat{\mathbf{e}}_z$.

Splitting the trajectory into these monotonic sections is advantageous for collision checking because the point of each monotonic section that is deepest in the image can be computed by simply evaluating the endpoints of the section. Thus, we can avoid performing collision checking against any obstacles at a greater depth than the deepest point of the trajectory.
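This splitting step can be sketched as follows, reusing QuinticAxis from the previous sketch for the camera-frame z-axis; solveQuartic stands in for any closed-form quartic root finder and is assumed rather than shown.

```cpp
#include <algorithm>
#include <vector>

// Assumed helper: returns the real roots of
// c4 x^4 + c3 x^3 + c2 x^2 + c1 x + c0 = 0, computed in closed form.
std::vector<double> solveQuartic(double c4, double c3, double c2, double c1,
                                 double c0);

// Returns the ordered times in [0, T] bounding the depth-monotonic sections,
// found where the z-velocity quartic of (3) vanishes inside (0, T).
std::vector<double> monotonicSplitTimes(const QuinticAxis& z, double T) {
  std::vector<double> splits = {0.0, T};
  for (double t : solveQuartic(z.alpha / 24.0, z.beta / 6.0, z.gamma / 2.0,
                               z.a0, z.v0)) {
    if (t > 0.0 && t < T) splits.push_back(t);  // keep only interior roots
  }
  std::sort(splits.begin(), splits.end());
  return splits;  // consecutive pairs delimit the monotonic sections
}
```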

II-C Trajectory-Plane Intersections

A similar method can be used to quickly determine if and when a given trajectory intersects an arbitrary plane defined by a point $\mathbf{p}$ and unit normal $\hat{\mathbf{n}}$. Let the distance of the trajectory from the plane be written as $d(t) = (\bar{\mathbf{s}}(t) - \mathbf{p})^\top \hat{\mathbf{n}}$. The set of times at which $\bar{\mathbf{s}}(t)$ intersects the given plane is then defined as

$$\mathcal{T}_p = \left\{ t \in [0, T] : (\bar{\mathbf{s}}(t) - \mathbf{p})^\top \hat{\mathbf{n}} = 0 \right\} \tag{4}$$

requiring the solution of the equation $d(t) = 0$. Unfortunately, $d(t)$ is in general a fifth order polynomial, meaning that its roots cannot be computed in closed form and require more computationally expensive methods to find.

To this end, we extend [12] and [13] by presenting the conditions under which finding $\mathcal{T}_p$ reduces to finding the roots of a fourth order polynomial. Specifically, if a single crossing time is known a priori, $\mathcal{T}_p$ can be found by factoring out the known root and solving the remaining fourth order polynomial. This condition is satisfied, for example, by any plane with $(\bar{\mathbf{s}}(0) - \mathbf{p})^\top \hat{\mathbf{n}} = 0$ (i.e. a plane that intersects the initial position of the trajectory), so that $t = 0$ is a known root and $d(t) = 0$ becomes

$$d(t) = t \left( \frac{\boldsymbol{\alpha}^\top \hat{\mathbf{n}}}{120} t^4 + \frac{\boldsymbol{\beta}^\top \hat{\mathbf{n}}}{24} t^3 + \frac{\boldsymbol{\gamma}^\top \hat{\mathbf{n}}}{6} t^2 + \frac{\ddot{\bar{\mathbf{s}}}_0^\top \hat{\mathbf{n}}}{2} t + \dot{\bar{\mathbf{s}}}_0^\top \hat{\mathbf{n}} \right) = 0 \tag{5}$$

Thus, the four remaining unknown roots of (5) can be computed using the closed-form solution for the roots of a fourth order polynomial, allowing $\mathcal{T}_p$ to be computed extremely quickly. As described in the following section, we exploit this property to quickly determine when a given trajectory leaves a given pyramid.
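The reduction in (5) can be sketched as follows: with the constant term equal to zero, dividing by $t$ leaves a quartic whose coefficients are the dot products of the vector coefficients of (1) with the plane normal. Vec3 and solveQuartic are reused from the sketches above.

```cpp
// Dot product of two vectors.
double dot(const Vec3& a, const Vec3& b) {
  return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Times in (0, T] at which the trajectory crosses a plane with unit normal n
// passing through the trajectory's initial position. alpha, beta, gamma, a0,
// v0 are the vector coefficients of (1)-(2); the known t = 0 root has been
// factored out, leaving the quartic of (5).
std::vector<double> planeCrossingTimes(const Vec3& alpha, const Vec3& beta,
                                       const Vec3& gamma, const Vec3& a0,
                                       const Vec3& v0, const Vec3& n,
                                       double T) {
  std::vector<double> crossings;
  for (double t : solveQuartic(dot(alpha, n) / 120.0, dot(beta, n) / 24.0,
                               dot(gamma, n) / 6.0, dot(a0, n) / 2.0,
                               dot(v0, n))) {
    if (t > 0.0 && t <= T) crossings.push_back(t);
  }
  return crossings;
}
```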

III Algorithm Description

In this section we describe our novel pyramid-based spatial partitioning method, its use in performing efficient collision checking, and the algorithm used to search for the best collision-free trajectory. An implementation of the algorithm can be found at https://github.com/nlbucki/RAPPIDS.

III-A Pyramid Generation

For each depth image generated by the vehicle's depth sensor, we partition the free space of the environment using a number of rectangular pyramids, where the apex of each pyramid is located at the origin of the depth-camera-fixed frame (i.e. at the focal point of the camera), and the base of each pyramid is located at some depth and is perpendicular to the z-axis of the depth-camera-fixed frame, as shown in Figure 1.

Fig. 1: 2D side view illustrating the generation of a single pyramid (shown in green) from a single depth image and a given point. The depth values of the pixels are used to define the division between free space and occupied space. Because the depth camera has a limited field of view, we additionally consider any space outside the field of view farther than a given distance from the camera to be unknown space, which is treated the same as occupied space. The expanded pyramid (dash-dotted line) is first generated such that it does not contain any portion of occupied or unknown space, and is then used to define the final pyramid such that it is at least the vehicle radius from any obstacles.

The depth value stored in each pixel of the image is used to define the separation of free space and occupied space. We additionally treat any space located outside the field of view of the camera beyond a given depth from the camera focal point as occupied space. Each pyramid is defined such that while the trajectory $\bar{\mathbf{s}}(t)$ remains inside it, the sphere $\bar{S}$ containing the vehicle will not intersect with any occupied space, meaning that the segment of $\bar{\mathbf{s}}(t)$ inside the pyramid can be considered collision-free.

Note that if $\bar{\mathbf{s}}(t)$ remains inside the pyramid, we can not only guarantee that the vehicle will not collide with any obstacles detected by the depth camera, but also that the vehicle will not collide with any occluded obstacles. This differs from other methods that treat each pixel in the depth image as an individual obstacle to be avoided, which can result in the generation of over-optimistic trajectories that may collide with unseen obstacles. Furthermore, our method straightforwardly incorporates field of view constraints, allowing for the avoidance of nearly all unseen obstacles in addition to those detected by the depth camera.

The function InflatePyramid is used to generate a pyramid by taking an initial point $\mathbf{p}$ as input and returning either a pyramid containing $\mathbf{p}$ or a failure indicator. We start by projecting $\mathbf{p}$ into the depth image and finding the nearest pixel. Then, pixels of the image are read in a spiral about that pixel in order to compute the largest possible expanded rectangle that does not contain any occupied space. Finally, the pyramid is computed by shrinking the expanded pyramid such that each of its faces is at least the vehicle radius $r$ from occupied space. Further details regarding our implementation of InflatePyramid can be found in the repository referenced above.

This method additionally allows pyramid generation failure to be detected extremely quickly. For example, consider the case where the initial point $\mathbf{p}$ lies inside occupied space. Then, only the depth value of the nearest pixel must be read before concluding that no pyramid containing $\mathbf{p}$ can be generated, requiring only a single pixel of the depth image to be processed. This property greatly reduces the number of operations required to determine that a given point is in collision with the environment.
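An illustrative outline of this early-exit behavior is given below; the depth-buffer layout, names, and signatures are assumptions made for this sketch, and the spiral search and face shrinking are only indicated by comments (the complete logic is available in the repository linked above).

```cpp
#include <optional>

// A pyramid with apex at the camera origin, described by the pixel bounds of
// its base rectangle and the depth of its base plane.
struct Pyramid {
  double left, top, right, bottom;  // pixel bounds of the base rectangle
  double baseDepth;                 // depth of the base plane (meters)
};

// Attempts to generate a pyramid containing the point p0 (camera frame). The
// depth image is assumed row-major with one float depth value (meters) per
// pixel.
std::optional<Pyramid> inflatePyramid(const Vec3& p0, const PinholeCamera& cam,
                                      const float* depthImage, int width,
                                      int height, double vehicleRadius) {
  double u, v, d;
  if (!cam.project(p0, u, v, d)) return std::nullopt;
  const int px = static_cast<int>(u), py = static_cast<int>(v);
  if (px < 0 || px >= width || py < 0 || py >= height) return std::nullopt;
  // Early exit: the measured surface at the nearest pixel is closer than p0,
  // so p0 lies in occupied space and no pyramid containing it can exist.
  // Only a single pixel of the depth image had to be read.
  if (depthImage[py * width + px] <= d) return std::nullopt;
  // ... otherwise, read pixels in a spiral about (px, py) to find the largest
  // expanded rectangle free of occupied space, then shrink each face inward
  // by vehicleRadius to obtain the final pyramid ...
  return Pyramid{};  // placeholder for the computed pyramid
}
```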

III-B Collision Checking Using Pyramids

Algorithm 1 describes how the set of all previously generated pyramids $\mathcal{P}$ is used to determine whether a given trajectory will collide with the environment. A trajectory is considered collision-free if it remains inside the space covered by $\mathcal{P}$ for its full duration. An example of Algorithm 1 is shown in Figure 2.

1: input: Trajectory $\bar{\mathbf{s}}$ to be checked for collisions, set of all previously found pyramids $\mathcal{P}$, depth image $\mathcal{D}$
2: output: Boolean indicating if trajectory is collision-free, updated set of pyramids $\mathcal{P}$
3: function IsCollisionFree($\bar{\mathbf{s}}$, $\mathcal{P}$, $\mathcal{D}$)
4:     $\mathcal{L} \leftarrow$ GetMonotonicSections($\bar{\mathbf{s}}$)
5:     while $\mathcal{L}$ is not empty do
6:         $\bar{\mathbf{s}}_i \leftarrow$ Pop($\mathcal{L}$)
7:         $\mathbf{p}_{deep} \leftarrow$ DeepestPoint($\bar{\mathbf{s}}_i$)
8:         $P \leftarrow$ FindContainingPyramid($\mathcal{P}$, $\mathbf{p}_{deep}$)
9:         if $P$ is null then
10:             $P \leftarrow$ InflatePyramid($\mathbf{p}_{deep}$, $\mathcal{D}$)
11:             if $P$ is null then
12:                 return false
13:             $\mathcal{P} \leftarrow \mathcal{P} \cup \{P\}$
14:         $t_{exit} \leftarrow$ FindDeepestLateralFaceIntersection($\bar{\mathbf{s}}_i$, $P$)
15:         if $t_{exit}$ is not null then
16:             $\mathcal{L} \leftarrow \mathcal{L} \cup \{$section of $\bar{\mathbf{s}}_i$ after $t_{exit}$, outside $P\}$
17:     return true
Algorithm 1 Single Trajectory Collision Checking

We first split the trajectory into sections with monotonically changing depth as described in Section II-B, and insert the sections into list $\mathcal{L}$ using GetMonotonicSections (line 4). Then, for each monotonic section $\bar{\mathbf{s}}_i$, we compute its deepest point $\mathbf{p}_{deep}$ (i.e. one of the endpoints of the section), and try to find a pyramid containing that point (lines 6-8). The function FindContainingPyramid (line 8) returns either a pyramid $P$ that contains $\mathbf{p}_{deep}$ or null, indicating that no containing pyramid was found. If no pyramid in $\mathcal{P}$ contains $\mathbf{p}_{deep}$, we attempt to generate a new pyramid using the method described in the previous subsection (line 10); if pyramid generation fails, we declare the trajectory to be in collision (line 12).

Fig. 2: 2D example of the collision checking method described by Algorithm 1. The trajectory is first split into sections with monotonically changing depth, which are stored in list $\mathcal{L}$. Top: a single trajectory section is chosen from $\mathcal{L}$. The deepest point of the section is computed, and a pyramid containing it is generated. The section is then subdivided into a part that remains inside the pyramid (black) and a part that leaves the pyramid (gray). Bottom: the part that leaves the first pyramid is checked for collisions in the same manner. A second pyramid is generated using the deepest point of that part, and is then used to verify that it does not collide with the environment.

Next, we try to compute the deepest point at which the monotonic section $\bar{\mathbf{s}}_i$ intersects one of the four lateral faces of the pyramid $P$. Using the method described in Section II-C, we compute the times at which $\bar{\mathbf{s}}_i$ intersects each lateral face of the pyramid, and choose the time $t_{exit}$ at which the trajectory has the greatest depth (line 14). If $\bar{\mathbf{s}}_i$ does not cross any of the lateral faces of the pyramid, then it must remain inside the pyramid and the section can be declared collision-free. However, if $\bar{\mathbf{s}}_i$ does cross one of the lateral faces, we split it at $t_{exit}$ and add the part of $\bar{\mathbf{s}}_i$ outside the pyramid to $\mathcal{L}$ (line 16). Thus, if each subsection of the trajectory is found to be inside the space covered by the set of pyramids $\mathcal{P}$, the trajectory is declared collision-free (line 17).
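A compact C++ transcription of Algorithm 1 is sketched below, reusing the Vec3 and Pyramid types from the earlier sketches; the helper functions are hypothetical stand-ins for the operations named in the algorithm.

```cpp
#include <deque>
#include <optional>
#include <vector>

struct Section { double t0, t1; };  // one depth-monotonic piece of the trajectory
struct DepthImage;                  // opaque here

// Hypothetical helpers corresponding to the operations named in Algorithm 1:
Vec3 deepestPoint(const Section&);                                 // line 7
const Pyramid* findContainingPyramid(const std::vector<Pyramid>&,
                                     const Vec3&);                 // line 8
std::optional<Pyramid> tryInflatePyramid(const Vec3&, const DepthImage&);
std::optional<double> deepestLateralFaceCrossing(const Section&,
                                                 const Pyramid&);  // line 14

bool isCollisionFree(std::deque<Section> L, std::vector<Pyramid>& P,
                     const DepthImage& D) {
  while (!L.empty()) {
    Section sec = L.front();
    L.pop_front();
    const Vec3 pDeep = deepestPoint(sec);
    const Pyramid* pyr = findContainingPyramid(P, pDeep);
    if (pyr == nullptr) {
      std::optional<Pyramid> fresh = tryInflatePyramid(pDeep, D);
      if (!fresh) return false;  // pyramid generation failed: in collision
      P.push_back(*fresh);
      pyr = &P.back();
    }
    // If the section exits through a lateral face, re-check the part outside.
    if (auto tExit = deepestLateralFaceCrossing(sec, *pyr)) {
      L.push_back(Section{*tExit, sec.t1});
    }
  }
  return true;  // every subsection lies inside some pyramid
}
```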

III-C Planning Algorithm

Algorithm 2 describes the path planning algorithm used in this work. The algorithm takes as input the most recently received depth image and vehicle state estimate, where the state estimate partially defines each candidate trajectory as given in (2). Within a user-specified computation time budget, the algorithm repeatedly generates and evaluates candidate trajectories for cost and the satisfaction of constraints, returning the lowest cost trajectory that satisfies all constraints. We choose to use a random search algorithm due to its simplicity and probabilistic optimality, though the collision checking algorithm presented in the previous subsection can be used in conjunction with other planning algorithms as well (see [14], for example).

1: input: Latest depth image $\mathcal{D}$ and vehicle state estimate
2: output: Lowest cost collision-free trajectory found, or an undefined trajectory (indicating failure)
3: function FindLowestCostTrajectory($\mathcal{D}$)
4:     $\bar{\mathbf{s}}^* \leftarrow$ undefined with Cost($\bar{\mathbf{s}}^*$) $= \infty$
5:     $\mathcal{P} \leftarrow \emptyset$
6:     while computation time not exceeded do
7:         $\bar{\mathbf{s}} \leftarrow$ GetNextCandidateTraj()
8:         if Cost($\bar{\mathbf{s}}$) $<$ Cost($\bar{\mathbf{s}}^*$) then
9:             if IsDynamicallyFeas($\bar{\mathbf{s}}$) then
10:                 if IsCollisionFree($\bar{\mathbf{s}}$, $\mathcal{P}$, $\mathcal{D}$) then
11:                     $\bar{\mathbf{s}}^* \leftarrow \bar{\mathbf{s}}$
12:     return $\bar{\mathbf{s}}^*$
Algorithm 2 Lowest Cost Trajectory Search

Let GetNextCandidateTraj be defined as a function that returns a randomly generated candidate trajectory using the methods described in [12] (line 7). The function Cost is a user-specified function used to compare candidate trajectories (line 8). In this work, we define Cost as follows, where $\hat{\mathbf{d}}$ is a desired exploration direction:

$$\mathrm{Cost}(\bar{\mathbf{s}}) = -\frac{\left(\bar{\mathbf{s}}(T) - \bar{\mathbf{s}}(0)\right)^\top \hat{\mathbf{d}}}{T} \tag{6}$$

That is, better trajectories are those that move the vehicle as quickly as possible in the desired direction $\hat{\mathbf{d}}$.
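A sketch of this cost, as written in (6), is given below, reusing Vec3 and dot from the earlier sketches.

```cpp
// Cost of a candidate trajectory per (6): the negative average velocity along
// the exploration direction dHat, so candidates making faster progress along
// dHat receive lower cost. s0 and sT are the candidate's start and end
// positions, and T its duration.
double cost(const Vec3& s0, const Vec3& sT, double T, const Vec3& dHat) {
  const Vec3 disp{sT.x - s0.x, sT.y - s0.y, sT.z - s0.z};
  return -dot(disp, dHat) / T;
}
```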

The function IsDynamicallyFeas (line 9) checks whether the given candidate trajectory satisfies constraints on the total thrust and angular velocity of the vehicle using methods described in [12]. Finally, the candidate trajectory is checked for collisions with the environment using IsCollisionFree (line 10). We check for collisions with the environment last because it is the most computationally demanding step of the process.

In this way, Algorithm 2 can be used as a high-rate local planner that ensures the vehicle avoids obstacles, while a global planner that may require significantly more computation time can be used to specify high-level goals (e.g. the exploration direction ) without the need to worry about obstacle avoidance or respecting the dynamics of the vehicle. We run Algorithm 2 in a receding-horizon fashion, where each new depth image is used to compute a new collision-free trajectory. We additionally constrain our candidate trajectories to bring the vehicle to rest, so that if no feasible trajectories are found during a given planning step, the last feasible trajectory can be tracked until the vehicle comes to rest.

IV Algorithm Performance

In this section we assess the performance of the proposed algorithm in terms of its conservativeness in labeling trajectories as collision-free, its speed, and its ability to evaluate a dense set of candidate trajectories in various amounts of compute time. We additionally compare our method to other state-of-the-art memoryless planning algorithms.

To benchmark our collision checking method, we conduct various Monte Carlo simulations using a series of randomly generated synthetic depth images and vehicle states. Examples of several generated depth images are shown in Figure 3. Each image is generated by placing two thick rectangles with random orientations in front of the camera at distances sampled uniformly at random on (, ). The initial velocity of the vehicle in the camera-fixed z-direction is sampled uniformly on (, ), and the initial velocity of the vehicle in both the x-direction and y-direction is sampled uniformly on (, ). We assume the camera is mounted such that its z-axis is perpendicular to the thrust direction of the vehicle, and thus set the initial acceleration of the vehicle in both the x- and z-directions to zero. The initial acceleration in the y-direction is sampled uniformly on (, ).

The planner generates candidate trajectories that come to rest at randomly sampled positions within the field of view of the depth camera. Specifically, positions are sampled uniformly in pixel coordinates of the depth image and then deprojected at a depth sampled uniformly on (, ). The duration of each trajectory is sampled uniformly on (, ).
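This sampling scheme can be sketched as follows, reusing the PinholeCamera model from Section II; the distribution bounds are parameters here, as the numerical ranges are not reproduced.

```cpp
#include <random>

// Samples one candidate end position: a pixel drawn uniformly in the image is
// deprojected at a depth drawn uniformly on (zMin, zMax).
Vec3 sampleEndPosition(const PinholeCamera& cam, int width, int height,
                       double zMin, double zMax, std::mt19937& rng) {
  std::uniform_real_distribution<double> uDist(0.0, width);
  std::uniform_real_distribution<double> vDist(0.0, height);
  std::uniform_real_distribution<double> zDist(zMin, zMax);
  const double u = uDist(rng), v = vDist(rng), z = zDist(rng);
  // Invert the pinhole projection at the sampled depth:
  return Vec3{(u - cam.cx) * z / cam.f, (v - cam.cy) * z / cam.f, z};
}
```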

The algorithm was implemented in C++ and compiled using GCC version 5.4.0 with the -O3 optimization setting. Three platforms were used to assess performance: a laptop with an Intel i7-8550U processor set to performance mode, a Jetson TX2, and an ODROID-XU4. The algorithm is run as a single thread in all scenarios.

Fig. 3: Three examples of synthetic depth images used for benchmarking the proposed algorithm. Two rectangular obstacles are generated at different constant depths. The background is considered to be at infinite depth.

IV-A Conservativeness

We first analyze the accuracy of the collision checking method described by Algorithm 1. A key property of our method is that it will never label a trajectory as collision-free if that trajectory either collides with a detected obstacle or has the potential to collide with an occluded obstacle. Such a property is typically a requirement for collision checking algorithms used with aerial vehicles, as a trajectory mislabeled as collision-free can result in a catastrophic crash.

However, because the generated pyramids cannot exactly describe the free space of the environment, our method may erroneously label some collision-free trajectories as being in collision. In order to quantify this conservativeness, we compare our method to a ground-truth, ray-tracing based collision checking method capable of considering both field-of-view constraints and occluded obstacles. We define conservativeness as the number of trajectories erroneously labeled as in-collision divided by the total number of trajectories labeled (both correctly and incorrectly) as in-collision. A single test consists of first generating a synthetic scene and a random initial state of the vehicle as previously described. We then generate 1000 random trajectories for each scene, and perform collision checking both with our method and with the ground-truth method. The numbers of trajectories correctly and incorrectly labeled as in-collision are averaged across the generated scenes. Additionally, in order to quantify how well the environment can be described using the pyramids generated by our method, we limit the number of pyramids the collision checking algorithm is allowed to use, and repeat this test for various pyramid limits.

Figure 4 shows how the over-conservativeness (i.e. the ratio of trajectories mislabeled as in-collision to the total number of trajectories labeled as in-collision) decreases as the number of pyramids allowed for collision checking increases. The percentage of mislabeled trajectories is initially higher because the environment cannot be described with high accuracy using only a few pyramids. However, the conservativeness remains nearly constant for larger pyramid limits, indicating that our method may mislabel a small number of collision-free trajectories regardless (e.g. those in close proximity to obstacles). Note that we do not limit the number of pyramids generated when using the planning algorithm described in Algorithm 2, as we have found such a limit to be unnecessary in practice.

Fig. 4: Conservativeness of the collision checking algorithm as a function of the maximum number of pyramids allowed to be generated. We define conservativeness as the number of trajectories erroneously labeled as in-collision divided by the total number of trajectories labeled as in-collision. The free-space of the environment is described with increasing detail as more pyramids are allowed to be generated, leading to a lower number of trajectories being erroneously labeled as in-collision.

IV-B Collision Checking Speed

Next we compare our collision checking method to the state-of-the-art k-d tree based methods described in [6] and [7]. Both our method and k-d tree methods involve two major steps: the building of data structures (i.e. a k-d tree, or the pyramids described in this paper) and the use of those data structures to perform collision checking with the environment. Our method differs from k-d tree based methods, however, in its over-conservativeness. Specifically, we consider trajectories that pass through occluded space to be in collision with the environment, while k-d tree based methods only consider trajectories that pass within the vehicle radius of detected obstacles to be in collision.

We compare our method to the k-d tree methods by first limiting the amount of time allocated for pyramid generation such that it is similar to the time required to build a k-d tree as reported in [6] and [7]. We then check 1000 trajectories for collisions, and compute the average time required to check a single trajectory for collisions using the generated pyramids. Similar to [6] and [7], we use depth images of the same resolution, generated using the previously described method, and average our results over many Monte Carlo trials.

Table I shows that the average performance of our method exceeds the best-case results reported by [6] and [7]. The difference in computation time can be explained with a time complexity analysis. Let a given depth image contain $n$ pixels. Then $O(n \log n)$ operations are required to build a k-d tree, while $O(n)$ operations are required to generate a single pyramid (of which there are typically few). Checking a single trajectory for collisions using a k-d tree can require many k-d tree queries, each of which takes $O(\log n)$ time. However, collision checking a single trajectory using our method can be done in near constant time, as it only requires solving several fourth order polynomials (which is done in closed form) as described in Section II-C. Additionally, note that while an entire k-d tree must be built before being used to check trajectories for collisions, the pyramids generated by our method can be built on an as-needed basis, reducing computation time even further.

Method               | Computer    | Single Trajectory Collision Check (µs)
---------------------|-------------|---------------------------------------
Florence et al. [6]  | i7 NUC      | 56
Lopez et al. [7]     | i7-2620M    | 48
RAPPIDS (ours)       | i7-8550U    | 1.20
RAPPIDS (ours)       | Jetson TX2  | 3.81
RAPPIDS (ours)       | ODROID-XU4  | 8.72

TABLE I: Average Collision Checking Time Per Trajectory. For [6] and [7], the best reported average collision checking time per trajectory is used for comparison.

IV-C Overall Planner Performance

Finally, we describe the overall performance of the planner, i.e. Algorithm 2, using the same Monte Carlo simulation but with depth images of the same resolution as those used in the physical experiments described in the following section. The number of trajectories evaluated by the planner is used as the metric of performance, where a larger number of evaluated trajectories indicates better coverage of the trajectory search space and thus a higher likelihood of finding the lowest possible cost trajectory within the allocated planning time.

Figure 5 shows the results of running the planner in Monte Carlo trials on the i7-8550U processor, the Jetson TX2, and the ODROID-XU4 for a range of computation time limits. Naturally, as computation time increases, the average number of trajectories evaluated increases monotonically. Furthermore, we observe that the i7-8550U outperforms the Jetson TX2, which outperforms the ODROID-XU4. The difference in performance can be explained by the fact that the Jetson TX2 and especially the ODROID-XU4 are intended to be low-power devices for embedded applications. However, due to the computational efficiency of our collision checking method, even the ODROID-XU4 is capable of evaluating a sufficiently large number of trajectories within a small amount of time. This makes it feasible to use low-power devices such as the ODROID-XU4 as onboard flight computers while still achieving fast, reactive flight.

Fig. 5: Average planner performance as a function of allocated computation time across various platforms. As computation time increases, the number of trajectories evaluated increases at different rates for platforms with different amounts of computation power.

V Experimental Results

In this section we demonstrate the use of the proposed algorithm on an experimental quadcopter, shown in Figure 6. The quadcopter is equipped with an ODROID-XU4, a RealSense D435i depth camera, a RealSense T265 tracking camera, and a Crazyflie 2.0 flight controller. The ODROID-XU4 is used in order to demonstrate the feasibility of running the proposed algorithm at high rates on inexpensive, low-mass, and low-power hardware. The tracking camera provides pose estimates to the ODROID-XU4, which a Kalman filter uses to compute translational velocity estimates. Filtered acceleration estimates are obtained using the IMU onboard the Crazyflie flight controller. The depth camera is configured to capture depth images at 30 Hz, and the proposed planning algorithm is run, with a fixed computation time budget, each time a new depth image arrives, using the latest state estimate provided by the Kalman filter.

Fig. 6: Vehicle used in experiments. A RealSense D435i depth camera is used to acquire depth images, and a RealSense T265 tracking camera is used to obtain state estimates of the vehicle. The proposed algorithm is run on an ODROID-XU4, which sends desired thrust and angular velocity commands to a Crazyflie 2.0 flight controller.

The vehicle was commanded to fly through a U-shaped tunnel environment that it had not previously seen, shown in Figure 7. Each branch of the tunnel was filled with various obstacles for the vehicle to avoid. The candidate trajectories used by the planner were generated using the same method described in Section IV. A video of the experiment is attached to this paper, and can also be viewed at https://youtu.be/89OLMirtXIk.

Fig. 7: Visualization of the flight experiment in the U-shaped tunnel environment. The path of the vehicle is shown as a red line. The vehicle starts at the green sphere and ends at the red sphere. The map of the environment (top) was generated at the end of the experiment using depth images captured by the depth camera. Two images recorded by the onboard camera during the experiment are shown (bottom). A full video of the experiment is attached to this paper.

The desired exploration direction $\hat{\mathbf{d}}$ used to compute the cost of each candidate trajectory as given by (6) is set as follows. We initialize $\hat{\mathbf{d}}$ to be perpendicular to the direction of gravity and to point down the first hallway. When the vehicle is at rest and no feasible trajectories are found by the planner, $\hat{\mathbf{d}}$ is rotated to the right of the vehicle, allowing the vehicle to navigate around corners. We stop the test when the vehicle reaches the end of the second hallway. We use this method of choosing the exploration direction simply as a matter of convenience in demonstrating our algorithm in a cluttered environment. Many other methods of providing high-level goals to our algorithm could be used instead (e.g. [15]), but they are typically application dependent and thus are not discussed here.

During the experiment, the vehicle was able to find at least one collision-free trajectory in 35.3% of the planning stages. This includes planning stages where the vehicle flies close to obstacles, e.g. near walls when coming to a stop, and thus is unable to find a feasible path forward because none exists. Of the planning stages where at least one feasible trajectory was found, 2069.2 candidate trajectories and 2.9 pyramids were generated on average. The vehicle traveled approximately over , and attained a maximum speed of .

VI Conclusion

In this paper we presented a novel pyramid-based spatial partitioning method that allows for efficient collision checking between a given trajectory and the environment as represented by a depth image. The method allows multicopters to generate local collision-free trajectories at high rates with minimal computation, enabling vehicles with limited computational power to navigate cluttered environments at high speeds. A comparison to existing state-of-the-art depth-image-based path planning methods was performed via Monte Carlo simulation, showing our method to significantly reduce the computation time required to perform collision checking with the environment while being more conservative than other methods by implicitly considering occluded obstacles. Finally, real-world experiments were presented that demonstrate the use of our algorithm on computationally low-power hardware to perform fully autonomous flight in a previously unseen, cluttered environment.

Acknowledgement

This material is based upon work supported by the Berkeley Fellowship for Graduate Study. We gratefully acknowledge support from Berkeley DeepDrive through the project ‘Autonomous Aerial Robots in Dense Urban Environments’, as well as support from the China High-Speed Railway Technology Co., Ltd. The experimental testbed at the HiPeRLab is the result of contributions of many people, a full list of which can be found at hiperlab.berkeley.edu/members/.

References

  • [1] H. Oleynikova, M. Burri, Z. Taylor, J. Nieto, R. Siegwart, and E. Galceran, “Continuous-time trajectory optimization for online uav replanning,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2016, pp. 5332–5339.
  • [2] F. Gao, Y. Lin, and S. Shen, “Gradient-based online safe trajectory generation for quadrotor flight in complex environments,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).   IEEE, 2017, pp. 3681–3688.
  • [3] S. Liu, M. Watterson, S. Tang, and V. Kumar, “High speed navigation for quadrotors with limited onboard sensing,” in IEEE International Conference on Robotics and Automation (ICRA).   IEEE, 2016, pp. 1484–1491.
  • [4] J. Chen, T. Liu, and S. Shen, “Online generation of collision-free trajectories for quadrotor flight in unknown cluttered environments,” in IEEE International Conference on Robotics and Automation (ICRA).   IEEE, 2016, pp. 1476–1483.
  • [5] J. Tordesillas, B. T. Lopez, and J. P. How, “FASTER: Fast and safe trajectory planner for flights in unknown environments,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2019.
  • [6] P. Florence, J. Carter, and R. Tedrake, “Integrated perception and control at high speed: Evaluating collision avoidance maneuvers without maps,” in Workshop on the Algorithmic Foundations of Robotics (WAFR), 2016.
  • [7] B. T. Lopez and J. P. How, “Aggressive 3-d collision avoidance for high-speed navigation,” in IEEE International Conference on Robotics and Automation (ICRA).   IEEE, 2017, pp. 5759–5765.
  • [8] L. Matthies, R. Brockers, Y. Kuwata, and S. Weiss, “Stereo vision-based obstacle avoidance for micro air vehicles using disparity space,” in IEEE International Conference on Robotics and Automation (ICRA).   IEEE, 2014, pp. 3242–3249.
  • [9] J. Zhang, C. Hu, R. G. Chadha, and S. Singh, “Maximum likelihood path planning for fast aerial maneuvers and collision avoidance,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2019.
  • [10] P. R. Florence, J. Carter, J. Ware, and R. Tedrake, “Nanomap: Fast, uncertainty-aware proximity queries with lazy search over local 3d data,” in IEEE International Conference on Robotics and Automation (ICRA).   IEEE, 2018, pp. 7631–7638.
  • [11] M. Ryll, J. Ware, J. Carter, and N. Roy, “Efficient trajectory planning for high speed flight in unknown environments,” in IEEE International Conference on Robotics and Automation (ICRA).   IEEE, 2019, pp. 732–738.
  • [12] M. W. Mueller, M. Hehn, and R. D’Andrea, “A computationally efficient motion primitive for quadrocopter trajectory generation,” IEEE Transactions on Robotics, vol. 31, no. 6, pp. 1294–1310, 2015.
  • [13] N. Bucki and M. W. Mueller, “Rapid collision detection for multicopter trajectories,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2019.
  • [14] S. Karaman and E. Frazzoli, “Sampling-based algorithms for optimal motion planning,” The International Journal of Robotics Research, vol. 30, no. 7, pp. 846–894, 2011.
  • [15] T. Cieslewski, E. Kaufmann, and D. Scaramuzza, “Rapid exploration with multi-rotors: A frontier selection method for high speed flight,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).   IEEE, 2017, pp. 2135–2142.