End-to-End Motion Planning of Quadrotors Using Deep Reinforcement Learning

09/30/2019 ∙ by Efe Camci, et al. ∙ Aarhus Universitet, Nanyang Technological University

In this work, a novel, end-to-end motion planning method is proposed for quadrotor navigation in cluttered environments. The proposed method circumvents the explicit sensing-reconstructing-planning pipeline of conventional navigation algorithms. It uses raw depth images obtained from a front-facing camera and directly generates local motion plans in the form of smooth motion primitives that move a quadrotor to a goal while avoiding obstacles. Promising training and testing results are presented in both AirSim simulations and real flights with a DJI F330 Quadrotor equipped with an Intel RealSense D435. The proposed system in action can be found at https://youtu.be/pYvKhc8wrTM.


I Introduction

Autonomous navigation of quadrotors necessitates sensing [7, 15, 16, 20], planning [2, 13, 10, 1], and control [5, 18, 19, 11]. Current state-of-the-art navigation methods separate these tasks: each is performed by an individual module, which readily attains modularity. Nevertheless, modularity comes at the cost of possible incompatibility, especially in the presence of erroneous modules; an erroneous module in the pipeline can easily cause the other modules to fail as well. Therefore, in this work, the unification of these tasks within a single, reliable module is attempted using deep reinforcement learning (RL) [14, 4, 3, 6].

The proposed method aims to solve the following navigation problem: a quadrotor is deployed for navigation in a partially known environment. A rough path to the goal position is known, without any information about obstacle locations along it. The quadrotor is supposed to generate local motion plans using this information and its online sensory data for safe and quick navigation. To this end, a deep RL agent is proposed which uses raw depth images from a front-facing camera to generate desirable motion primitive sequences. Specifically, a deep Q-network (DQN) with around 75,000 parameters is trained which takes a raw depth image and relative position information as its input and yields a motion primitive selection as its output.

II Approach

In RL, it is critical to design the main elements properly: state, action, and reward. Each of these elements is explained in the following subsections and depicted as part of the proposed RL system in Fig. 1.

Fig. 1: Overview of the proposed RL system.

II-A State

State is the respective situation of the agent in the environment. It is designed as a pair of a depth image and relative position information. The depth image has a size of 32×32 and is obtained by the front-facing camera. It is included in the state definition in order to inform the agent about the surrounding obstacles. The relative position, on the other hand, is a 3×1 vector. It is calculated by subtracting the quadrotor's position vector from the moving setpoint's position vector and transforming the resultant vector into the body frame of the quadrotor. It is included in the state definition in order to inform the agent about its motion along the rough path towards the goal.
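As an illustrative sketch, the body-frame relative position described above can be computed as follows. A yaw-only world-to-body rotation is assumed here for simplicity (the full attitude could be used instead), and the function name is hypothetical:

```python
import numpy as np

def relative_position_body_frame(p_setpoint, p_quad, yaw):
    """3x1 relative position of the moving setpoint, expressed in the
    quadrotor's body frame (yaw-only rotation assumed)."""
    delta_world = np.asarray(p_setpoint, dtype=float) - np.asarray(p_quad, dtype=float)
    c, s = np.cos(yaw), np.sin(yaw)
    # World-to-body rotation about the z axis.
    R_wb = np.array([[  c,   s, 0.0],
                     [ -s,   c, 0.0],
                     [0.0, 0.0, 1.0]])
    return R_wb @ delta_world
```

For example, a setpoint one meter along the world y axis, seen by a quadrotor yawed 90 degrees, lies directly ahead along the body x axis.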

II-B Action

Action is the agent's move at each time step, following a policy to maximize the reward. It is in the form of motion primitives whose basis is formed by Bézier curves. These are parametric curves based on Bernstein polynomials:

B(t) = \sum_{i=0}^{n} b_{i,n}(t) P_i, \quad t \in [0, 1]    (1)

where P_i are the control points and b_{i,n}(t) are the Bernstein polynomials of degree n, which are given as:

b_{i,n}(t) = \binom{n}{i} t^i (1 - t)^{n-i}, \quad i = 0, \ldots, n    (2)

Smooth motion primitives in the position domain are generated by utilizing cubic Bézier curves (n=3) for each finite time step. At each time step, the agent selects an action from the action set, which consists of 18 different primitives (Fig. 2). This set is designed with the consideration that the agent's motion should be biased toward forward movement, since its only exteroceptive sensor is a front-facing depth camera.
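As a concrete sketch of Eqs. (1)-(2), the Bernstein basis and a sampled cubic Bézier primitive can be written as follows (function names are illustrative):

```python
import numpy as np
from math import comb

def bernstein(i, n, t):
    """Bernstein polynomial b_{i,n}(t) of Eq. (2)."""
    return comb(n, i) * t**i * (1 - t)**(n - i)

def cubic_bezier(control_points, num_samples=20):
    """Sample a cubic (n=3) Bezier curve from Eq. (1).
    control_points: 4 points (e.g. 4x3 for a position-domain primitive)."""
    P = np.asarray(control_points, dtype=float)
    n = len(P) - 1  # degree; 3 for a cubic curve
    ts = np.linspace(0.0, 1.0, num_samples)
    return np.array([sum(bernstein(i, n, t) * P[i] for i in range(n + 1)) for t in ts])
```

The curve starts at the first control point and ends at the last one, which makes it straightforward to chain primitives into a smooth position-domain plan.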

Fig. 2: Motion primitives as action set.

II-C Reward

Reward is the signal which assesses the quality of the agent's actions. It is designed based upon the quadrotor's relative motion with respect to the moving setpoint on the initial rough path. It is equal to the change in the Euclidean distance (Δd) between the moving setpoint and the quadrotor position over a time step, discounted by the Euclidean distance between the moving setpoint and the quadrotor position at the end of the current time step (d). The reward function also incorporates a primary logic based on excessive deviations from the initial path and collisions with obstacles. In this vein, higher rewards are obtained for motions close to the initial path, and lower rewards for motions away from it. Collisions result in a drastic punishment, while excessive deviations from the initial path (larger than 5 m) result in a milder punishment. This reward logic is designed so that the agent learns obstacle avoidance and quick navigation towards the goal at the same time in unknown environments. It is mathematically defined as:

r = \begin{cases} r_{col}, & \text{collision} \\ r_{dev}, & \text{deviation from the initial path} > 5\,\mathrm{m} \\ \mathrm{clip}\!\left(\dfrac{\mathrm{sat}(\Delta d;\, \delta_{min}, \delta_{max})}{1 + d},\; r_{min},\; r_{max}\right), & \text{otherwise} \end{cases}    (3)

The variables r_{min} and r_{max} are the reward boundaries, equal to 0 and 0.5, respectively. The terms δ_{min} and δ_{max} are the reward saturation bounds on Δd, equal to -1 m and 1 m, respectively. The term r_{dev} is the excessive deviation punishment, equal to -0.5, and r_{col} is the collision punishment, equal to -1. Most of these parameters are determined by trial and error; they can be modified based upon the case study of interest.
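A minimal sketch of this reward logic follows. The exact form of the "discount by d" is not fully specified here, so division by 1 + d is an assumption; the function name and argument layout are likewise illustrative:

```python
import numpy as np

def reward(delta_d, d, collided, deviation,
           r_min=0.0, r_max=0.5, delta_min=-1.0, delta_max=1.0,
           r_dev=-0.5, r_col=-1.0, max_deviation=5.0):
    """Sketch of the reward in Eq. (3): progress toward the moving setpoint,
    discounted by the remaining distance d (assumed: division by 1 + d)."""
    if collided:
        return r_col                      # drastic punishment
    if deviation > max_deviation:
        return r_dev                      # milder punishment for >5 m deviation
    delta_d = np.clip(delta_d, delta_min, delta_max)  # saturate the progress term
    return float(np.clip(delta_d / (1.0 + d), r_min, r_max))
```

With this shape, closing 0.5 m of distance while sitting on the setpoint yields the maximum reward of 0.5, while moving away yields the lower boundary.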

II-D Algorithmic Details

The idea behind many RL algorithms is to estimate the action-value function Q(s, a). This function can get intricate in the case of real robots due to the extremely high number of state-action pairs. Therefore, function approximators have emerged as a common choice to estimate the action-value function by adding a certain level of generalization [12]. In this work, a DQN is utilized for estimating the complex action-value function, which is given as [12]:

Q^{*}(s_t, a_t) = \mathbb{E}\left[ r_t + \gamma \max_{a_{t+1}} Q^{*}(s_{t+1}, a_{t+1}) \,\middle|\, s_t, a_t \right]    (4)

where s_t is the current state of the agent at time step t, a_t is the action taken by the agent, s_{t+1} is the next state that the agent reaches after taking action a_t, r_t is the reward that the agent receives as a result of its action, and γ is the discount factor which determines the present value of future rewards [17].
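The one-step target inside the expectation of Eq. (4) can be sketched as a hypothetical helper; cutting the bootstrap term at terminal states follows standard DQN practice and is not stated explicitly in the text:

```python
import numpy as np

def dqn_target(reward, next_q_values, gamma=0.99, terminal=False):
    """One-step target from Eq. (4): r_t + gamma * max_a Q(s_{t+1}, a).
    next_q_values: per-action Q estimates for the next state s_{t+1}."""
    if terminal:
        return reward  # no bootstrapping past a terminal state
    return reward + gamma * float(np.max(next_q_values))
```

During training, the Huber loss is then taken between this target and the network's current estimate Q(s_t, a_t).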

The DQN considered in this work is a combination of convolutional and fully connected neural networks which fuses two different inputs in two main lanes. The first lane uses hierarchical layers of convolutional filters to reason about correlations between local spatial portions of the depth image, while the second lane utilizes fully connected layers to incorporate the relative position information. The architecture is depicted in detail in Fig. 3. The first convolutional layer in the first lane takes a 32×32 raw depth image as input and convolves it with 8 filters of size 10×10 with stride 2, followed by a rectified linear unit (ReLU). The second and third convolutional layers use 16 filters of size 6×6 and 32 filters of size 3×3 with stride 1, respectively; they also apply ReLU after convolution. Subsequently, a fully connected layer of size 800 takes the output of the third convolutional layer and feeds it to a following fully connected layer of size 64. In the second lane, the relative position of the moving setpoint with respect to the quadrotor is fed to fully connected layers in three sub-lanes with sizes 16, 8, and 8. (The first sub-lane is designed with a larger layer size than the other two in order to put more emphasis on the forward component of the relative position information, which is possibly more important for successful navigation to the goal considering the forward-biased motion of the quadrotor.) These are then fully connected to a single layer of size 16. Subsequently, the outputs of the first lane (64) and the second lane (16) are concatenated and fed to the final fully connected layers of sizes 64, 32, and 18. All of the fully connected layers in this DQN structure apply ReLU except for the last one, which yields the Q-value estimation for each motion primitive. This network is trained using the Huber loss [8] with the Adam optimizer [9] in PyTorch with the default settings.
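The architecture described above can be sketched in PyTorch as follows. The layer sizes follow the text, while the exact wiring of the three position sub-lanes (assumed here to be one per body-frame axis) is an assumption; the class name is illustrative. The resulting parameter count (roughly 70,000) is consistent with the "around 75,000 parameters" stated earlier:

```python
import torch
import torch.nn as nn

class PlannerDQN(nn.Module):
    """Sketch of the two-lane DQN of Fig. 3."""
    def __init__(self, num_actions=18):
        super().__init__()
        # Lane 1: 32x32 depth image -> conv stack -> flatten (800) -> 64.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=10, stride=2), nn.ReLU(),   # -> 8 x 12 x 12
            nn.Conv2d(8, 16, kernel_size=6, stride=1), nn.ReLU(),   # -> 16 x 7 x 7
            nn.Conv2d(16, 32, kernel_size=3, stride=1), nn.ReLU(),  # -> 32 x 5 x 5 = 800
            nn.Flatten(),
            nn.Linear(800, 64), nn.ReLU(),
        )
        # Lane 2: one sub-lane per relative-position component (sizes 16, 8, 8).
        self.sub_x = nn.Sequential(nn.Linear(1, 16), nn.ReLU())
        self.sub_y = nn.Sequential(nn.Linear(1, 8), nn.ReLU())
        self.sub_z = nn.Sequential(nn.Linear(1, 8), nn.ReLU())
        self.pos_out = nn.Sequential(nn.Linear(32, 16), nn.ReLU())
        # Fusion head: (64 + 16) -> 64 -> 32 -> num_actions.
        self.head = nn.Sequential(
            nn.Linear(80, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, num_actions),  # raw Q-values, no ReLU on the last layer
        )

    def forward(self, depth, rel_pos):
        image_feat = self.conv(depth)
        pos_feat = self.pos_out(torch.cat([
            self.sub_x(rel_pos[:, 0:1]),
            self.sub_y(rel_pos[:, 1:2]),
            self.sub_z(rel_pos[:, 2:3])], dim=1))
        return self.head(torch.cat([image_feat, pos_feat], dim=1))
```

Note that the 10×10 stride-2, 6×6 stride-1, and 3×3 stride-1 convolutions reduce the 32×32 input to exactly 32×5×5 = 800 features, matching the fully connected layer size given in the text.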

Fig. 3: DQN architecture composed of convolutional and fully connected layers.

III Experiments

The experiments for validating the method consist of training and testing stages. Firstly, the agent is trained in seven different environments in AirSim in order to have diversified retrospective knowledge. Then, it is tested in these seven environments as well as three other environments which are unseen during the training stage. Lastly, the agent is also deployed for preliminary real flight tests to demonstrate the applicability of the proposed method for real vehicles.

III-A Training in AirSim

The training environments in AirSim are depicted in Fig. 4. The environments are diversified in terms of their complexity, from an obstacle-free environment (Env. 1) and obstacle-free corridors with different widths (Env. 2, Env. 3) to left-right (Env. 4, Env. 5) and up-down (Env. 6, Env. 7) slalom environments. All of these environments are merged into a single AirSim session and visited by the agent randomly. Merging them together is particularly helpful because it improves the diversification of data samples during learning, enhancing data sample efficiency. Since the main aim in this work is to develop a simple RL system with relatively early convergence for quadrotor navigation, high data sample efficiency is substantially useful. In the same vein, the common need for a large number of interactions in RL is also deliberately avoided, because high-fidelity AirSim simulations run in real time: in AirSim, it may take weeks or months to obtain the same amount of data that accelerated, simple simulation environments could provide in a single day.

The convergence pace in RL depends on many factors such as the randomization of data, the network size, the exploration rate ε, the discount factor γ, the learning rate of the back-propagation algorithm used for the DQN, the architecture of the DQN, etc. For the sake of brevity, only a single set of hyperparameters is considered, which is observed to work well throughout the training trials in this work. The minimum number of episodes required for convergence is found by fixing the network structure and the ε and γ schedules for five training sessions consisting of 100, 200, 500, 1000, and 2000 episodes. In each session, a linear decay of ε from 1.0 to 0.1 and a linear gain of γ from 0.01 to 0.99 are applied over the first 80% of the total episodes, while these parameters are kept constant for the remaining 20%.
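These schedules (assumed here to govern the exploration rate and the discount factor, per the decay and gain ranges above) can be sketched with a single helper; the function name is illustrative:

```python
def linear_schedule(episode, total_episodes, start, end, ramp_fraction=0.8):
    """Linear ramp from start to end over the first 80% of episodes,
    then constant for the remaining 20%."""
    ramp_end = ramp_fraction * total_episodes
    if episode >= ramp_end:
        return end
    return start + (end - start) * (episode / ramp_end)

# Exploration rate: linear decay from 1.0 to 0.1.
# epsilon = linear_schedule(episode, 2000, 1.0, 0.1)
# Discount factor: linear gain from 0.01 to 0.99.
# gamma = linear_schedule(episode, 2000, 0.01, 0.99)
```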

Fig. 4: Training environments: obstacle-free, corridor, and slalom tracks.

The average rewards obtained for each session are depicted in Fig. 5. The average reward increases with the increased maximum number of episodes, since the agent has more chances to interact with the environment and learn more. During the first training with 100 episodes, which takes around 1-2 hours, the agent can barely reach positive average reward values; learning is not visible in this case. In the second training with 200 episodes, a similar trend is observed without a visible increase in average rewards. In the third case with 500 episodes, the reward increase starts to become visible, but values remain relatively low towards the end. In the fourth and fifth cases with 1000 and 2000 episodes, respectively, an increasing reward trend is quite prominent; final average rewards are higher in these cases, with the agent trained for 2000 episodes performing slightly better. The maximum number of episodes is limited to 2000 because training takes around 20-22 hours in this case, beyond which the main motivation of this work (a practical RL system with early convergence) would be lost.

Fig. 5: Average rewards for five different training sessions with maximum numbers of episodes of 100, 200, 500, 1000, 2000.

III-B Testing in AirSim

In this section, the performance of the agent trained for 2000 episodes is evaluated through navigation from the start to the goal in ten different environments in AirSim. The first seven environments are the same as in the training sessions, while the last three environments are previously unseen; they are depicted in Fig. 6. While the former group examines whether the agent is competent at exploring a desirable policy within the training environments, the latter group challenges the agent in terms of generalization to different environments. They consist of obstacles with different shapes and sizes, and corridors with different widths but the same length of 60 m. (Tracks with the same straight length are considered for consistent benchmarking. However, the proposed method can easily be employed in environments with different lengths of straight lines, concatenations of straight lines, or turns. Once intermediate goal points are given to create straight path segments as initial rough paths, it is trivial to utilize the method in these environments, since the underlying RL system is designed with respect to the quadrotor's body frame.)

Env.      Navigation distance (m)   Navigation time (s)   Crash   Total reward
Env. 1    60.00                     57                    N       12.15
          60.00                     57                    N       11.89
          60.00                     57                    N       10.94
          60.00                     58                    N       12.66
          60.00                     57                    N       11.51
Env. 2    60.00                     58                    N       11.93
          60.00                     57                    N       12.49
          60.00                     58                    N       12.43
          60.00                     57                    N       12.28
          60.00                     58                    N       11.83
Env. 3    60.00                     58                    N       10.35
          60.00                     58                    N       11.93
          60.00                     57                    N       12.16
          60.00                     58                    N       11.96
          60.00                     58                    N       12.14
Env. 4    60.00                     79                    N       3.82
          60.00                     71                    N       4.44
          48.85                     120                   N       3.61
          29.96                     42                    Y       1.32
          28.15                     33                    Y       1.25
Env. 5    60.00                     78                    N       3.41
          60.00                     75                    N       3.58
          60.00                     71                    N       3.53
          21.35                     28                    Y       0.51
          48.57                     120                   N       3.39
Env. 6    60.00                     61                    N       4.85
          44.44                     120                   N       4.35
          60.00                     70                    N       4.46
          60.00                     61                    N       4.74
          60.00                     61                    N       4.81
Env. 7    50.42                     120                   N       3.26
          41.51                     58                    Y       0.53
          15.90                     77                    Y       0.14
          60.00                     95                    N       3.27
          47.33                     120                   N       3.04
Env. 8    60.00                     59                    N       7.69
          60.00                     59                    N       7.61
          45.47                     44                    Y       5.82
          45.39                     45                    Y       4.52
          60.00                     58                    N       9.01
Env. 9    60.00                     57                    N       9.24
          60.00                     58                    N       9.31
          60.00                     57                    N       8.78
          60.00                     57                    N       8.24
          60.00                     58                    N       8.60
Env. 10   60.00                     61                    N       5.35
          60.00                     58                    N       5.29
          60.00                     61                    N       4.95
          60.00                     60                    N       5.35
          60.00                     60                    N       5.88
TABLE I: Evaluative test results in AirSim.
Fig. 6: Test environments with obstacles of different shapes and sizes.

The agent is tested in each environment five times to obtain generalized evaluation results. Table I reports the values for reward, navigation distance, navigation time, and crash rate. Although the initial rough path in these environments (a straight path of 60 m from the start to the goal) can be navigated within 60 s in the obstacle-free configuration, the maximum number of time steps is set to 120 during the testing stage in order to account for the desired deviations from the path for obstacle avoidance.

In the first three environments, the agent is able to learn a policy which is close to optimal. Successful navigation from the start to the goal is observed in every trial in these relatively simple environments. The navigation time performance is also desirable, staying close to 60 s at the 1 m/s speed setting; had the quadrotor's standard position controllers been used without any local motion planning, following the path from start to goal would have taken a similar amount of time. The crash rate is 0 in these environments, even in the narrow corridor of Env. 3. Therefore, the agent's performance is favorable for the first group.

In the next four environments, the agent's performance is slightly degraded by the added complexity. Still, the agent successfully reaches the goal position in 67% of the trials. In terms of safety, the agent completes 73% of the trials without a crash in these dense environments. Moreover, there is not a single head-on crash: all crashes occur when the quadrotor moves sideways or up-down without going forward, and obstacles are out of sight of the front-facing camera. This limitation of the proposed method results from the lack of a history of depth images (which can be a part of future work). On the other hand, there are some trials, such as the third one in Env. 4, in which the agent cannot reach the goal position but does not take decisions that might yield a crash either; it simply keeps its position around the same obstacle-free portion of the environment in order to maintain safety. Overall, the agent's performance is promising considering the density of these environments.

In the last three environments, the agent yields better performance compared to the former group, successfully reaching the goal position in 87% of the trials. The remaining 13% corresponds to two trials in Env. 8, in which a large sphere blocks the narrow corridor towards the end; the agent cannot avoid it fully, and the landing gear touches the sphere in these two trials. Apart from these two incidents, the agent is able to avoid obstacles of different shapes and sizes successfully and reach the goal position within a decent navigation duration. Therefore, the agent's performance can be considered desirable in terms of generalizing to previously unseen environments.

III-C Testing in Real Flights

For the real flight testing, the agent trained for 2000 episodes is deployed directly on a DJI F330 Quadrotor equipped with a PX4 FMU flight controller, an Nvidia Jetson TX2 companion computer, and an Intel RealSense D435 exteroceptive sensor. All planning code runs on the TX2, utilizing its GPU for the DQN in PyTorch with CUDA enabled. The D435 provides live depth images to the DQN while the PX4 is responsible for low-level control. The odometry information is obtained using a motion capture system.

Two different environments are created for the real flights: one obstacle-free and the other with two large rectangular obstacles (Fig. 7). Since the lab space is limited to a roughly 3 m by 3 m area, an initial path of 3.5 m is set diagonally. In order to avoid dangerous movements of the quadrotor, the approximate speed of the vehicle is set to 0.5 m/s. Accordingly, the length of the motion primitives is decreased to a maximum movement of 0.5 m in each axis. The maximum number of time steps is set to 30 in order to account for both the desired deviations from the initial path and the decreased length of the primitives. The rest of the parameters are exactly the same as in AirSim. Table II summarizes the real flight test results.

In the first environment without obstacles, the agent's performance is desirable: there is not a single crash, and the agent reaches the goal point in all five trials, as can be seen in Table II. The navigation time is relatively high compared to the results in AirSim, though. This is due to motion primitive selections which cause deviations from the initial path, generally along the z axis, and to the control ability of the quadrotor in real flights. Regarding the former, since the agent receives depth images from the D435, which yields relatively noisy data compared to AirSim, the agent's task is more difficult in the real flights; it must conduct sufficient reasoning by implicitly handling these relatively unclear images. As regards the latter, the PX4 position controller with default parameters for the DJI F330 Quadrotor is used to execute the motion primitive sequences governed by the DQN. Compared to the ideal conditions in AirSim, where perfect, delay-free odometry and a precise controller for executing high-level commands are available without any motor-ESC-propeller efficiency issues, the control ability in real flights is obviously lower. Still, the agent yields adequate end-to-end reasoning and completes the task with 100% success in the first environment.

In the second environment with two large rectangular obstacles, the agent's performance is relatively downgraded in terms of navigation to the goal: it cannot reach the goal position in any of the five trials. The furthest navigation point on the initial rough path is recorded as 1.55 m in the third trial, as can be seen in Table II. On the other hand, the agent yields safe flights in 80% of the trials in this environment. Only in the second trial does it crash into the first obstacle; again, it is not a head-on crash, as the propeller merely touches the obstacle from the side when the obstacle is out of sight of the D435. In the other four trials, the agent avoids the first obstacle but then maintains its location in obstacle-free portions of the environment. Therefore, the agent's response in this environment can be considered conservative.

Env.      Navigation distance (m)   Navigation time (s)   Crash   Total reward
Env. 1    3.50                      21                    N       0.44
          3.50                      18                    N       0.46
          3.50                      21                    N       0.44
          3.50                      19                    N       0.43
          3.50                      22                    N       0.43
Env. 2    0.65                      30                    N       0.34
          1.03                      15                    Y       -0.70
          1.55                      30                    N       0.32
          1.38                      30                    N       0.34
          0.80                      30                    N       0.37
TABLE II: Evaluative test results in real flights.
Fig. 7: Real flight environments. Left: environment without obstacles. Right: environment with two large rectangular obstacles.

IV Discussion and Future Work

As can be seen from the results, the proposed method demonstrates sufficient and promising performance for quadrotor navigation in partially known environments. The agent yields fairly safe flights, with a collision-free flight percentage of 86% over the trials in AirSim; this number is 90% for the real flights in relatively simpler environments. It reaches the goal point successfully in 88% of these safe flights in AirSim, while this percentage is 56% for the real flights. Besides, the method generalizes to previously unseen environments, which is particularly important to prove possible employability in a broad class of environments.

Overall, this work serves as a proof of concept for employing the proposed end-to-end RL-based motion planning system for quadrotor navigation in dense environments. It only includes specific case studies for brevity. Future work will include a comprehensive benchmarking study by considering a wider set of hyperparameters, different network structures, and more extensive experimental tests. Although the preliminary real flight test results demonstrate relatively safe navigation, the ultimate aim is to enhance the proposed method’s performance on navigation to the goal, and present a simple and useful tool for the robotics community.

Acknowledgment

We would like to acknowledge the Air Lab members Rogerio Bonatti, Wenshan Wang, and Sebastian Scherer at Carnegie Mellon University, PA, USA for substantially helpful discussions. This work is financially supported by the Singapore Ministry of Education (RG185/17).

References

  • [1] E. Camci and E. Kayacan (2019) Learning motion primitives for planning swift maneuvers of quadrotor. Autonomous Robots 43 (7), pp. 1733–1745. Cited by: §I.
  • [2] E. Camci and E. Kayacan (2019) Planning swift maneuvers of quadcopter using motion primitives explored by reinforcement learning. In 2019 American Control Conference (ACC), pp. 279–285. Cited by: §I.
  • [3] C. Do, C. Gordillo, and W. Burgard (2018) Learning to pour using deep deterministic policy gradients. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3074–3079. Cited by: §I.
  • [4] M. Everett, Y. F. Chen, and J. P. How (2018) Motion planning among dynamic, decision-making agents with deep reinforcement learning. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3052–3059. Cited by: §I.
  • [5] M. Greeff and A. P. Schoellig (2018) Flatness-based model predictive control for quadrotor trajectory tracking. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 6740–6745. Cited by: §I.
  • [6] M. Gschwindt, E. Camci, R. Bonatti, W. Wang, E. Kayacan, and S. Scherer (2019) Can a robot become a movie director? learning artistic principles for aerial cinematography. In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Cited by: §I.
  • [7] A. S. Huang, A. Bachrach, P. Henry, M. Krainin, D. Maturana, D. Fox, and N. Roy (2017) Visual odometry and mapping for autonomous flight using an rgb-d camera. In Robotics Research, pp. 235–252. Cited by: §I.
  • [8] P. J. Huber et al. (1964) Robust estimation of a location parameter. The Annals of Mathematical Statistics 35 (1), pp. 73–101. Cited by: §II-D.
  • [9] D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §II-D.
  • [10] S. Lai, M. Lan, and B. M. Chen (2018) Optimal constrained trajectory generation for quadrotors through smoothing splines. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4743–4750. Cited by: §I.
  • [11] M. Mehndiratta, E. Camci, and E. Kayacan (2018) Automated tuning of nonlinear model predictive controller by reinforcement learning. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3016–3021. Cited by: §I.
  • [12] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. (2015) Human-level control through deep reinforcement learning. Nature 518 (7540), pp. 529–533. Cited by: §II-D.
  • [13] B. Morrell, R. Thakker, G. Merewether, R. Reid, M. Rigter, T. Tzanetos, and G. Chamitoff (2018) Comparison of trajectory optimization algorithms for high-speed quadrotor flight near obstacles. IEEE Robotics and Automation Letters 3 (4), pp. 4399–4406. Cited by: §I.
  • [14] C. Sampedro, A. Rodriguez-Ramos, I. Gil, L. Mejias, and P. Campoy (2018) Image-based visual servoing controller for multirotor aerial robots using deep reinforcement learning. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 979–986. Cited by: §I.
  • [15] J. Stevens and R. Mahony (2018) Vision based forward sensitive reactive control for a quadrotor vtol. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 5232–5238. Cited by: §I.
  • [16] K. Sun, K. Mohta, B. Pfrommer, M. Watterson, S. Liu, Y. Mulgaonkar, C. J. Taylor, and V. Kumar (2018) Robust stereo visual inertial odometry for fast autonomous flight. IEEE Robotics and Automation Letters 3 (2), pp. 965–972. Cited by: §I.
  • [17] R. S. Sutton and A. G. Barto (1998) Reinforcement learning: an introduction. Vol. 1, MIT press Cambridge. Cited by: §II-D.
  • [18] E. Tal and S. Karaman (2018) Accurate tracking of aggressive quadrotor trajectories using incremental nonlinear dynamic inversion and differential flatness. In 2018 IEEE Conference on Decision and Control (CDC), pp. 4282–4288. Cited by: §I.
  • [19] G. Tang, W. Sun, and K. Hauser (2018) Learning trajectories for real-time optimal control of quadrotors. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3620–3625. Cited by: §I.
  • [20] S. Yang and S. Scherer (2019) CubeSLAM: monocular 3-d object slam. IEEE Transactions on Robotics. Cited by: §I.