Robot Navigation with Map-Based Deep Reinforcement Learning

02/11/2020, by Guangda Chen et al., USTC

This paper proposes an end-to-end deep reinforcement learning approach for mobile robot navigation with dynamic obstacle avoidance. Using experience collected in a simulation environment, a convolutional neural network (CNN) is trained to predict proper steering actions of a robot from its egocentric local occupancy maps, which accommodate various sensors and fusion algorithms. The trained neural network is then transferred and executed on a real-world mobile robot to guide its local path planning. The new approach is evaluated both qualitatively and quantitatively in simulation and real-world robot experiments. The results show that the map-based end-to-end navigation model is easy to deploy on a robotic platform, is robust to sensor noise, and outperforms other existing DRL-based models on many indicators.


I Introduction

One of the major challenges for robot navigation is to develop a safe and robust collision avoidance policy for navigating from the starting position to the desired goal without running into obstacles and pedestrians in unknown cluttered environments. Although numerous methods have been proposed [12], conventional methods are often built upon a set of assumptions that are unlikely to be satisfied in practice [26], and may impose an intensive computational demand [27]. In addition, conventional algorithms normally involve a number of parameters that need to be tuned manually [14] rather than learned automatically from past experience [5]. It is difficult for these approaches to generalize well to unanticipated scenarios.

Recently, several supervised and self-supervised deep learning approaches have been applied to robot navigation. Giusti et al. [4] used a deep neural network to classify images and determine which action keeps the quadrotor on the trail. Tai et al. [17] showed the effectiveness of a hierarchical structure that fuses a convolutional neural network (CNN) with a decision process, yielding a highly compact network that takes raw depth images as input and generates control commands as output. Pfeiffer et al. [13] presented a model that is able to learn the complex mapping from raw 2D laser range findings and a target position to the required steering commands for the robot. However, there are some limitations that prevent these approaches from being widely applied in real robotic settings. For instance, a massive manually labeled dataset is required to train the supervised learning approaches. Although this can be mitigated to an extent by resorting to self-supervised learning methods, their performance is largely bounded by the strategy used to generate the training labels.

On the other hand, deep reinforcement learning (DRL) methods have achieved remarkable success in many challenging tasks [16, 8, 21]. Different from previous supervised learning methods, DRL-based approaches learn from a large number of trials and the corresponding rewards instead of from labeled data. In order to learn a sophisticated control policy with reinforcement learning, robots need to interact with the environment for a long period to accumulate knowledge about the consequences of different actions. Collecting such interaction data in the real world is expensive, time consuming, and sometimes infeasible due to safety issues [24]. For instance, Kahn et al. [5] proposed a generalized computation graph that subsumes value-based model-free methods and model-based methods, and then instantiated this graph to form a navigation model that learns from raw images and is sample efficient. However, it takes hours of destructive self-supervised training to navigate only dozens of meters without collision through an indoor environment. Because of the excessive number of trials required to learn a good policy, training in a simulator is more practical than training from real-world experience; the close correspondence between the simulator and the real world can then be exploited to transfer the learned policy.

According to the type of input data, existing reinforcement learning-based robot motion planning methods can be roughly divided into two categories: agent-level inputs and sensor-level inputs, and these input types differ in how well they transfer to the real world. As representatives of agent-level methods, Chen et al. [3] trained an agent-level collision avoidance policy using DRL, which maps an agent’s own state and its neighbors’ states to a collision-free action. However, it demands perfect sensing. This complex pipeline not only requires expensive online computation but also makes the whole system less robust to perception uncertainty.

As for sensor-level inputs, the types of sensor data used in DRL-based navigation mainly include 2D laser range inputs, depth images and color images. The network proposed in [18] outputs control commands based on ten-dimensional laser range inputs and is trained using an asynchronous DRL algorithm. Similarly, the models introduced in [10] and [24] also derive steering commands from laser range sensors. 2D laser-based methods are competitive in terms of transferability to the real world because of the smaller discrepancies between their simulated and real domains. However, 2D sensing data are not enough to describe complex 3D scenarios. In contrast, vision sensors can provide 3D sensing information, but RGB images suffer from the significant deviation between real-world situations and the simulation environments used during training, which leads to quite limited generalization across situations. Compared to RGB images, depth inputs in simulation exhibit much better visual fidelity due to their textureless nature and, as a result, greatly alleviate the burden of transferring the trained model to real deployments [23]. Based on depth images, Zhang et al. [25] proposed to use successor features to achieve efficient knowledge transfer across tasks in depth-based navigation. Currently, all existing sensor-level works rely on specific sensor types and configurations, whereas in complex environments most robots are equipped with several different sensors to navigate autonomously and safely [2].

In this paper, we propose an end-to-end model-free deep reinforcement learning algorithm to improve the performance of autonomous decision making in complex environments, which directly maps local probabilistic costmaps, together with the target position and robot velocity, to an agent’s steering commands. Compared to previous work on DRL-based obstacle avoidance, our motion planner uses probabilistic costmaps to represent the environment and the target position, which enables the learned collision avoidance policy to handle different types of sensor input efficiently, such as multi-sensor information from 2D/3D range finders or RGB-D cameras [9]. Moreover, our trained CNN-based policy is easily transferred to and executed on a real-world mobile robot to guide its local path planning, and it is robust to sensor noise. We evaluate our DRL agents both in simulation and on a robot, qualitatively and quantitatively. Our results show improvements in multiple indicators over existing DRL-based obstacle avoidance policies.

Our main contributions are summarized as follows:

  • Formulate obstacle avoidance for mobile robots as a DRL problem based on a generated costmap, which can handle multi-sensor fusion and is robust to sensor noise.

  • Integrate a curriculum learning technique to enhance the performance of dueling double DQN with prioritized experience replay.

  • Conduct a variety of real-world experiments that reveal the high adaptability of our model when transferred to different sensor configurations and environments.

The rest of this paper is organized as follows. The structure of the DRL-based navigation system is presented in Section II. The deep reinforcement learning algorithm for obstacle avoidance based on egocentric local occupancy maps is described in Section III. Section IV presents experimental results, followed by conclusions in Section V.

II System Structure

[Fig. 1 block diagram: Multi-sensor Data, SLAM, Path Planner, Costmap Generator, DRL Planner and Base Controller modules, connected through the Robot Velocity, Local Goal and Costmap signals.]

Fig. 1: Block diagram of the DRL-based navigation system for autonomous robots.

The proposed DRL-based mobile robot navigation system consists of six modules. As shown in Fig. 1, the simultaneous localization and mapping (SLAM) module establishes an environment map based on sensor data and simultaneously estimates the position and velocity of the robot in the map. When a target position is received, the path planner module generates a path, or a series of local goal points, from the current position to the target position. In order to cope with dynamic and complex environments, a safe and robust collision avoidance policy in unknown cluttered environments is required. In addition to the local goal points from the path planning module and the robot velocity provided by the positioning module, our local planner also needs information about the surrounding environment, namely the egocentric occupancy map produced by the costmap generator module from multi-sensor data. In summary, our DRL-based local planner takes the velocity information generated by the SLAM module, the local goal generated by the global path planner and the cost maps from the generator, which can fuse multi-sensor information, and outputs the linear and angular velocity of the robot. Finally, the output velocity command is executed by the base controller module, which depends on the specific kinematics of the robot.
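To make the dataflow concrete, the following minimal Python sketch wires the modules together in the order described above; the class and method names (NavigationPipeline, slam.update, path_planner.next_local_goal, and so on) are illustrative assumptions rather than the authors' actual interfaces.

```python
class NavigationPipeline:
    """Per-cycle dataflow of the six-module navigation system (illustrative sketch)."""

    def __init__(self, slam, path_planner, costmap_generator, drl_planner, base_controller):
        self.slam = slam
        self.path_planner = path_planner
        self.costmap_generator = costmap_generator
        self.drl_planner = drl_planner
        self.base_controller = base_controller

    def step(self, sensor_data, goal_pose):
        # SLAM: localize the robot and estimate its velocity from sensor data.
        pose, velocity = self.slam.update(sensor_data)
        # Global path planner: produce the next local goal along the path to goal_pose.
        local_goal = self.path_planner.next_local_goal(pose, goal_pose)
        # Costmap generator: fuse multi-sensor data into an egocentric local map.
        local_map = self.costmap_generator.build(sensor_data, pose)
        # DRL planner: map (local map, local goal, velocity) to a velocity command.
        v, w = self.drl_planner.act(local_map, local_goal, velocity)
        # Base controller: convert (v, w) into commands for the specific robot kinematics.
        self.base_controller.execute(v, w)
        return v, w
```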

III Approach

We begin this section by introducing the problem formulation of the local obstacle avoidance. Next, we describe the key ingredients of our reinforcement learning algorithm and the details about the network architecture of the collision avoidance policy.

III-A Problem Formulation

At each timestamp $t$, given a frame of sensing data $s_t$, a local goal position $g_t$ in the robot coordinate system and the robot linear velocity $v_t$ and angular velocity $\omega_t$, the proposed local obstacle avoidance policy $\pi_\theta$ provides an action command $a_t$ as follows:

$M_t = f_c(s_t)$   (1)
$a_t = \pi_\theta(M_t, g_t, v_t, \omega_t)$   (2)

where $M_t$ is a local cost map describing the obstacle avoidance task, $f_c$ denotes the costmap generation step, and $\theta$ are the model parameters. Specifically, the cost map is constructed as an aggregate of the robot configuration and the obstacle penalty, which will be explained below.

Hence, the robot collision avoidance problem can be formulated as a sequential decision making problem. The sequential decisions, consisting of observations and actions (velocities) $(o_t, a_t)$ for $t \in [0, t_g]$, can be considered as a trajectory from the robot's start position $p_0$ to its desired goal $p_g$, where $t_g$ is the traveled time. Our goal is to minimize the expected arrival time while ensuring that the robot does not collide with obstacles, which is defined as:

$\operatorname*{argmin}_{\pi_\theta} \; \mathbb{E}\left[t_g \mid \pi_\theta\right] \quad \text{s.t.} \quad \|p_t - p_{\text{obs}}\| > R \;\; \forall t \in [0, t_g]$   (3)

where $p_{\text{obs}}$ is the position of an obstacle and $R$ is the robot radius.

III-B Dueling DDQN with prioritized experience replay

Markov Decision Processes (MDPs) provide a mathematical framework to model stochastic planning and decision-making problems under uncertainty. An MDP is a tuple $(S, A, P, R, \gamma)$, where $S$ indicates the state space, $A$ is the action space, $P(s' \mid s, a)$ is the transition function describing the probability distribution over next states when an action $a$ is taken in the current state $s$, $R(s, a)$ is the reward function giving the immediate state-action reward signal, and $\gamma \in [0, 1)$ is the discount factor. In an MDP, a policy $\pi(a \mid s)$ specifies the probability of mapping state $s$ to action $a$. The quality of a policy can be assessed by the action-value function (Q-value) defined as:

$Q^{\pi}(s, a) = \mathbb{E}\left[\sum_{k=0}^{\infty} \gamma^{k} r_{t+k} \,\middle|\, s_t = s, a_t = a, \pi\right]$   (4)

Therefore, the action-value function is the expectation of the discounted sum of rewards, given that action $a$ is taken in state $s$ and policy $\pi$ is executed afterwards. The objective of the agent is to maximize the expected cumulative future reward, which can be achieved with the Q-learning algorithm, approximating the optimal action-value function iteratively using the Bellman equation:

$Q^{*}(s, a) = \mathbb{E}\left[r + \gamma \max_{a'} Q^{*}(s', a') \,\middle|\, s, a\right]$   (5)
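As a point of reference before moving to the deep variant, the Bellman backup in Eq. (5) can be illustrated with a minimal tabular Q-learning update; the dictionary-based table and the step size alpha are illustrative assumptions, not part of the method described here.

```python
from collections import defaultdict

# Q[s][a] -> current estimate of the action value Q(s, a)
Q = defaultdict(lambda: defaultdict(float))

def q_learning_update(s, a, r, s_next, done, alpha=0.1, gamma=0.99):
    """One tabular backup toward r + gamma * max_a' Q(s', a'), i.e. Eq. (5)."""
    best_next = 0.0 if done or not Q[s_next] else max(Q[s_next].values())
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])
```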

Combined with deep neural networks, DQN enables reinforcement learning to cope with complex high-dimensional problems. Generally, DQN maintains two deep neural networks: an online network with parameters $\theta$ and a separate target network with parameters $\theta^{-}$. The parameters of the online network are updated constantly by minimizing the loss function $L(\theta) = \mathbb{E}\left[(y_t - Q(s_t, a_t; \theta))^{2}\right]$, where the target $y_t$ can be calculated as follows:

$y_t = r_t + \gamma \max_{a'} Q(s_{t+1}, a'; \theta^{-})$   (6)

The parameters of the target network are kept fixed for generating the Temporal-Difference (TD) targets and are synchronized regularly with those of the online network.

Conventional Q-learning is affected by an overestimation bias, due to the maximization step in Equation (6), which can harm the learning process. Double Q-learning [20] addresses this overestimation by decoupling, in the maximization performed for the bootstrap target, the selection of the action from its evaluation. Therefore, if the episode does not end at this step, the target $y_t$ in the above formula is rewritten as follows:

$y_t = r_t + \gamma \, Q\left(s_{t+1}, \operatorname*{argmax}_{a'} Q(s_{t+1}, a'; \theta); \theta^{-}\right)$   (7)
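A minimal NumPy sketch of this target computation is shown below, assuming batched Q-value arrays from the online and target networks; the variable names are illustrative.

```python
import numpy as np

def double_dqn_targets(rewards, next_q_online, next_q_target, dones, gamma=0.99):
    """TD targets y_t following Eq. (6)-(7): the online network selects the greedy
    next action, the target network evaluates it, and terminal transitions drop
    the bootstrap term."""
    # next_q_online, next_q_target: float arrays of shape [batch, num_actions]
    best_actions = np.argmax(next_q_online, axis=1)                    # selection (online net)
    evaluated = next_q_target[np.arange(len(rewards)), best_actions]   # evaluation (target net)
    return rewards + gamma * evaluated * (1.0 - dones)
```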

In this work, dueling networks [22] and prioritized replay [15] are also deployed, for more reliable estimation of Q-values and more sample-efficient use of the replay buffer, respectively. In the following, we describe the details of the observation space, the action space, the reward function and the network architecture.
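Before those details, a minimal sketch of a proportional prioritized replay buffer in the spirit of [15] is given below; the flat-list storage (instead of a sum-tree) and the hyperparameter values are simplifying assumptions.

```python
import numpy as np

class PrioritizedReplay:
    """Proportional prioritized experience replay (simplified sketch)."""

    def __init__(self, capacity, alpha=0.6, beta=0.4, eps=1e-6):
        self.capacity, self.alpha, self.beta, self.eps = capacity, alpha, beta, eps
        self.data, self.priorities = [], []

    def add(self, transition):
        p = max(self.priorities, default=1.0)          # new samples get the current max priority
        if len(self.data) >= self.capacity:
            self.data.pop(0)
            self.priorities.pop(0)
        self.data.append(transition)
        self.priorities.append(p)

    def sample(self, batch_size):
        probs = np.array(self.priorities) ** self.alpha
        probs /= probs.sum()
        idx = np.random.choice(len(self.data), batch_size, p=probs)
        weights = (len(self.data) * probs[idx]) ** (-self.beta)   # importance-sampling weights
        weights /= weights.max()
        return idx, [self.data[i] for i in idx], weights

    def update(self, idx, td_errors):
        # Priorities are refreshed with the absolute TD errors of the sampled batch.
        for i, e in zip(idx, td_errors):
            self.priorities[i] = abs(e) + self.eps
```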

III-B1 Observation space

Fig. 2: Gazebo training environments (left) and corresponding occupancy map displayed by Rviz (right).

As mentioned in Section III-A, the observation consists of the generated costmaps $M_t$, the relative goal position $g_t$ and the robot's current velocity $(v_t, \omega_t)$. Specifically, $M_t$ represents the cost map images generated from a 180-degree laser scanner or other sensors. The relative goal position $g_t$ is a 2D vector representing the goal coordinates with respect to the robot's current position. The velocity observation $(v_t, \omega_t)$ includes the current translational and rotational velocity of the differential-drive robot.

We use layered costmaps [11] to represent the environmental information perceived by multiple sensors. Then, through the map generator module, we obtain the state maps by drawing the robot configuration (shape) into the layered costmaps. Fig. 2 shows an example of the generated occupancy maps.
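As a rough illustration of what such a map contains for a single 2D scan (ignoring the layered-costmap machinery, inflation and the drawn robot footprint), a minimal rasterization might look like the following sketch; the grid size, resolution and range limit are assumptions.

```python
import numpy as np

def egocentric_occupancy_map(ranges, angle_min, angle_increment,
                             size=60, resolution=0.1, max_range=6.0):
    """Rasterize a 2D laser scan into an egocentric occupancy grid centred on the robot."""
    grid = np.zeros((size, size), dtype=np.uint8)           # 0 = free/unknown
    cx = cy = size // 2                                      # robot sits at the centre cell
    for i, r in enumerate(ranges):
        if not np.isfinite(r) or r <= 0.0 or r > max_range:
            continue
        theta = angle_min + i * angle_increment
        x, y = r * np.cos(theta), r * np.sin(theta)          # obstacle point in the robot frame
        col, row = cx + int(x / resolution), cy + int(y / resolution)
        if 0 <= row < size and 0 <= col < size:
            grid[row, col] = 255                             # mark the obstacle cell
    return grid
```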

III-B2 Action space

The action space is a set of permissible velocities in discrete space. The action of the differential-drive robot includes the translational and rotational velocity, i.e. $a_t = (v_t, \omega_t)$. In this work, considering the real robot's kinematics and real-world applications, we restrict the translational and rotational velocities to bounded discrete ranges. Note that moving backwards (i.e. $v_t < 0$) is not allowed, since the laser range finder cannot cover the area behind the robot.
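Since the exact velocity limits and discretization grid are not restated here, the sketch below only illustrates one way to enumerate 28 discrete (v, ω) actions consistent with the constraints above; the specific values and the 4 x 7 grid are assumptions.

```python
import numpy as np

# Hypothetical discretization: 4 translational x 7 rotational velocities = 28 actions.
LINEAR_VELS = np.linspace(0.0, 0.6, 4)        # v >= 0: no backward motion
ANGULAR_VELS = np.linspace(-0.9, 0.9, 7)      # symmetric turning range (assumed)
ACTIONS = [(v, w) for v in LINEAR_VELS for w in ANGULAR_VELS]
assert len(ACTIONS) == 28
```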

III-B3 Reward

[Fig. 3 network diagram: local map, goal and velocity inputs; convolutional layers with 32, 64 and 64 filters (ReLU); a 64-dimensional goal/velocity embedding; and further 64-filter convolutional layers feeding the dueling output.]

Fig. 3: The architecture of our CNN-based dueling DQN network. This network takes three local maps and a vector with local goal and robot velocity as input and outputs the Q values of 28 discrete actions.

The reward function in reinforcement learning implicitly specifies what the agent is encouraged to do. Our objective is to avoid collisions during navigation and to minimize the mean arrival time of the robot. A reward function is designed to guide the robot towards this objective:

$r_t = r_g^t + r_c^t + r_s^t$   (8)

The reward at time step $t$ is a weighted sum of three terms: $r_g^t$, $r_c^t$ and $r_s^t$. In particular, the robot is awarded $r_{\text{arrival}}$ for reaching its goal:

$r_g^t = \begin{cases} r_{\text{arrival}} & \text{if } \|p_t - p_g\| < d_g \\ \omega_g \left( \|p_{t-1} - p_g\| - \|p_t - p_g\| \right) & \text{otherwise} \end{cases}$   (9)

When the robot collides with an obstacle in the environment, it is penalized by $r_{\text{collision}}$:

$r_c^t = \begin{cases} r_{\text{collision}} & \text{if the robot collides} \\ 0 & \text{otherwise} \end{cases}$   (10)

We also give the robot a small fixed penalty $r_s^t$ at each step. We set $r_{\text{arrival}} = 500$, $\omega_g = 10$ and $r_{\text{collision}} = -500$ in the training procedure.
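A small Python sketch of this reward, under the reading above, is given below; the progress-shaping weight, the per-step penalty value and the goal radius are assumptions, while the arrival and collision constants follow the text.

```python
import numpy as np

def step_reward(p_prev, p_curr, p_goal, collided,
                r_arrival=500.0, w_goal=10.0, r_collision=-500.0,
                r_step=-1.0, goal_radius=0.3):
    """Reward in the spirit of Eq. (8)-(10): arrival bonus, collision penalty,
    progress shaping toward the goal, plus a small fixed per-step penalty
    (shaping weight, step penalty and goal radius are assumed values)."""
    if collided:
        return r_collision
    d_prev = np.linalg.norm(np.asarray(p_prev) - np.asarray(p_goal))
    d_curr = np.linalg.norm(np.asarray(p_curr) - np.asarray(p_goal))
    if d_curr < goal_radius:
        return r_arrival
    return w_goal * (d_prev - d_curr) + r_step
```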

III-B4 Network Architecture

We define a CNN-based deep Q-network that computes the action-value function for each action. The input of the network consists of three local grey-scale maps M and a four-dimensional vector containing the local goal g and the robot velocity v. The output of the network is the Q-value for each action. The architecture of our deep Q-value network is shown in Fig. 3. The input costmaps are fed into a convolution with 32 filters and stride 4, followed by a convolution with 64 filters and stride 2 and a convolution with 64 filters and stride 1. The local goal and robot velocity form a four-dimensional vector, which is processed by one fully connected layer and then added pointwise to each location of the convolutional response map by tiling its output over the spatial dimensions. The result is then processed by further convolutions and fully connected layers with 512 units, and finally fed into the dueling network architecture, after which the network outputs the Q values of 28 discrete actions.
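A hedged tf.keras sketch of this architecture follows. The kernel sizes, the map resolution, the exact tiling of the goal/velocity embedding, and the depth of the later convolutional and fully connected stages are assumptions where the description above leaves them open.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_dueling_q_network(map_size=60, num_maps=3, num_actions=28):
    maps_in = layers.Input(shape=(map_size, map_size, num_maps))  # three stacked local maps
    vec_in = layers.Input(shape=(4,))                             # local goal (2) + velocity (2)

    # Convolutional feature extractor (filter counts follow Fig. 3; kernel sizes assumed).
    x = layers.Conv2D(32, 8, strides=4, activation="relu")(maps_in)
    x = layers.Conv2D(64, 4, strides=2, activation="relu")(x)
    x = layers.Conv2D(64, 3, strides=1, activation="relu")(x)

    # Embed the goal/velocity vector and add it pointwise at every spatial location.
    v = layers.Dense(64, activation="relu")(vec_in)
    v = layers.Reshape((1, 1, 64))(v)
    x = layers.Lambda(lambda t: t[0] + t[1])([x, v])              # broadcast over H and W

    x = layers.Conv2D(64, 3, strides=1, padding="same", activation="relu")(x)
    x = layers.Flatten()(x)
    x = layers.Dense(512, activation="relu")(x)

    # Dueling heads: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).
    value = layers.Dense(1)(x)
    adv = layers.Dense(num_actions)(x)
    q_values = layers.Lambda(
        lambda t: t[0] + t[1] - tf.reduce_mean(t[1], axis=1, keepdims=True)
    )([value, adv])

    return tf.keras.Model(inputs=[maps_in, vec_in], outputs=q_values)
```

During training, an online and a target copy of such a model would be maintained, with the online copy updated toward the double-DQN targets of Eq. (7).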

III-C Curriculum Learning

Curriculum learning [1] is a learning strategy in machine learning that starts from easy instances and then gradually handles harder ones. In this work, we use the Gazebo simulator [7] to build an environment with multiple obstacles. As the training progresses, we gradually increase the number of obstacles in the environment, and the distance from the starting point to the target point also increases gradually. This trains our policy from easy situations to difficult ones. At the same time, the position of each obstacle and the start and end points of the robot are randomized automatically in every training episode. One training scene is shown in Fig. 2.
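An illustrative curriculum schedule in this spirit could look like the sketch below; the obstacle counts, distances, arena size and ramp length are assumptions, not the values used in training.

```python
import math
import random

def curriculum_stage(episode, max_obstacles=20, max_goal_dist=10.0, ramp_episodes=2000):
    """Linearly increase the obstacle count and start-to-goal distance with progress."""
    progress = min(1.0, episode / ramp_episodes)
    num_obstacles = 2 + int(progress * (max_obstacles - 2))
    goal_distance = 2.0 + progress * (max_goal_dist - 2.0)
    return num_obstacles, goal_distance

def randomize_episode(episode, arena_half_size=5.0):
    """Randomize obstacle positions and the robot's start/goal for one episode."""
    n, d = curriculum_stage(episode)
    obstacles = [(random.uniform(-arena_half_size, arena_half_size),
                  random.uniform(-arena_half_size, arena_half_size)) for _ in range(n)]
    start = (random.uniform(-arena_half_size, arena_half_size),
             random.uniform(-arena_half_size, arena_half_size))
    heading = random.uniform(0.0, 2.0 * math.pi)
    goal = (start[0] + d * math.cos(heading), start[1] + d * math.sin(heading))
    return obstacles, start, goal
```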

IV Experiments

In this section, we present the experiment setup and evaluation in both simulated and real environments. We quantitatively and comparatively evaluate the DQN-based navigation policy to show that it outperforms other related methods on multiple indicators. Moreover, we also perform qualitative tests on real robots and integrate our obstacle avoidance policy into the navigation framework for long-range navigation testing.

IV-A Reinforcement Learning Setup

The training experiments are conducted with a customized differential drive robot in a simulated virtual environment using Gazebo. A 180-degree laser scanner is mounted on the front of the robot, as shown in Fig. 2. The system parameters are determined empirically in terms of both performance and our computational resource limits, as listed in Table I.

Parameter Value
learning rate
discount factor
replay buffer size
minibatch size
image size
episode length
initial exploration
final exploration
TABLE I: System Parameters

Our algorithm is implemented in TensorFlow, and we train the deep Q-network on the objective defined by Eq. (7) with the Adam optimizer [6]. The training hardware is a computer with an i9-9900K CPU and a single NVIDIA GeForce 2080 Ti GPU. The entire training process (including exploration and training time) takes about 10 hours for the policy to converge to a robust performance.
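For completeness, a minimal TensorFlow training step over a prioritized minibatch might look like the sketch below, using the squared TD loss from Section III-B and the double-DQN targets of Eq. (7); the learning rate and the function signature are assumptions.

```python
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4)   # learning rate is an assumption

def train_step(online_net, maps, vecs, actions, targets, is_weights, num_actions=28):
    """One Adam step on the importance-weighted squared TD error; `targets` are the
    double-DQN targets y_t computed with the target network (Eq. 7)."""
    with tf.GradientTape() as tape:
        q = online_net([maps, vecs])                                     # [batch, num_actions]
        q_taken = tf.reduce_sum(q * tf.one_hot(actions, num_actions), axis=1)
        td_error = targets - q_taken
        loss = tf.reduce_mean(is_weights * tf.square(td_error))
    grads = tape.gradient(loss, online_net.trainable_variables)
    optimizer.apply_gradients(zip(grads, online_net.trainable_variables))
    return td_error            # returned so the replay priorities can be updated
```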

Fig. 4: Indicator curves of different methods during training.

IV-B Experiments on simulation scenarios

IV-B1 Performance metrics

To compare the performance of our approach with other methods over various test cases, we define the following performance metrics.

  • Expected return is the average sum of rewards per episode.

  • Success rate is the ratio of episodes in which the robot reaches the goal within a certain number of steps without any collision, over the total number of episodes.

  • Reach step is the average number of steps required to successfully reach the target point without any collisions.

  • Average angular velocity change is the average of the angular velocity changes for each step, which reflects the smoothness of the trajectory.

IV-B2 Comparative experiments

Method           | Expected return | Success rate | Reach step | Avg. angular velocity change
PPO with 1d conv | 467.87          | 0.85         | 40.19      | 0.46
Normal DQN       | 547.43          | 0.91         | 27.76      | 0.39
Curricular DQN   | 617.04          | 0.94         | 26.13      | 0.35
TABLE II: Indicator values of various methods

We compare our curricular DQN policy with a normal (non-curricular) DQN policy and with PPO using a one-dimensional convolution network [10] in our tests. As shown in Fig. 4, our DQN-based policy shows a significant improvement over the PPO policy in terms of expected return, success rate, reach step and average angular velocity change, and the curricular DQN policy also brings a slight improvement on multiple indicators. In the tests with more obstacles, the specific indicator values of the various methods are shown in Table II. Fig. 5 shows a test case of our curricular DQN policy in a test scenario.

Fig. 5: A test case of our curricular DQN policy in a test scenario: the green dot represents the starting point, the red dot represents the end point, and the robot's trajectory is marked with purple arrows.
Fig. 6: Reach rate versus laser noise. The DQN policy trained with sensor noise is compared to the policy trained without sensor noise and to the traditional VFH method.

IV-B3 Robustness to noise

Fig. 6 depicts how the performance of our DQN-based policy and of the traditional vector field histogram (VFH) method [19] varies with the noise level of the laser sensor data in an environment with many obstacles. The results show that our DQN-based policy is resilient to noise, while laser noise heavily influences the reach rate of VFH. This is expected, since VFH uses obstacle clearance to calculate its objective function, and such a greedy approach often guides the robot into local minima. More importantly, the learned policy (the DQN policy with noise in Fig. 6) works better when trained with the same noise variance as in the test environment.

IV-C Navigation in the real world

To further verify the generalization capability and effectiveness of our learned policy, we conducted experiments with our robot chassis in the real world. As shown in Fig. 7, our robot platform is a differential wheeled robot with a Hokuyo UTM-30LX scanning laser rangefinder and a laptop with an i7-8750H CPU and an NVIDIA 1060 GPU. The robot pose and velocity are provided by a particle filter based state estimator. An occupancy map is constructed from the laser measurements, from which an egocentric local map of fixed metric size and 0.1 m resolution is cropped at each cycle.

We used paper boxes to build test environments of varying difficulty. As shown in Fig. 7, when the robot confronts obstacles, the trained policy succeeds in providing a reactive action command that drives the robot away from them. In the long-range experiments, our robot navigates safely in corridors with obstacles and pedestrians. A video of the real and simulated navigation experiments can be found at https://youtu.be/Eq4AjsFH_cU.


Fig. 7: Our robot chassis with a laptop and a Hokuyo UTM-30LX laser scanner (left). Real test environments (right), including difficult obstacle environments (upper right) and a long-range corridor test (bottom right).

V Conclusions

In this paper, we propose a model-free deep reinforcement learning algorithm to improve the performance of autonomous decision making in complex environments, which directly maps egocentric local occupancy maps, together with the target position and movement velocity, to an agent's steering commands. Our approach is based on dueling double DQN with prioritized experience replay and integrates curriculum learning techniques to further enhance performance. Both qualitative and quantitative results show that the map-based motion planner outperforms other related DRL-based methods on multiple indicators in simulation environments and is easy to deploy on a robotic platform.

References

  • [1] Y. Bengio, J. Louradour, R. Collobert, and J. Weston (2009) Curriculum learning. In Proceedings of the 26th Annual International Conference on Machine Learning (ICML), pp. 41–48. Cited by: §III-C.
  • [2] G. Chen, G. Cui, Z. Jin, F. Wu, and X. Chen (2018) Accurate intrinsic and extrinsic calibration of RGB-D cameras with GP-based depth correction. IEEE Sensors Journal 19 (7), pp. 2685–2694. Cited by: §I.
  • [3] Y. F. Chen, M. Everett, M. Liu, and J. P. How (2017) Socially aware motion planning with deep reinforcement learning. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1343–1350. Cited by: §I.
  • [4] A. Giusti, J. Guzzi, D. C. Cireşan, F. He, J. P. Rodríguez, F. Fontana, M. Faessler, C. Forster, J. Schmidhuber, G. Di Caro, et al. (2015) A machine learning approach to visual perception of forest trails for mobile robots. IEEE Robotics and Automation Letters 1 (2), pp. 661–667. Cited by: §I.
  • [5] G. Kahn, A. Villaflor, B. Ding, P. Abbeel, and S. Levine (2018) Self-supervised deep reinforcement learning with generalized computation graphs for robot navigation. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pp. 1–8. Cited by: §I, §I.
  • [6] D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §IV-A.
  • [7] N. Koenig and A. Howard (2004) Design and use paradigms for gazebo, an open-source multi-robot simulator. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vol. 3, pp. 2149–2154. Cited by: §III-C.
  • [8] S. Levine, P. Pastor, A. Krizhevsky, J. Ibarz, and D. Quillen (2018) Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection. The International Journal of Robotics Research 37 (4-5), pp. 421–436. Cited by: §I.
  • [9] Y. Liu, A. Xu, and Z. Chen (2018) Map-based deep imitation learning for obstacle avoidance. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 8644–8649. Cited by: §I.
  • [10] P. Long, T. Fanl, X. Liao, W. Liu, H. Zhang, and J. Pan (2018) Towards optimally decentralized multi-robot collision avoidance via deep reinforcement learning. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pp. 6252–6259. Cited by: §I, §IV-B2.
  • [11] D. V. Lu, D. Hershberger, and W. D. Smart (2014) Layered costmaps for context-sensitive navigation. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 709–715. Cited by: §III-B1.
  • [12] M. Mohanan and A. Salgoankar (2018) A survey of robotic motion planning in dynamic environments. Robotics and Autonomous Systems 100, pp. 171–185. Cited by: §I.
  • [13] M. Pfeiffer, M. Schaeuble, J. Nieto, R. Siegwart, and C. Cadena (2017) From perception to decision: a data-driven approach to end-to-end motion planning for autonomous ground robots. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pp. 1527–1533. Cited by: §I.
  • [14] C. Rösmann, F. Hoffmann, and T. Bertram (2017) Integrated online trajectory planning and optimization in distinctive topologies. Robotics and Autonomous Systems 88, pp. 142–153. Cited by: §I.
  • [15] T. Schaul, J. Quan, I. Antonoglou, and D. Silver (2016) Prioritized experience replay. In Proceedings of the International Conference on Learning Representations (ICLR), Cited by: §III-B.
  • [16] D. Silver, T. Hubert, J. Schrittwieser, I. Antonoglou, M. Lai, A. Guez, M. Lanctot, L. Sifre, D. Kumaran, T. Graepel, et al. (2018) A general reinforcement learning algorithm that masters chess, shogi, and go through self-play. Science 362 (6419), pp. 1140–1144. Cited by: §I.
  • [17] L. Tai, S. Li, and M. Liu (2016) A deep-network solution towards model-less obstacle avoidance. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 2759–2764. Cited by: §I.
  • [18] L. Tai, G. Paolo, and M. Liu (2017) Virtual-to-real deep reinforcement learning: continuous control of mobile robots for mapless navigation. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 31–36. Cited by: §I.
  • [19] I. Ulrich and J. Borenstein (2000) VFH*: local obstacle avoidance with look-ahead verification. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Vol. 3, pp. 2505–2511. Cited by: §IV-B3.
  • [20] H. Van Hasselt, A. Guez, and D. Silver (2016) Deep reinforcement learning with double q-learning. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence (AAAI). Cited by: §III-B.
  • [21] O. Vinyals, I. Babuschkin, W. M. Czarnecki, M. Mathieu, A. Dudzik, J. Chung, D. H. Choi, R. Powell, T. Ewalds, P. Georgiev, et al. (2019) Grandmaster level in StarCraft II using multi-agent reinforcement learning. Nature 575 (7782), pp. 350–354. Cited by: §I.
  • [22] Z. Wang, T. Schaul, M. Hessel, H. Hasselt, M. Lanctot, and N. Freitas (2016) Dueling network architectures for deep reinforcement learning. In Proceedings of The 33rd International Conference on Machine Learning (ICML), pp. 1995–2003. Cited by: §III-B.
  • [23] K. Wu, M. Abolfazli Esfahani, S. Yuan, and H. Wang (2018) Learn to steer through deep reinforcement learning. Sensors 18 (11), pp. 3650. Cited by: §I.
  • [24] L. Xie, S. Wang, S. Rosa, A. Markham, and N. Trigoni (2018) Learning with training wheels: speeding up training with a simple controller for deep reinforcement learning. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pp. 6276–6283. Cited by: §I, §I.
  • [25] J. Zhang, J. T. Springenberg, J. Boedecker, and W. Burgard (2017) Deep reinforcement learning with successor features for navigation across similar environments. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 2371–2378. Cited by: §I.
  • [26] W. Zhang, S. Wei, Y. Teng, J. Zhang, X. Wang, and Z. Yan (2017) Dynamic obstacle avoidance for unmanned underwater vehicles based on an improved velocity obstacle method. Sensors 17 (12), pp. 2742. Cited by: §I.
  • [27] D. Zhou, Z. Wang, S. Bandyopadhyay, and M. Schwager (2017) Fast, on-line collision avoidance for dynamic vehicles using buffered voronoi cells. IEEE Robotics and Automation Letters 2 (2), pp. 1047–1054. Cited by: §I.