LBGP: Learning Based Goal Planning for Autonomous Following in Front

11/05/2020 ∙ by Payam Nikdel, et al. ∙ Simon Fraser University

This paper investigates a hybrid solution which combines deep reinforcement learning (RL) and classical trajectory planning for the following-in-front application. Here, an autonomous robot aims to stay ahead of a person as the person freely walks around. Following in front is a challenging problem, as the user's intended trajectory is unknown and needs to be estimated, explicitly or implicitly, by the robot. In addition, the robot needs to find a feasible way to safely navigate ahead of the human's trajectory. Our deep RL module implicitly estimates the human trajectory and produces short-term navigational goals to guide the robot. These goals are used by a trajectory planner to smoothly navigate the robot to the short-term goals, and eventually in front of the user. We employ curriculum learning in the deep RL module to efficiently achieve a high return. Our system outperforms the state of the art in following ahead and is more reliable than end-to-end alternatives in both simulation and real-world experiments. In contrast to a pure deep RL approach, we demonstrate zero-shot transfer of the trained policy from simulation to the real world.


I Introduction

In many applications of human-robot interaction, a robot needs to stay near a user. These include capturing video of physical activities or monitoring elderly people. Variants of following the user include following from behind, following in front, and following side by side [1, 2]. Although following from behind is well studied, it can be much more challenging to stay ahead of a user [3]. To follow a user from behind, one can use a proportional integral derivative (PID) controller to keep the person at the center of view and add a separate PID controller to maintain the person-robot distance [4]. In contrast, for following in front, the robot needs to explicitly or implicitly predict the future trajectory of the person and navigate to a point on that trajectory while maintaining a safe distance. Behavioral experiments suggest that in follow-behind scenarios, the user frequently looks back out of curiosity or to ensure the robot is within a safe distance [5]. Conversely, following in front can assist a person in many applications. Consider an autonomous shopping cart, a self-driving suitcase, or an autonomous guide dog; in all these applications, it is best if the robot is in front of the user. The user not only feels safer, but can also interact with the robot more conveniently.
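The follow-behind baseline described above can be sketched as two decoupled control loops (an illustrative sketch; the gains and the P-only defaults are our assumptions, not from the paper):

```python
class PID:
    """Minimal PID controller; gains here are illustrative only."""
    def __init__(self, kp, ki=0.0, kd=0.0, dt=0.1):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def step(self, error):
        self.integral += error * self.dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv


def follow_behind_command(bearing_rad, distance_m, target_distance_m=1.5,
                          heading_pid=None, range_pid=None):
    """One PID keeps the person centered (bearing -> angular velocity);
    a second maintains the person-robot distance (range error -> linear
    velocity). Returns (linear, angular) velocity commands."""
    heading_pid = heading_pid or PID(kp=1.0)
    range_pid = range_pid or PID(kp=0.8)
    angular = heading_pid.step(bearing_rad)                   # turn toward the person
    linear = range_pid.step(distance_m - target_distance_m)   # close the distance gap
    return linear, angular
```

With the person centered at the target distance the commands vanish; a person offset to the left and farther away yields positive turn and forward commands.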

In recent years, a growing body of research has shown the capability of deep reinforcement learning (RL) to solve hard game-playing problems [6, 7]. Applying it to robotics can help address navigational tasks while considering the user's intent [8, 9]. Deep RL can implicitly account for robot dynamics and enable continuous interaction between the robot and its environment. In the staying-in-front problem, deep RL can also implicitly predict a person's future trajectory, continuously updating the predictions to provide a smooth real-time experience for the user.

Fig. 1: A mobile robot following ahead of a user. The robot must predict the user's trajectory to stay in the correct position. At each step, the robot considers the previous states of the system to generate a goal (blue dot); a trajectory planner then navigates the robot toward the goal (green line).

In this paper, we propose the Learning Based Goal Planning (LBGP) approach to address the problem of staying in front of the user. LBGP is a hybrid approach that combines deep RL with a classical trajectory planner (see Figure 1). Our results show that combining deep RL with classical methods can greatly improve performance while maintaining safety. We also show the benefits of using curriculum learning to train the agent on increasingly challenging human motions. Compared to training without a curriculum, our method trains the policy more efficiently while achieving a higher return. To generalize our model to unseen and real-world inputs, we add Gaussian noise to our observations.

We demonstrate favorable results in simulation and real-world experiments compared to previous work. Our ablation studies show the benefits of our hybrid approach and curriculum. In particular, we show the effectiveness of our hybrid approach in zero-shot transfer of the policy trained in simulation to the real world. Examples of our system can be found in the supplementary video (https://youtu.be/XSOUdPFPMmA). In summary, our main contributions are as follows:

  • We combine a classical trajectory planner with deep RL to improve the safety and generalizability of our system.

  • We use curriculum learning to reduce training time while improving the final return.

  • By evaluating our system in simulation using a Clearpath Jackal and in the real world using a Turtlebot 2 robot, we show that our system can be more reliable and efficient for front-following than an end-to-end or a hand-crafted approach.

  • We demonstrate that the policy trained using our method can directly transfer to the real world without any re-training.

II Related Work

Person following has been studied for ground [3, 10, 11], aerial [12, 13] and even underwater environments [14, 15]. Following from behind is the dominant scenario in these studies. Classical methods divide the person-following problem into a number of sub-modules: user localization, path finding and trajectory tracking [4, 16]. Learning navigational tasks directly from sensor inputs with end-to-end approaches has gained popularity in recent years [10]. These techniques involve learning the task in simulation first and then possibly transferring the policy to the real world or generalizing it to unseen environments [17]. For a comprehensive review of autonomous person following, we refer the reader to [18].

II-A Following in Front

Few papers have studied the interesting problem of following in front. Moustris et al. [19] proposed a front-following system that incorporates a local dynamic planner along with a user intention recognition algorithm based on the user's offset from the middle of the robot's view. Ho et al. [1] assumed a nonholonomic human model and used a Kalman filter to estimate the human's linear and angular velocity. They designed the robot motion planner to stay ahead of the user, but the robot sometimes falls behind in specific situations such as a T-junction. In a recent work, Nikdel et al. [3] presented an Extended Kalman Filter (EKF) approach with a joint 2D LiDAR and fish-eye camera setup to detect and track the person. Their EKF assumes a linear motion model; the EKF-predicted position of the person is corrected using a human motion model that considers obstacles. To the best of our knowledge, this system is the state of the art for the staying-in-front task for Unmanned Ground Vehicles (UGVs). We compare our LBGP with this approach in an obstacle-free environment and show a notable improvement.

II-B Reinforcement Learning

To the best of our knowledge, the staying-in-front task has not been explored in an RL framework. However, several studies have used deep RL for related navigational tasks. Dewantara et al. proposed a guiding behaviour that optimizes the parameters of a social force model using Q-learning [20]. Recently, Chen et al. presented a relational-graph deep RL approach for robotic crowd navigation [8]. Using this relational graph, they encoded higher-order interactions between agents and used it to anticipate future states. Curriculum learning has also been used to increase the efficiency of RL training: Narvekar and Stone formulated curriculum sequencing as a Markov Decision Process [21] and showed how curriculum learning can reduce training time. Kulhánek et al. presented another RL-based navigation agent [9] that learns to navigate using only raw images; they pre-train the network by transferring learned policies from one environment to another while gradually increasing environment complexity. We deploy a similar approach by gradually increasing the difficulty of our person motion model. Bansal et al. proposed a navigational framework combining optimal control and learning [22]; their learning-based perception module produces a series of waypoints that guide a robot toward the goal. One fundamental difference between our LBGP system and waypoint-based navigation approaches such as [22, 23] is the need to predict the human trajectory, which makes the task more challenging than navigating to a known goal.

Fig. 2: Our relative coordinate system.

III Problem Setup

In this work, we study the problem of keeping an autonomous robot in front of a walking person. We assume an obstacle-free environment in which the robot should avoid collision with the human. We represent the global states of the human and the robot at time t with s_t^h = (p_t^h, θ_t^h, v_t^h, ω_t^h) and s_t^r = (p_t^r, θ_t^r, v_t^r, ω_t^r), respectively, where p_t is the 2D position, θ_t is the orientation, v_t is the linear velocity and ω_t is the angular velocity at time t.

To make our approach transferable to the real world and avoid over-fitting, we use a relative state of the robot with respect to the human, denoted s̃_t^r:

s̃_t^r = ( R(−θ_t^h)(p_t^r − p_t^h), θ_t^r − θ_t^h, v_t^r, ω_t^r )    (1)

where R(·) is the 2D rotation matrix. For the purpose of calculating rewards, we define θ_t^{pr} as the person-robot angle.

We also define a similar notation for the i-th previous state of the human relative to their current state at time t:

s̃_{t−i}^h = ( R(−θ_t^h)(p_{t−i}^h − p_t^h), θ_{t−i}^h − θ_t^h, v_{t−i}^h, ω_{t−i}^h )    (2)

As part of the observation, we consider a history of coordinates for both the robot and the human, all relative to the latest pose of the human. This relative coordinate system is visualized in Figure 2.
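The relative state used above amounts to a standard 2D rigid transform, which can be sketched as follows (an illustrative implementation; the variable names are ours, not the paper's):

```python
import math

def relative_state(xh, yh, thh, xr, yr, thr):
    """Express the robot pose (xr, yr, thr) in the person's frame
    (xh, yh, thh): rotate the position difference by -thh and take
    the wrapped heading difference."""
    dx, dy = xr - xh, yr - yh
    c, s = math.cos(-thh), math.sin(-thh)
    x_rel = c * dx - s * dy
    y_rel = s * dx + c * dy
    # wrap the heading difference into (-pi, pi]
    th_rel = math.atan2(math.sin(thr - thh), math.cos(thr - thh))
    return x_rel, y_rel, th_rel
```

For a person at the origin facing +x and a robot 1.5 m straight ahead, the relative state is (1.5, 0, 0); the same holds whatever global direction the person faces.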

IV Method

Our key insight in this paper is to combine an RL module with a classical trajectory planner. The agent uses our implementation of the Distributed Distributional Deterministic Policy Gradients (D4PG) [24] algorithm to generate a short-term navigational goal. A Timed Elastic Band (TEB) motion planner is then used to navigate toward this goal while treating the person as a dynamic obstacle. Our approach differs from typical policies trained with RL, which directly output an agent's actions.
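The division of labor in this hybrid pipeline can be summarized in a few lines (a sketch; `policy`, `planner`, and the state layout are hypothetical stand-ins for the D4PG network and the TEB planner):

```python
from collections import deque

def follow_ahead_step(policy, planner, obs_stack, new_rel_state):
    """One tick of the hybrid pipeline: record the newest relative state,
    let the RL policy propose a short-term goal in the person's frame,
    and let the planner convert that goal into velocity commands."""
    obs_stack.append(new_rel_state)
    goal = policy(list(obs_stack))    # learned: where to be next
    linear, angular = planner(goal)   # classical: how to get there safely
    return goal, (linear, angular)
```

Note the policy never emits motor commands directly; the planner owns the low-level dynamics, which is what makes the policy transferable across robots.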

IV-A Observations and Navigational Goals

The observation is a stack of robot and human relative states, with t being the current time step (see equations (1) and (2)). We stack states from up to the last 10 frames (at 5 FPS).

The values are continuous and scaled to a fixed range. In simulation, we capture all variables from the Gazebo simulator. In the real world, they can be obtained using a motion capture system or human detection algorithms (e.g., YOLOv2 [25]) with RGB-D inputs (an approach previously used in [3]). To improve the transferability of our approach to the real world, we add Gaussian noise to the observations in simulation.

The output of our policy network is a target position relative to the person. This position is a short-term navigational goal based on the estimated path of the user. We feed this output to the TEB local planner, which navigates the robot along a smooth trajectory.
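The stacking of relative states into a fixed-size policy input can be sketched as follows (the padding scheme and feature layout are our assumptions; the paper specifies only the 10-frame, 5 FPS history):

```python
from collections import deque

class ObservationStack:
    """Keep the last `horizon` relative states and flatten them into one
    fixed-size policy input vector. Illustrative only; the paper's exact
    feature layout is not reproduced here."""
    def __init__(self, horizon=10, state_dim=5):
        self.horizon, self.state_dim = horizon, state_dim
        self.buf = deque(maxlen=horizon)

    def push(self, rel_state):
        assert len(rel_state) == self.state_dim
        self.buf.append(tuple(rel_state))

    def observation(self):
        # Pad by repeating the oldest available state so the policy always
        # sees a fixed-size input, even at the start of an episode.
        states = list(self.buf)
        while len(states) < self.horizon:
            states.insert(0, states[0] if states else (0.0,) * self.state_dim)
        return [v for s in states for v in s]
```

With a 10-frame horizon and 5-dimensional states this yields a 50-dimensional vector regardless of how many frames have been seen so far.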

Fig. 3: Reward based on the robot’s relative position to the person. Increasing from black (-1) to white (+1).

IV-B Reward

We define the reward function such that the agent receives a higher reward (R) if it stays in front of the person at a desired distance of 1.5 m, and a negative reward if it is too far away, too close, or behind the person. The reward is scaled to [−1, +1]. Figure 3 shows the reward as a function of the robot's coordinates relative to the human, where d is the distance between the robot and the person, and θ is the angle between the person-robot vector and the person-heading vector (the person-robot angle, in short). We terminate the episode if the agent gets too close to or too far from the person.
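The shape of such a reward can be sketched as a function of the person-robot distance d and angle θ (an illustrative parameterization; only the 1.5 m desired distance and the [−1, +1] range come from the paper, the functional form is our assumption):

```python
import math

def follow_ahead_reward(d, theta, d_star=1.5):
    """Illustrative reward: peaks at +1 when the robot is directly ahead
    (theta = 0) at the desired distance d_star, decays with distance
    error, and turns negative behind the person (|theta| > pi/2).
    The paper's exact functional form is not reproduced here."""
    distance_term = math.exp(-abs(d - d_star))  # 1 at d_star, decays away
    angle_term = math.cos(theta)                # +1 ahead, -1 behind
    r = distance_term * angle_term
    return max(-1.0, min(1.0, r))               # clip to [-1, +1]
```

This reproduces the qualitative picture in Figure 3: white (+1) directly ahead at the desired distance, fading toward black (−1) behind the person.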

IV-C Policy Training Environment

Our LBGP system is implemented in ROS [26] and trained in the Gazebo robot simulator [27]. We use a Turtlebot 3 Burger robot as the person and a Clearpath Jackal as the robot. The person is controlled by our person motion model. We design a world in the Gazebo simulator with four replicas of an environment, each containing one learning agent. Three of the agents explore the environment while the last one exploits the current policy; this setup mitigates the exploration-exploitation trade-off. The simultaneously collected experiences are added to a shared replay buffer used to update the model weights.
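The explore/exploit split over the four environment replicas, all feeding one shared replay buffer, can be sketched as follows (the agent interface and buffer capacity are hypothetical):

```python
import random

class ReplayBuffer:
    """Bounded FIFO buffer of transitions with uniform sampling."""
    def __init__(self, capacity=100_000):
        self.capacity, self.data = capacity, []

    def add(self, transition):
        if len(self.data) >= self.capacity:
            self.data.pop(0)
        self.data.append(transition)

    def sample(self, batch_size):
        return random.sample(self.data, min(batch_size, len(self.data)))


def collect_step(agents, buffer):
    """All replicas step once: every agent but the last acts with
    exploration noise; the last replica exploits the current policy.
    Every transition goes into the shared buffer."""
    for i, agent in enumerate(agents):
        explore = i < len(agents) - 1  # last replica exploits
        buffer.add(agent.step(explore=explore))
```

The learner then samples mini-batches from the shared buffer to update the D4PG networks.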

IV-D Curriculum Learning

To improve learning efficiency, we employ curriculum learning to train the agent on a series of tasks with increasing difficulty. These tasks are defined by the human trajectory: we start with a straight line and move to more difficult trajectories as training progresses. Our curriculum has four difficulty levels: straight, circles, smoothed curves and simulated human trajectories (see Figure 4). At each difficulty level, the robot is randomly spawned between 1 and 2.5 meters away from the person with a uniformly random orientation. Each level is detailed below.

IV-D1 Straight

The person moves with an initial random linear velocity throughout the episode.

IV-D2 Circles

The person moves in a circle with a different radius each time. We create the circular motion by selecting a random initial linear velocity and a random angular velocity.

IV-D3 Smoothed curves

The person moves along random curves generated by time-varying linear and angular velocity profiles whose parameters are drawn uniformly at random.

IV-D4 Simulated trajectories

We first arbitrarily “draw” trajectories by driving a robot with a joystick in Gazebo to cover the space while recording the robot's coordinates. The total length of all trajectories is roughly 50 meters.

During training, the person starts at a random point and tracks the above trajectories using a proportional integral derivative (PID) controller. To add variety to the data, we use 10 different environments and also add the reverse of each trajectory to the trajectory library.
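The first three curriculum levels can be sketched as a person-motion command sampler (the velocity ranges are illustrative assumptions; the recorded “simulated trajectories” level is omitted since it replays joystick data):

```python
import random

CURRICULUM = ["straight", "circles", "smoothed_curves", "simulated"]

def sample_episode_commands(level, steps=50, rng=None):
    """Return a list of (linear, angular) velocity commands for the
    simulated person at one curriculum level: constant velocity
    (straight), constant v and w (circles), or a slowly drifting w
    (smoothed curves)."""
    rng = rng or random.Random(0)
    if level == "straight":
        v = rng.uniform(0.2, 0.6)
        return [(v, 0.0)] * steps
    if level == "circles":
        v, w = rng.uniform(0.2, 0.6), rng.uniform(-0.4, 0.4)
        return [(v, w)] * steps
    if level == "smoothed_curves":
        v, w = rng.uniform(0.2, 0.6), 0.0
        cmds = []
        for _ in range(steps):
            w += rng.uniform(-0.05, 0.05)  # smooth random drift in turning
            cmds.append((v, w))
        return cmds
    raise ValueError(f"unknown or unsupported level: {level}")
```

A training loop would advance through `CURRICULUM` as the agent's return on the current level plateaus.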

Fig. 4: Visualization of the person motion model. From left to right: moving straight, in circles of different radii, in smoothed curves, and along annotated simulated human paths.

V Simulation Experiment

In this section, we present our experiments in simulation. We compare LBGP (our system) with two baselines: the latest hand-crafted following-ahead (HC) method in [3] and an end-to-end learning following-ahead (E2E) approach. The HC system uses an EKF to predict the position of the user and then navigates to a point ahead of the predicted position using a trajectory planner. For the sake of consistency, we use the same TEB motion planner as in HC. For the E2E approach, we use the same D4PG implementation with curriculum learning, but instead of a navigational goal, the policy directly outputs the robot's linear and angular velocities.

We conduct three experiments with different human trajectories. In each experiment, we report the mean person-robot angle, the mean person-robot distance, and the accumulated episode reward. The results of all three experiments are included in Table I. In all experiments, the robot has no prior knowledge of the planned trajectory of the human.
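The reported per-episode metrics can be computed from logged person-frame robot offsets and per-step rewards (a sketch with our own interfaces; the discount factor is an assumption):

```python
import math

def episode_metrics(person_robot_offsets, rewards, gamma=0.99):
    """Compute (mean distance, mean angle, discounted accumulated reward)
    from logged (dx, dy) robot offsets in the person's frame, where an
    angle of 0 means directly ahead of the person."""
    dists = [math.hypot(dx, dy) for dx, dy in person_robot_offsets]
    angles = [abs(math.atan2(dy, dx)) for dx, dy in person_robot_offsets]
    ret = sum(r * gamma ** t for t, r in enumerate(rewards))
    return (sum(dists) / len(dists), sum(angles) / len(angles), ret)
```

A robot oscillating between directly ahead and directly beside the person at 1.5 m, for example, averages 1.5 m distance and a π/4 angle.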

V-A Straight

TABLE I: Comparison of our system versus the two baselines for all simulation trajectories. For each human trajectory (Straight ahead, Straight behind, Turning ahead, Turning behind, Turning inside, Turning outside, and Trajectories one through three), the table reports the mean and standard deviation of the person-robot distance and orientation, and the accumulated reward, for LBGP, HC, and E2E.

The first experiment is conducted on a straight human motion trajectory to compare the behaviour of the three methods. The human simply starts moving forward with a constant linear velocity of 0.6 m/s. We spawn the robot relative to the person in two initial settings: Ahead and Behind.

We compare our results with HC and E2E (Table I). For the Ahead setting, HC achieves the highest episode reward; its EKF relies on the linearity of human motion and can therefore optimally follow the straight line, whereas in LBGP training we apply Gaussian noise, so the robot may slightly deviate to the sides. In the Behind setting, our approach achieves the highest reward, as it has learned to keep a safe distance from the human by setting navigational goals for TEB further ahead than HC does.

V-B Turning

We assess the different approaches on turning trajectories. In this case, the person moves with a linear velocity of 0.3 m/s and an angular velocity of 0.3 rad/s. To cover a large variety of initial conditions, we evaluate four positions of the robot relative to the person: Ahead, Behind, Ahead-inside-the-turn, and Ahead-outside-the-turn. The results of this experiment show that our LBGP achieves the highest reward in all settings (Table I).

V-C Simulated trajectories

We designed three simulated trajectories to further evaluate our system. Similar to the training phase, we employ PID controllers for the simulated human to follow entirely unseen trajectories, and the learning agent attempts to stay in front of the simulated human. Figure 5 shows the robot's trajectories corresponding to the three human trajectories. In this experiment, we see a more noticeable difference in performance between LBGP and both baselines (Table I). Compared to E2E, our method likely performs better because the trajectory planner abstracts navigation to predicting a goal, while the E2E method must implicitly learn the dynamics of the system. Learning accurate dynamics is challenging and may expose E2E to over-fitting. LBGP also outperforms the HC system, since LBGP predicts a goal based on a history of the human trajectory, as opposed to the linear human motion model used in HC.

Fig. 5: Robot (arrows) and human (triangle) trajectories during the simulated-trajectory experiment (Trajectories one, two, and three) for our system (LBGP) and the two baselines, HC and E2E.

V-D Ablation Study

We perform an ablation study to evaluate the effectiveness of the different modules and training procedures of our LBGP approach. We compare the performance of our approach to two variants of it: 1) without curriculum learning (LBGP-no-curriculum), and 2) without a trajectory planner (E2E, as described in Section V). As shown in Figure 6, both LBGP-no-curriculum and E2E have slower learning curves and reach a lower discounted cumulative reward than LBGP, our proposed method.

Fig. 6: Discounted cumulative reward during training for LBGP, LBGP-no-curriculum, and E2E. The shaded area represents half a standard deviation.

VI Real World Experiments

We test LBGP on a TurtleBot 2 hardware testbed and evaluate the transferability of the policy trained in simulation to the real world. We also compare our method's sim2real ability with the two baselines, HC and E2E, defined in Section V. We performed three experiments, each with four different initial relative states. In short, our approach demonstrates successful zero-shot sim2real transfer of the policy.

To keep the experiments consistent between approaches, all initial states of the human and robot, along with the human's trajectory, are marked on the ground with colored tape. In each experiment, we report the total discounted cumulative reward, the mean person-robot angle, and the mean person-robot distance as measures of follow-ahead quality. To make the accumulated reward a fair evaluation, we keep a constant number of time steps for each setting. We use a motion capture system to record the robot's and person's states. For all experiments, we use the policy trained in simulation with no changes. In each setting, we terminated the experiment as soon as the robot hit the person or got more than three meters away from the person. As in simulation, in every real-world experiment the robot has no knowledge of the planned trajectory of the human.

VI-A Straight Trajectory

In this experiment, the initial positions of the robot relative to the human are Ahead, Ahead-right, Ahead-left, and Behind. In each setting, the person navigates with a constant forward speed toward a goal located 7 meters from their initial position. The four settings, along with the results of the Straight experiment, are reported in Table II. Here, the EKF model of HC can correctly predict the human trajectory, yet it achieves the highest reward only in the Ahead setting. In all the other settings, our LBGP method achieves the highest performance, likely because the LBGP policy is trained to keep a safe distance from the human. E2E failed to accomplish the following-ahead task due to collision with the person (Ahead and Ahead-right settings) or drifting away in the reverse direction (Behind setting). Likely, E2E learned to navigate only under the specific simulated robot dynamics and is unable to generalize to new robot dynamics in the real world.

TABLE II: Comparison of our system versus the two baselines for the straight trajectory. For each initial setting (Ahead, Ahead-right, Ahead-left, Behind), the table reports the mean and standard deviation of the person-robot distance and orientation, and the reward, for LBGP, HC, and E2E; E2E failed in three of the four settings.

VI-B S-shaped Trajectory

In the second experiment, we evaluate our system on an S-shaped trajectory. The initial relative positions of the robot are identical to those in the Straight experiment. The user deliberately follows an S-shaped path in all settings. As shown in Table III, LBGP achieves the highest reward in all four settings. When the person travels along an S-shaped trajectory, it is important to consider a history of the person's motion to predict their future trajectory; a simple EKF, as in HC, cannot correctly capture the complexity of this motion. Figure 7 visualizes examples of the robot and human trajectories for the Ahead-right and Behind settings. As in the Straight experiment, E2E failed in three of the four settings by colliding with the user.

TABLE III: Comparison of our system versus the two baselines for the S-shaped trajectory. For each initial setting (Ahead, Ahead-right, Ahead-left, Behind), the table reports the mean and standard deviation of the person-robot distance and orientation, and the reward, for LBGP, HC, and E2E; E2E failed in three of the four settings.

VI-C U-turn Trajectory

Lastly, we evaluate LBGP when the person performs a U-turn. The initial positions of the robot relative to the human are Ahead, Ahead-left, Ahead-far-left, and Behind. Table IV shows the four settings along with the results of the U-turn experiment; LBGP consistently accumulates the highest reward. For a challenging U-turn trajectory, it is important for the robot to “notice” this walking pattern and react promptly. The HC method cannot do this, as it anticipates the future based only on the heading of the person. Examples of robot and human trajectories for the Ahead-left and Behind settings are visualized in Figure 7. For instance, in the Ahead-left setting, LBGP predicts the turn early and avoids getting far away from the person. E2E, on the other hand, has trouble transferring its policy to the real world.

TABLE IV: Comparison of our system versus the two baselines for the U-turn trajectory. For each initial setting (Ahead, Ahead-left, Ahead-far-left, Behind), the table reports the mean and standard deviation of the person-robot distance and orientation, and the reward, for LBGP, HC, and E2E; E2E failed in two of the four settings.
Fig. 7: Real-world examples: the robot (arrows) and user (triangles) trajectories are depicted for LBGP and HC. Rows 1 and 2: S-shape experiment in the Ahead-right and Behind settings. Rows 3 and 4: U-turn experiment in the Ahead-left and Behind settings.

VII Discussion

VII-A Comparison to the Hand-Crafted Method

Our results show that our proposed learning-based following-ahead system outperforms the HC method in both simulation and the real world. LBGP is able to build a richer model of the environment, with a better abstraction of the human motion model, as opposed to the linear EKF in HC. Another advantage of RL is the large amount of training data that can be obtained in a simulated environment, which allows LBGP to better (implicitly) predict human trajectories compared to a hand-crafted method.

VII-B Comparison to the End-to-End Method

Although E2E achieves comparable performance in simulation, it is unreliable in the real world. Using E2E, the robot collided with the user in all three real-world experiments, and we also observed the robot oscillating noticeably. Upon investigation, we found that the dynamics of the real-world robot differ from the simulated one, which prevents E2E from extending its learned behaviour to the real world. In contrast, LBGP overcomes this model mismatch by abstracting away the dynamics using the TEB trajectory planner. The planner also helps our system avoid collisions with the person while staying at a safe distance.

VIII Conclusion and Future Work

We propose LBGP, a follow-ahead method that uses both reinforcement learning and point-based navigation. We address the limitations of classical methods and end-to-end approaches by combining deep RL with a classical motion planner. Our implementation outperforms previous work in an obstacle-free environment [3]. To train our deep RL model, we used curriculum learning that gradually increases the difficulty of the person motion model, yielding a robust front-following policy. Our results show that using a planner improves the generalizability and safety of the trained policy compared to an end-to-end method, and allows us to successfully perform zero-shot sim2real transfer.

In future work, we aim to improve the system by adding obstacles to the environment. We can also use other sources of information, or active user interaction, to improve LBGP, for instance by anticipating the user's heading from their gaze direction.

References

  • [1] D. M. Ho, J. S. Hu, and J. J. Wang, “Behavior control of the mobile robot for accompanying in front of a human,” in Advanced Intelligent Mechatronics (AIM), 2012 IEEE/ASME Int. Conf.   IEEE, July 2012, pp. 377–382.
  • [2] J. Hu, J.-J. Wang, and D. M. Ho, “Design of sensing system and anticipative behavior for human following of mobile robots,” IEEE Transactions on Industrial Electronics, vol. 61, pp. 1916–1927, 2014.
  • [3] P. Nikdel, R. Shrestha, and R. Vaughan, “The hands-free push-cart: Autonomous following in front by predicting user trajectory around obstacles,” 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 1–7, 2018.
  • [4] A. Leigh, J. Pineau, N. Olmedo, and H. Zhang, “Person tracking and following with 2D laser scanners,” in Robotics and Automation (ICRA), 2015 IEEE Int. Conf.   IEEE, 2015, pp. 726–733.
  • [5] E. J. Jung, B. J. Yi, and S. Yuta, “Control algorithms for a mobile robot tracking a human in front,” in Intelligent Robots and Systems, 2012 IEEE/RSJ Int. Conf.   IEEE, Oct 2012, pp. 2411–2416.
  • [6] D. Silver, T. Hubert, J. Schrittwieser, I. Antonoglou, M. Lai, A. Guez, M. Lanctot, L. Sifre, D. Kumaran, T. Graepel et al., “A general reinforcement learning algorithm that masters chess, shogi, and go through self-play,” Science, vol. 362, no. 6419, pp. 1140–1144, 2018.
  • [7] C. Berner, G. Brockman, B. Chan, V. Cheung, P. Dębiak, C. Dennison, D. Farhi, Q. Fischer, S. Hashme, C. Hesse et al., “Dota 2 with large scale deep reinforcement learning,” arXiv preprint arXiv:1912.06680, 2019.
  • [8] C. Chen, S. Hu, P. Nikdel, G. Mori, and M. Savva, “Relational graph learning for crowd navigation,” arXiv preprint arXiv:1909.13165, 2019.
  • [9] J. Kulhánek, E. Derner, T. de Bruin, and R. Babuška, “Vision-based navigation using deep reinforcement learning,” in 2019 European Conference on Mobile Robots (ECMR).   IEEE, 2019, pp. 1–8.
  • [10] J. M. Pierre, “End-to-end deep learning for robotic following,” in ICMSCE 2018, 2018.
  • [11] X. Wang, L. Zhang, D. Wang, and X. Hu, “Person detection, tracking and following using stereo camera,” in International Conference on Graphic and Image Processing, 2018.
  • [12] S. Huh, D. Shim, and J. Kim, “Integrated navigation system using camera and gimbaled laser scanner for indoor and outdoor autonomous flight of uavs,” 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3158–3163, 2013.
  • [13] J. J. Lugo and A. Zell, “Framework for autonomous on-board navigation with the ar.drone,” Journal of Intelligent & Robotic Systems, vol. 73, pp. 401–412, 2013.
  • [14] M. Islam, M. Fulton, and J. Sattar, “Toward a generic diver-following algorithm: Balancing robustness and efficiency in deep visual detection,” IEEE Robotics and Automation Letters, vol. 4, pp. 113–120, 2019.
  • [15] S. M. Zadeh, A. Yazdani, K. Sammut, and D. Powers, “Online path planning for auv rendezvous in dynamic cluttered undersea environment using evolutionary algorithms,” Appl. Soft Comput., vol. 70, pp. 929–945, 2018.
  • [16] M. Wang, D. Su, L. Shi, Y. Liu, and J. V. Miró, “Real-time 3d human tracking for mobile robots with multisensors,” 2017 IEEE International Conference on Robotics and Automation (ICRA), pp. 5081–5087, 2017.
  • [17] A. Goldhoorn, A. Garrell, R. Alquézar, and A. Sanfeliu, “Continuous real time pomcp to find-and-follow people by a humanoid service robot,” 2014 IEEE-RAS International Conference on Humanoid Robots, pp. 741–747, 2014.
  • [18] M. J. Islam, J. Hong, and J. Sattar, “Person-following by autonomous robots: A categorical overview,” The International Journal of Robotics Research, vol. 38, pp. 1581 – 1618, 2019.
  • [19] G. Moustris and C. Tzafestas, “Intention-based front-following control for an intelligent robotic rollator in indoor environments,” 2016 IEEE Symposium Series on Computational Intelligence (SSCI), pp. 1–7, 2016.
  • [20] B. S. B. Dewantara and J. Miura, “Generation of a socially aware behavior of a guide robot using reinforcement learning,” in 2016 International Electronics Symposium (IES), 2016, pp. 105–110.
  • [21] S. Narvekar and P. Stone, “Learning curriculum policies for reinforcement learning,” in Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems.   International Foundation for Autonomous Agents and Multiagent Systems, 2019, pp. 25–33.
  • [22] S. Bansal, V. Tolani, S. Gupta, J. Malik, and C. Tomlin, “Combining optimal control and learning for visual navigation in novel environments,” in Conference on Robot Learning.   PMLR, 2020, pp. 420–429.
  • [23] A. Li, S. Bansal, G. Giovanis, V. Tolani, C. Tomlin, and M. Chen, “Generating Robust Supervision for Learning-Based Visual Navigation Using Hamilton-Jacobi Reachability,” in Conference on Learning for Dynamics and Control, 2019.
  • [24] G. Barth-Maron, M. W. Hoffman, D. Budden, W. Dabney, D. Horgan, A. Muldal, N. Heess, and T. Lillicrap, “Distributed distributional deterministic policy gradients,” arXiv preprint arXiv:1804.08617, 2018.
  • [25] J. Redmon and A. Farhadi, “YOLO9000: Better, Faster, Stronger,” arXiv preprint arXiv:1612.08242, 2016.
  • [26] M. Quigley, K. Conley, B. Gerkey, J. Faust, T. Foote, J. Leibs, R. Wheeler, and A. Y. Ng, “ROS: an open-source Robot Operating System,” in ICRA workshop on open source software, vol. 3, no. 3.2. Kobe, 2009, p. 5.
  • [27] N. Koenig and A. Howard, “Design and use paradigms for gazebo, an open-source multi-robot simulator,” in 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE Cat. No.04CH37566), vol. 3, 2004, pp. 2149–2154 vol.3.