With the rise of social robots in our society, modelling interaction with people in real-world scenarios is fundamental. Environments where robots are capable of navigating, such as hospitals, hotels, restaurants, train stations or airports, have been the subject of research studies. In (Sasaki and Nitta, 2017), a robot called Peacock was deployed for 120 hours in a museum. The sessions were carried out twice per week over a period of ten months, letting visitors move freely around the robot during autonomous navigation. The authors conclude that the approach performed well in uncongested scenarios, with limitations in crowded situations or narrow aisles. A similar example is the robots deployed in restaurants serving food in countries such as China or Japan, which typically use line followers for trajectory generation (Mishraa et al., 2018). Recently, a ROS-based robotic waiter was presented in (Cheong et al., 2016). The preliminary tests showed that the platform was able to dock at a target table and roll out the serving tray successfully.
Research studies in Human-Robot Interaction (HRI) have shown that robot behaviour has a deep impact on perceived intelligence, especially in the case of service robots navigating in public spaces (Althaus et al., 2004). In this regard, several methods have been proposed for navigation in dynamic and uncertain environments. In summary, they can be classified into two groups: model-based and learning-based methods.
The major exponents of model-based methods rely on social psychology and cognitive sciences to generate human-like paths for robot navigation. One of the most relevant approaches is the Social Force Model (SFM) (Helbing and Molnar, 1995), proposed to model pedestrian behaviour, in which each pedestrian's motion is influenced by other pedestrians by means of repulsive forces. Many studies have implemented this method and also produced several variations (Zanlungo et al., 2011), later applied to real-world navigation where robots can avoid or go along with people (Ferrer et al., 2013) (Ratsamee et al., 2013). However, these works show limitations, such as the need for parameter calibration on different robots or the need for additional sensors for pedestrian tracking. Also, model-based methods rely on geometric relations, and it is still unclear whether pedestrians always follow such models.
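The repulsive term of the SFM can be sketched as follows. This is a minimal illustration of the circular specification from (Helbing and Molnar, 1995); the strength and range parameters `A` and `B` are illustrative values, not calibrated ones:

```python
import numpy as np

def sfm_repulsive_force(p_i, p_j, A=2.0, B=0.3):
    """Repulsive force exerted on pedestrian i by pedestrian j.

    Simplified circular specification of the Social Force Model: the
    force magnitude decays exponentially with inter-pedestrian distance
    and the force points from j towards i (pushing i away from j).
    A (strength, N) and B (range, m) are illustrative, uncalibrated values.
    """
    diff = np.asarray(p_i, dtype=float) - np.asarray(p_j, dtype=float)
    d = np.linalg.norm(diff)
    n_ij = diff / d                   # unit vector pointing from j to i
    return A * np.exp(-d / B) * n_ij  # magnitude decays with distance
```

The need for calibration mentioned above corresponds to fitting `A` and `B` (and, in the full model, velocity-dependent terms) to the specific robot and crowd.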
In contrast, learning-based methods use policies to define human-like behaviours, which are usually learnt from human demonstrations by matching feature statistics about pedestrians. These methods apply machine learning techniques such as Inverse Reinforcement Learning (IRL) to model the factors that motivate people's actions instead of the actions themselves. An experimental comparison of features and learning algorithms based on IRL is presented in (Vasquez et al., 2014), where the authors conclude that it is more effective to invest effort in designing features rather than in learning algorithms. More recent approaches include the use of deep reinforcement learning in order to set restrictions (Chen et al., 2017) (e.g. passing on the left side of people) instead of learning the features that describe human paths.
Both groups of methods described above focus on the motion of the robot, but there are situations where more complex social behaviors are needed. As pointed out by (Trinh et al., 2015), polite navigation is an important requirement for social acceptance. Accordingly, they present an approach for managing deadlock situations at narrow passages, where the robot lets the conflicting person pass and waits in a non-disturbing waiting position. A more recent work (Vega et al., 2018) proposes a navigation planner demonstrated in two scenarios: in the first, the robot asks for collaboration to enter a room; in the second, the robot asks for permission to navigate between two people who are talking. However, the results are presented only in simulation.
In the present work, we focus on a functional implementation and deployment of a scenario similar to the one proposed in (Vega et al., 2018). These kinds of situations challenge existing planners, typically resulting in the robot freezing while the path is blocked, or re-generating an alternative path if one is possible. Instead, the most natural human behaviour is to initiate a dialog and ask for permission to pass. In this regard, we propose the use of a high-level situation assessment composed of a navigation module and a people perception module. This framework stops the navigation when a deadlock is generated by the situation previously described and triggers an episodic interaction before re-initiating the navigation. As a result, this work is the first of its kind to successfully incorporate human interaction as part of the navigation flow and deploy it on a robotic platform.
The proposed methodology makes use of existing open-source software to provide navigation and people perception capabilities. Because of this, it can be deployed on any robotic system able to run ROS and OpenCV version 3.3+. In this work, we have used the humanoid robot Pepper as the deployment platform. Nevertheless, due to the limited computational power of Pepper, an external CPU (Intel Core i7-3770 CPU 3.40GHz x 8) with Ubuntu 16.04 LTS has been used to run the navigation and people perception modules.
2.1. Robot Platform
Pepper (version < 1.8) is an omnidirectional wheeled humanoid robot 1.21 m tall. With a 17-joint, 20 degrees-of-freedom (DoF) kinematic configuration and an edgeless design, it is suitable for social and safe Human-Robot Interaction (Pandey and Gelin, 2018). The platform is equipped with a large variety of sensors and actuators that ensure safe navigation; on its base, it incorporates 3 lasers pointing forward and to the sides, with a range of 60°. It also includes 2 sonars pointing forward and backward with a vertical and horizontal range of 60°, and 3 bumpers for object collision detection. Finally, the robot is equipped with an Atom E3845 quad-core processor, 4 microphones, two 2D cameras and a depth camera in the head.
2.2. Navigation

The navigation module is essentially composed of the ROS navigation stack, including the packages Adaptive Monte Carlo Localization (amcl) and Dynamic Window Approach (dwa_local_planner). Due to the low detection range of Pepper's lasers, an approach previously presented by (Perera et al., 2017) and (Suddrey et al., 2018) has been used to convert the depth image into virtual laser data, using the ROS package depthimage_to_laserscan. The map is then generated using gmapping (laser-based SLAM) and post-processed to improve its reliability. Similarly, a sketched map of the scenario was also used to improve performance. The navigation uses a global planner with inflated obstacles and a local costmap with observations from the virtual laser data. In this way, Pepper stays away from possible collisions, benefiting the motion of the robot. Additionally, some necessary components to orchestrate the navigation (available at https://bitbucket.org/account/user/pepper_qut/projects/PN) have been incorporated (Suddrey et al., 2018) and are presented in the next section.
The planning task is carried out by two main components: the global planner, which is in charge of providing a path from the initial location to the target goal, and the local planner, responsible for obstacle avoidance at close range while keeping an optimal distance to the global path. In case of failure, two recovery behaviors are implemented, allowing the costmap to be cleared and the robot to rotate in place to find a new global path. However, we introduce a preliminary step in which we expose the specific time at which the local planner could not find a valid plan, in order to trigger the higher-level situation assessment that identifies deadlock situations generated by humans.
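The failure-exposure step can be sketched as follows. This is a hypothetical, stack-agnostic monitor, not the actual move_base interface: `report(success)` stands for a callback wired into the local planner's control loop, and the `patience` window is an assumed value:

```python
import time

class LocalPlannerMonitor:
    """Tracks local-planner outcomes and exposes when planning last failed.

    Hypothetical sketch: report(success) would be called from the local
    planner's control loop; a deadlock candidate is flagged when planning
    has failed continuously for longer than `patience` seconds.
    """

    def __init__(self, patience=2.0, clock=time.monotonic):
        self.patience = patience
        self.clock = clock
        self.first_failure = None  # start of the current failure streak

    def report(self, success):
        if success:
            self.first_failure = None
        elif self.first_failure is None:
            self.first_failure = self.clock()

    def deadlock_candidate(self):
        return (self.first_failure is not None
                and self.clock() - self.first_failure >= self.patience)
```

The `clock` parameter is injectable so the policy can be exercised without a running robot.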
2.3. People Perception
Current state-of-the-art algorithms based on deep learning, such as YOLO (Redmon et al., 2016), offer fast object detection at a high computational cost. However, given our computational restrictions, an accurate approach suitable for CPU processing was required. For this reason, the OpenCV dnn module running a MobileNet Single Shot Detector (SSD) (Howard et al., 2017) trained in the Caffe framework was chosen. This implementation takes an RGB image as input and is able to detect up to 20 different classes, humans among them, despite occlusions and from different points of view. Once a person is detected, and with the aim of decreasing the computational cost, a lightweight correlation tracker implemented in the Dlib library (Danelljan et al., 2014) is applied. The module publishes a message every time a bounding box is wider than an empirically predefined threshold (80 pixels) in order to filter out targets located far from the field of interaction (see Figure 2).
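The distance filter on detections can be sketched as a simple predicate over bounding-box width; the `(x_min, y_min, x_max, y_max)` box format is an assumption, and in the actual module this would run on the person boxes produced by the OpenCV dnn forward pass:

```python
def is_interaction_candidate(bbox, min_width=80):
    """Keep only detections wide enough to be within interaction range.

    bbox is (x_min, y_min, x_max, y_max) in pixels; the 80-pixel width
    threshold is the empirical value used to discard people standing
    far from the field of interaction.
    """
    x_min, _, x_max, _ = bbox
    return (x_max - x_min) >= min_width
```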
3. Situation Assessment Framework
The proposed high-level situation assessment framework makes use of the Navigation and People Perception modules and orchestrates the transitions between them. As pointed out earlier, detecting a failure in the local planner is key to triggering an episodic interaction with the blocking human, but this event can also take place for several other reasons (e.g. a failure due to loss of the expected rate in sensor data acquisition). In our case, we are not only interested in knowing whether there is a failure due to an obstacle in front of the robot, but also whether the robot has detected a person as its source. At the same time, these changes are used in the global plan in order to know whether a newly generated trajectory is better than the original one.
The navigation cycle is initiated by the NAOqi application Navigation App, which loads the available destinations from a JSON file into the Location server. The same application is in charge of interfacing with ROS through the actionlib client package in order to generate the robot motion.
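The destination loading step can be sketched as follows; the JSON schema (a mapping from a location name to a pose `[x, y, theta]` in the map frame) and the function name are assumptions for illustration, not the Navigation App's actual format:

```python
import json

def load_destinations(path):
    """Load named destinations for the Location server.

    Hypothetical schema: {"name": [x, y, theta], ...}, with poses
    expressed in the map frame. Returns a dict of name -> pose tuple.
    """
    with open(path) as f:
        return {name: tuple(pose) for name, pose in json.load(f).items()}
```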
As in any complex system, the inputs from the sensors and the outputs from the Navigation and People Perception modules are obtained asynchronously, but they are evaluated sequentially in order to stop or start the navigation and the user interaction. When the local plan has failed, the data retrieved from the virtual laser is grouped into clusters using MeanShift from the scikit-learn library, with a quantile value of 0.20 to estimate the bandwidth. If the mean of any of the clusters is closer than a specified distance (< 3 meters, due to the depth sensor's limited range), the People Perception module engages during t seconds in order to detect the human. In addition, if there is an existing plan generated by the global planner, a comparison with the initial trajectory is performed using the difference in path lengths.
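The clustering check can be sketched as follows, using the scikit-learn MeanShift and bandwidth estimation named above; the function name and the assumption that laser returns are expressed as 2D points in the robot frame are ours:

```python
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

def blocking_cluster_ahead(points, max_dist=3.0, quantile=0.20):
    """Cluster virtual-laser points and check for a nearby obstacle.

    points: (N, 2) array of laser returns in the robot frame (meters).
    Returns True if the mean of any MeanShift cluster lies closer than
    max_dist from the robot, making it a candidate source of the
    local-planner deadlock (within the depth sensor's range).
    """
    X = np.asarray(points, dtype=float)
    bandwidth = estimate_bandwidth(X, quantile=quantile)
    labels = MeanShift(bandwidth=bandwidth).fit_predict(X)
    for label in np.unique(labels):
        centroid = X[labels == label].mean(axis=0)
        if np.linalg.norm(centroid) < max_dist:
            return True
    return False
```

A True result is what gates the t-second engagement of the People Perception module.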
In consequence, if the newly generated path is shorter than, or longer by no more than a threshold (the level of consideration) than, the original one, the robot can still take it, avoiding the interaction. This measurement becomes a very powerful tool for defining constraints that respect specific social contexts and groups. For instance, if a physically impaired person is detected, or the robot is deployed in a cluttered environment, this would allow modulating the robot's behaviour or eventually excluding the interaction.
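The decision rule can be sketched as a one-line comparison; the default value of the level of consideration is an assumption for illustration:

```python
def take_new_path(original_length, new_length, consideration=0.5):
    """Decide whether to take a newly generated path without interacting.

    The robot avoids the interaction when the new path is shorter than,
    or at most `consideration` meters longer than, the original one.
    The 0.5 m default is an assumed value; raising it (e.g. when a
    physically impaired person is detected) favors detouring over
    interrupting, and a very large value effectively excludes the
    interaction altogether.
    """
    return new_length <= original_length + consideration
```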
Once all the conditions described above are satisfied, the navigation is stopped and the robot starts the verbal interaction. If the path remains blocked, the robot waits 5 seconds and asks for permission a second time. Alternatively, the system can freeze until the path is no longer blocked.
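The episode policy can be sketched as follows. The callables `path_blocked`, `ask_permission` and `wait` are injected stand-ins for the actual perception, speech and timing interfaces, so the policy itself can be exercised in isolation:

```python
def interaction_episode(path_blocked, ask_permission, wait,
                        retry_delay=5.0, max_requests=2):
    """Run the verbal-interaction episode once a human deadlock is confirmed.

    Ask for permission, wait `retry_delay` seconds, and ask again, up to
    `max_requests` times in total; afterwards, keep waiting (the freeze
    behaviour) until the path is free. Returns the number of requests
    actually made before navigation can resume.
    """
    requests = 0
    while path_blocked():
        if requests < max_requests:
            ask_permission()
            requests += 1
        wait(retry_delay)
    return requests
```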
Three cases where the robot navigates in a hallway (Figure 4) are presented: the first shows a free path, and the other two show people blocking it. Figure 4(a) shows the robot taking the shortest path in the absence of obstacles. In contrast, when the path is blocked, potentially by a human (see Figure 4(d)), the robot changes its initial plan and takes a longer path to reach its goal. In Figure 4(b), the robot detects that the path is blocked by humans and interacts with them in order to free it. Finally, it takes a new plan that is no longer than the initial one (Figure 4(e)).
It is worth mentioning that during our preliminary trials, when the robot asked for permission to pass (Figure 4(c)), one of the people standing moved out of the way while the other remained blocking the potential trajectory. Then, an unexpected human-robot collaboration took place: the person aware of the robot's intention asked the other one to free the way (see Figure 4(f)).
5. Conclusions and Future Work
In the present work, we have successfully incorporated HRI as part of the navigation strategy by performing situation assessment in a human-blocking context. In addition, the system has been deployed on a humanoid robot in a real-world scenario. However, several limitations have been identified during the preliminary tests. First of all, since the computation is not on-board, splitting the system across the network penalizes the performance of the people perception module. Secondly, the situation assessment module's working principle is not embedded into the navigation planner; integrating it with the ROS local planner could provide a more consolidated approach.
In terms of future work, further implementation is needed to take social and individual situations into account when modulating the specified level of consideration. In this way, we will be able to personalize the decision making for interrupting and asking permission to pass along a blocked path.
Acknowledgements. This work has received funding from the European Union's Horizon 2020 framework programme for research and innovation under the Industrial Leadership Agreement (ICT) No. 779942 (CROWDBOT).
- Althaus et al. (2004) Philipp Althaus, Hiroshi Ishiguro, Takayuki Kanda, Takahiro Miyashita, and Henrik I Christensen. 2004. Navigation for human-robot interaction tasks. In IEEE International Conference on Robotics and Automation, Vol. 2. IEEE, 1894–1900.
- Chen et al. (2017) Yu Fan Chen, Michael Everett, Miao Liu, and Jonathan P How. 2017. Socially aware motion planning with deep reinforcement learning. In Intelligent Robots and Systems (IROS), 2017 IEEE/RSJ International Conference on. IEEE, 1343–1350.
- Cheong et al. (2016) A Cheong, MWS Lau, E Foo, J Hedley, and Ju Wen Bo. 2016. Development of a robotic waiter system. IFAC-PapersOnLine 49, 21 (2016), 681–686.
- Danelljan et al. (2014) Martin Danelljan, Gustav Häger, Fahad Khan, and Michael Felsberg. 2014. Accurate scale estimation for robust visual tracking. In British Machine Vision Conference, Nottingham, September 1-5, 2014. BMVA Press.
- Ferrer et al. (2013) Gonzalo Ferrer, Anais Garrell, and Alberto Sanfeliu. 2013. Robot companion: A social-force based approach with human awareness-navigation in crowded environments. In Intelligent robots and systems (IROS), 2013 IEEE/RSJ international conference on. IEEE, 1688–1694.
- Helbing and Molnar (1995) Dirk Helbing and Peter Molnar. 1995. Social force model for pedestrian dynamics. Physical review E 51, 5 (1995), 4282.
- Howard et al. (2017) Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. 2017. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861 (2017).
- Mishraa et al. (2018) Neelima Mishraa, Dinesh Goyal, and Ashish Dutt Sharma. 2018. Issues in Existing Robotic Service in Restaurants and Hotels. (2018).
- Pandey and Gelin (2018) Amit K. Pandey and Rodolphe Gelin. 2018. A mass-produced sociable humanoid robot: pepper: the first machine of its kind. IEEE Robotics & Automation Magazine 99 (2018), 1–1.
- Perera et al. (2017) Vittorio Perera, Tiago Pereira, Jonathan Connell, and Manuela Veloso. 2017. Setting up pepper for autonomous navigation and personalized interaction with users. arXiv preprint arXiv:1704.04797 (2017).
- Ratsamee et al. (2013) Photchara Ratsamee, Yasushi Mae, Kenichi Ohara, Tomohito Takubo, and Tatsuo Arai. 2013. Human–robot collision avoidance using a modified social force model with body pose and face orientation. International Journal of Humanoid Robotics 10, 01 (2013), 1350008.
- Redmon et al. (2016) Joseph Redmon, Santosh Divvala, Ross Girshick, and Ali Farhadi. 2016. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 779–788.
- Sasaki and Nitta (2017) Yoko Sasaki and Jirou Nitta. 2017. Long-term demonstration experiment of autonomous mobile robot in a science museum. In Robotics and Intelligent Sensors (IRIS), 2017 IEEE International Symposium on. IEEE, 304–310.
- Suddrey et al. (2018) Gavin Suddrey, Adam Jacobson, and Belinda Ward. 2018. Enabling a Pepper Robot to provide Automated and Interactive Tours of a Robotics Laboratory. arXiv preprint arXiv:1804.03288 (2018).
- Trinh et al. (2015) Thanh Q Trinh, Christof Schroeter, Jens Kessler, and Horst-Michael Gross. 2015. ”Go Ahead, Please”: Recognition and Resolution of Conflict Situations in Narrow Passages for Polite Mobile Robot Navigation. In International Conference on Social Robotics. Springer, 643–653.
- Vasquez et al. (2014) Dizan Vasquez, Billy Okal, and Kai Arras. 2014. Inverse reinforcement learning algorithms and features for robot navigation in crowds: an experimental comparison. In IEEE-RSJ Int. Conf. on Intelligent Robots and Systems. 1341–1346.
- Vega et al. (2018) Araceli Vega, Luis J Manso, Ramón Cintas, and Pedro Núñez. 2018. Planning Human-Robot Interaction for Social Navigation in Crowded Environments. In Workshop of Physical Agents. Springer, 195–208.
- Zanlungo et al. (2011) Francesco Zanlungo, Tetsushi Ikeda, and Takayuki Kanda. 2011. Social force model with explicit collision prediction. EPL (Europhysics Letters) 93, 6 (2011).