Pedestrian-Robot Interaction Experiments in an Exit Corridor

02/15/2018 ∙ by Zhuo Chen, et al. ∙ Stevens Institute of Technology

The study of human-robot interaction (HRI) has received increasing research attention for robot navigation in pedestrian crowds. In this paper, we present an empirical study of pedestrian-robot interaction in a uni-directional exit corridor. We deploy a mobile robot moving in a direction perpendicular to that of the pedestrian flow, and install a pedestrian motion tracking system to record the collective motion. We analyze both the individual and collective motion of pedestrians, and measure the effect of the robot motion on the overall pedestrian flow. The experimental results show the effect of passive HRI: the pedestrians' overall speed is reduced in the presence of the robot, and the faster the robot moves, the lower the average pedestrian velocity becomes. The experimental results are qualitatively consistent with previously reported simulation results on the collective HRI effect. The study can be used to guide the future design of robot-assisted pedestrian evacuation algorithms.


I Introduction

With the rapid advances in autonomous mobile robot technologies, service and socially assistive robots have been woven into people’s daily lives. As robots perform tasks that involve human beings, the problem of human-robot interaction (HRI) arises and has received considerable attention during the past decade. Understanding the effect of HRI on human behavior, especially in human and robot collective motion, is of great significance for robot control and decision-making. In one recent work, HRI is considered for human-aware robot navigation, where the traditional motion planning approach is amended to respect the effect of HRI on human behavior in the presence of a mobile robot [1]. In addition to treating human beings as moving obstacles, criteria such as human comfort [2, 3], natural motion [4, 5], and socially-adaptive motion [6, 7] are taken into account for robot motion planning.

In studies of robot-assisted guidance where multiple robots work collaboratively to escort a group of people, HRI is commonly modeled as a repulsive or attractive force embedded in the social force model that governs human motion dynamics [8]. The robot acts either as a leader that directs a group of people towards the destination, or as a shepherd that regroups people escaping from the group formation. Human motion is thus steered through the attractive and repulsive forces exerted by the leader and shepherd robots, respectively. In [9], the parameters of the social-force-based HRI model were identified by analyzing data obtained from real-world experiments using an interactive learning approach.

HRI has also been studied for robot motion planning in large-scale human crowds. It is of particular interest how implicit interaction between humans and mobile robots affects human collective motion, so that desired crowd behaviors can be achieved by adjusting the motion of a robot inside the crowd. Specifically, inspired by self-organized phenomena in human collective motion, attempts have been made to improve human flow efficiency by introducing mobile robots into human crowds. The large-scale dynamics of human crowds can be modified as the crowds implicitly interact with the robot [10, 11]. A previous study [12] found that placing a mobile robot in crossing human flows can help create a diagonal stripe pattern in the flow as the human agents try to avoid collision with the robot. The stripe pattern is a desirable phenomenon for preventing congestion in crossing crowds. By adjusting their motion, the interacting robots can improve human flow efficiency under different flow densities. In our earlier work [13], simulations based on social force models were carried out to characterize HRI in a uni-directional exit corridor. An interacting robot was placed and programmed to move in a direction perpendicular to the flow direction to regulate the flow velocity in an evacuation situation in a uni-directional corridor.

The focus of this paper is an empirical study of passive HRI in an exit corridor for the purpose of robot-assisted pedestrian flow regulation. We conduct real-robot experiments with a group of pedestrians walking towards an exit door, and measure the effect of the robot motion on the overall pedestrian flow. Our experimental results show that in an exit corridor environment, a robot moving in a direction perpendicular to that of the uni-directional pedestrian flow can slow down the flow, and the faster the robot moves, the lower the average pedestrian velocity becomes. Furthermore, the effect of the robot on the pedestrian velocity is more significant when people walk at a faster speed. This empirical evidence is qualitatively consistent with the simulation results reported in our earlier paper [13], and can be used for future development of robot-assisted pedestrian flow regulation.

The rest of the paper is organized as follows: Section II introduces the passive pedestrian-robot interaction studied in the paper and presents the experimental setup and procedure. Section III presents the experimental results and discusses comparisons with previously reported simulation results. The conclusion and future work are presented in Section IV.

II Passive Pedestrian-Robot Interaction and Experimental Setup

II-A Passive Pedestrian-Robot Interaction

Fig. 1: The schematic diagram of pedestrian flow in an indoor corridor with robot-assisted regulation

We empirically study the regulation of human flow velocity in an exit corridor environment using an interacting robot. As shown in Fig. 1, a single interacting robot is added to the pedestrian flow and moves in a direction perpendicular to that of the flow at a pre-designated speed. Since the robot’s movement is in the way of the pedestrians heading towards the exit, the pedestrians see the robot and avoid potential collisions by either changing their speed or altering their trajectory. The pedestrian flow is regulated through the effect of passive HRI: the robot moves along a pre-designated trajectory at a pre-designated speed, and the pedestrians avoid potential collisions when passing by the robot, thus passively interacting with it. Note that we assume normal situations in which the pedestrians can clearly see the robot and their movement behaviors are rational. We do not consider panic situations or irrational behaviors.

Our hypothesis is that the speed of the robot moving in a direction perpendicular to that of a uni-directional pedestrian flow will affect the average speed of the pedestrians going through the corridor. The goal of the study is to test this hypothesis, measure the effect of the passive HRI by adjusting the robot’s speed on the pre-designated trajectory, and compare the effect with the case without the robot.

II-B Experimental Setup

II-B1 The Environment

The experiment was conducted in the Griffith Building of Stevens Institute of Technology on June 22, 2016. A pedestrian tracking system deployed in the building covers a 10-meter by 4-meter tracking area, as shown in Fig. 1, with an exit door at the end of the area. The boundaries of the tracking area are marked with red duct tape. The tracking system consists of 5 Microsoft Kinect sensors and 6 computers connected to a local area network. The pedestrian detection and tracking software OpenPTrack [14] is used.

OpenPTrack is a real-time distributed people detection and tracking software package capable of utilizing RGB-D images from the Kinect network. It runs on the Robot Operating System (ROS) in Ubuntu 14.04. During the experiment, each of the 5 Kinect sensors is connected to a computer, and the image stream obtained by the Kinect sensor is fed into an OpenPTrack detection program running on that computer. The detection program on each computer processes the images and transmits the detection results to a central computer over the local area network. The central computer fuses the received data and records the tracking results.
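As an illustration of the recording step on the central computer, the following minimal sketch subscribes to an OpenPTrack output topic under ROS and appends every fused detection to a CSV file. The topic name, message type, and field names used here are assumptions for illustration and may differ from the actual OpenPTrack interface.

```python
#!/usr/bin/env python
# Minimal sketch of a track-logging node on the central computer. The topic
# name ("/tracker/tracks") and the opt_msgs/TrackArray fields are assumptions
# made for illustration; consult the OpenPTrack documentation for the actual
# message definitions.
import csv

import rospy
from opt_msgs.msg import TrackArray


def on_tracks(msg, writer):
    """Write one CSV row per tracked person: timestamp, track id, x, y."""
    stamp = msg.header.stamp.to_sec()
    for track in msg.tracks:
        writer.writerow([stamp, track.id, track.x, track.y])


def main():
    rospy.init_node("track_recorder")
    with open("pedestrian_tracks.csv", "w") as log_file:  # hypothetical output path
        writer = csv.writer(log_file)
        writer.writerow(["t", "id", "x", "y"])
        rospy.Subscriber("/tracker/tracks", TrackArray, on_tracks, callback_args=writer)
        rospy.spin()


if __name__ == "__main__":
    main()
```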

II-B2 The Mobile Robot

We used a customized Adept Pioneer P3-DX mobile robot, as shown in Fig. 2.

Fig. 2: The customized Adept Pioneer P3-DX mobile robot.

The robot motion dynamics are described by a single integrator model, that is

$\dot{\mathbf{p}} = \mathbf{u}$,   (1)

where $\mathbf{p} = (x_r, y_r)$ denotes the robot position and $\mathbf{u} = (u_x, u_y)$ denotes the robot velocity control. The robot motion is controlled to follow a pre-designed trajectory that is perpendicular to the pedestrian flow direction. The robot velocity in the $y$ direction (across the corridor) is controlled to track a sinusoidal signal, and the velocity control in the $x$ direction (along the flow) remains $0$. That is,

$u_y(t) = A\omega\cos(\omega t)$,   (2a)
$u_x(t) = 0$,   (2b)

where $A$ is the amplitude of the resulting simple harmonic motion, which is set to 2 m (half the corridor width), and $\omega$ is the robot motion frequency. This control law ensures that the robot moves within the boundary of the area, and the upper limit of the commanded speed is $A\omega$.
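A minimal sketch of this open-loop velocity command, based on the reconstructed form of (2a)-(2b) above, is given below; the control-loop timing and the send_velocity interface to the robot base are illustrative assumptions rather than the actual implementation.

```python
import math

# Open-loop velocity command implementing (2a)-(2b): zero velocity along the
# flow direction and a sinusoidal velocity across the corridor. The peak speed
# is A * OMEGA, and integrating the command gives y(t) = A * sin(OMEGA * t),
# so the robot stays within +/- A (i.e., 2 m) of its start position.
A = 2.0      # amplitude of the harmonic motion [m], half the corridor width
OMEGA = 0.4  # motion frequency [rad/s]; 0.1 in the slow cases, 0.4 in the fast cases
DT = 0.1     # control period [s], an illustrative choice


def velocity_command(t):
    """Return (u_x, u_y) at time t according to (2a)-(2b)."""
    return 0.0, A * OMEGA * math.cos(OMEGA * t)


def run(send_velocity, duration=60.0):
    """Stream velocity commands to the robot base; send_velocity is a hypothetical interface."""
    t = 0.0
    while t < duration:
        u_x, u_y = velocity_command(t)
        send_velocity(u_x, u_y)
        t += DT
```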

II-B3 Tracking System Calibration


Fig. 3: Tracking system calibration: estimation of the coordinate transformation from the checkerboard frame to the ground frame. (a) Walking pattern in the checkerboard reference frame; (b) comparison between measurements and ground truth; (c) measurement error magnitude histogram.


Fig. 4: Experimental procedure. (a) People start to walk in the bounded area; (b) people walk toward the robot; (c) people interact with the moving robot; (d) people finish one round and walk back to the starting line.

After the sensors are mounted in place, both the intrinsic and extrinsic parameters of the tracking system need to be determined. Default intrinsic parameters for the Kinect sensors were used without performing any intrinsic calibration. To calibrate the extrinsic parameters of the tracking system, we printed a 6-by-7 checkerboard with a cell width and height of 75 mm. The networking capabilities of ROS enable the Kinect sensors’ poses to be estimated as follows:

  • Step (1): Let a pair of sensors both see the checkerboard to calibrate their relative pose;
  • Step (2): Find one calibrated sensor and one uncalibrated sensor, and perform Step (1) with this pair;
  • Step (3): Repeat Step (2) until all sensors are calibrated;
  • Step (4): Place the checkerboard on the ground where it can be seen by at least one sensor to specify the checkerboard reference frame;
  • Step (5): Perform calibration refinement using people detections [14].
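The pairwise estimates from Steps (1)-(3) can be chained into a common frame by composing homogeneous transforms. The short Python sketch below illustrates this composition with placeholder matrices; the sensor names and the dictionary of pairwise poses are hypothetical and do not reflect the actual OpenPTrack calibration output format.

```python
import numpy as np

# Illustrative chaining of pairwise extrinsic estimates (Steps (1)-(3)).
# pairwise[(a, b)] is assumed to map points from frame b to frame a; the
# identity matrices below are placeholders for real calibration results.
pairwise = {
    ("checkerboard", "kinect1"): np.eye(4),
    ("kinect1", "kinect2"): np.eye(4),
    ("kinect2", "kinect3"): np.eye(4),
}


def compose_along_chain(chain):
    """Compose pairwise transforms along a path starting at the checkerboard frame."""
    T = np.eye(4)
    for parent, child in chain:
        T = T @ pairwise[(parent, child)]
    return T


# Example: the pose of kinect3 in the checkerboard frame, calibrated through
# kinect1 and kinect2.
T_checkerboard_kinect3 = compose_along_chain(
    [("checkerboard", "kinect1"), ("kinect1", "kinect2"), ("kinect2", "kinect3")]
)
```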

By performing the 5 steps above, the sensor poses are estimated with respect to the checkerboard reference frame defined by the pose of the checkerboard in Step (4), which does not necessarily coincide with the ground reference frame shown in Fig. 1. Here we define the transformation $T$ that maps the homogeneous coordinates of points in the checkerboard reference frame to those in the ground reference frame as

$T = \begin{bmatrix} R & t \\ \mathbf{0}^{\top} & 1 \end{bmatrix}$,   (3)

where $t$ represents the translation and $R$ represents the rotation from the checkerboard frame to the ground frame.

To determine the transformation $T$, the vertices of a unit-distance square grid graph are marked on the tracking area ground. The ground truth positions of the 55 vertices, marked with circles in Fig. 3(b), are known from the grid layout and are denoted in homogeneous ground-frame coordinates by $q_i$, $i = 1, \dots, 55$. A person is asked to walk through every marked vertex of the grid, stopping at each mark and remaining still for a second or two before proceeding to the next one. Fig. 3(a) shows an example pattern of the trajectory recorded by the tracking system.

Once the trajectory is recorded, the measured positions where the subject stops, i.e., the low-velocity points, are obtained by filtering out those points of the trajectory whose speed exceeds a small threshold. These low-velocity points are clustered, and the centroid of each cluster is marked in Fig. 3(b). The $i$-th low-velocity centroid, expressed in homogeneous checkerboard-frame coordinates, is denoted by $\tilde{q}_i$. We solve for the transformation $T$ by minimizing the sum of squared errors between the transformed low-velocity centroids and their corresponding ground truth positions, that is,

$T^{*} = \arg\min_{T} \sum_{i=1}^{55} \left\| q_i - T\,\tilde{q}_i \right\|^{2}$.   (4)
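A minimal sketch of this calibration fit is given below: the low-velocity centroids are extracted with a simple speed threshold and k-means clustering, and (4) is then solved in closed form with an SVD-based (Kabsch) alignment. The threshold value, the clustering method, and the closed-form solver are illustrative assumptions; the paper specifies the objective (4) but not how it is solved.

```python
import numpy as np
from sklearn.cluster import KMeans


def low_velocity_centroids(t, xy, speed_threshold=0.2, n_vertices=55):
    """Extract one centroid per grid vertex from a recorded walking trajectory.

    t: (N,) timestamps [s]; xy: (N, 2) positions in the checkerboard frame [m].
    The threshold and the clustering method are illustrative choices.
    """
    v = np.diff(xy, axis=0) / np.diff(t)[:, None]        # finite-difference velocity
    slow = xy[1:][np.linalg.norm(v, axis=1) < speed_threshold]
    labels = KMeans(n_clusters=n_vertices, n_init=10).fit_predict(slow)
    # In practice the cluster-to-vertex correspondence must still be established,
    # e.g., by walking the grid in a known order.
    return np.array([slow[labels == k].mean(axis=0) for k in range(n_vertices)])


def fit_ground_transform(q_tilde, q):
    """Solve (4) in closed form: find T with q_i ~= T q_tilde_i (planar rigid transform).

    q_tilde: (N, 2) measured centroids (checkerboard frame);
    q: (N, 2) corresponding ground-truth vertices (ground frame).
    Returns a 3x3 homogeneous matrix.
    """
    mu_s, mu_t = q_tilde.mean(axis=0), q.mean(axis=0)
    H = (q_tilde - mu_s).T @ (q - mu_t)                   # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against a reflection
    R = Vt.T @ D @ U.T
    t_vec = mu_t - R @ mu_s
    T = np.eye(3)
    T[:2, :2], T[:2, 2] = R, t_vec
    return T
```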

II-C Experiment Procedure

After the tracking system was set up and calibrated, we conducted experiments with the 6 cases listed in Table I. We compare HRI in two testing scenarios: people walking at a normal speed (Cases #1, #2, and #3) and at a faster speed (Cases #4, #5, and #6). In each of the two scenarios, we compare pedestrian motion with 1) no robot, 2) a slow-moving robot, and 3) a fast-moving robot. The mobile robot, whose motion is subject to (1) and (2), has a pre-set angular frequency of ω = 0.1 rad/s in the slow-moving cases (Cases #2 and #5) and ω = 0.4 rad/s in the fast-moving cases (Cases #3 and #6).

In each case, there are 11 participants who have no knowledge of the aim or nature of the experiment. As shown in Table I, in Cases #1 and #4 the pedestrians walk without robot intervention, while in the remaining cases a mobile robot whose motion is subject to (1) is introduced into the environment. The participants are instructed to walk within the boundaries of the tracking area at one of two walking speeds, i.e., normal or faster. As shown in Fig. 4, the participants waiting on the left side of the tracking area receive an instruction to walk and then enter the tracking area. Once they exit the tracking area from the right side of the corridor, they walk back through another pathway next to the tracking area and get ready for the next run. Each case consists of 3 repeated runs. Fig. 4 shows snapshots of a complete run of Case #3.

TABLE I: Experimental scenarios and parameters

Case #   # of People (Human)   Instructed Walking Speed (Human)   Angular Freq. (rad/s) (Robot)   Start Position (Robot)
1        11                    Normal                             N/A                             N/A
2        11                    Normal                             0.1                             (6.6, 0)
3        11                    Normal                             0.4                             (6.6, 0)
4        11                    Faster                             N/A                             N/A
5        11                    Faster                             0.1                             (6.6, 0)
6        11                    Faster                             0.4                             (6.6, 0)

We present the results from collected data in the next section.

III Experimental Results

In this section, we present the observed HRI behaviors and summary statistics of the collective HRI, and compare the results with the numerical simulation results reported in our previous work [13].

III-A Individual HRI Behaviors


Fig. 5: HRI behavior with normal walking speed of pedestrians (Case #2). Snapshots at (a) t = 46.0 s; (b) t = 47.0 s; (c) t = 48.0 s; (d) t = 49.0 s.

Fig. 6: HRI behavior with fast walking speed of pedestrians (Case #5). Snapshots at (a) t = 39.0 s; (b) t = 39.6 s; (c) t = 40.2 s; (d) t = 40.8 s.
Fig. 7: Single pedestrian and robot interaction trajectories. (a) Case #2, run #2, pedestrian #10; (b) Case #3, run #2, pedestrian #1; (c) Case #6, run #2, pedestrian #7.

We recorded pedestrian trajectories using the OpenPTrack software and a video recording system, and observed the HRI behaviors when people get close to the robot. The pedestrian motion behavior appears to change according to the pedestrians' anticipation of a potential collision with the robot, the local density of the area, and whether there is room to maneuver. We also observe that a change in the robot's speed affects the humans' motion. We first describe individual HRI behaviors in this subsection; the HRI effect on collective motion and the average flow velocity is presented in the next subsection.

Fig. 5 shows a temporal sequence of experiment snapshots in Case #2, where the pedestrians walk at normal speed. It can be seen that the pedestrians approach the interacting robot in the first snapshots and then begin to avoid it; in the later snapshots, some pedestrians adjust their walking directions to avoid collision with the robot. Fig. 6 shows a temporal sequence of experiment snapshots in Case #5, where the pedestrians walk at a faster speed.

To understand the motion behavior of pedestrians while interacting with the robot, we further investigate the change of trajectory and velocity of individual pedestrians. Fig. 7(a) and Fig. 7(b) show the trajectories of the pedestrian and the interacting robot in Case #2 and Case #3, respectively. The circles and stars represent the positions of the pedestrian and the robot at 1-second intervals. Each arrow represents the velocity, with its length proportional to the velocity magnitude. From Fig. 7(a), one can see that the pedestrian adjusts his/her walking direction partway along the corridor to avoid the robot upon foreseeing a potential collision; meanwhile, the velocity of the pedestrian changes only slightly. One can see from Fig. 7(b) that the pedestrian adjusts his/her walking direction twice and meanwhile slows down to avoid collision with the robot. Fig. 7(c) shows the trajectories of the pedestrian and the interacting robot in Case #6, from which one can see that the pedestrian again adjusts his/her walking direction to avoid the robot. The results demonstrate that both the trajectory and the velocity of the pedestrians can be affected by HRI.

III-B Summary Statistics for Collective HRI

We have collected pedestrian trajectory data for all 6 experimental cases listed in Table I. One set of such trajectories is plotted in Fig. 8(a) for the 3 runs of Case #2. Each data point on a trajectory has a timestamp. The velocity of each pedestrian is calculated by performing a backward difference on the position signal and then applying a moving average filter to the resulting velocity signal.
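A minimal sketch of this velocity estimate is shown below; the moving-average window length is an assumption, since the paper does not report the value used.

```python
import numpy as np


def estimate_velocity(t, xy, window=5):
    """Backward-difference velocity followed by a moving-average filter.

    t: (N,) timestamps [s]; xy: (N, 2) tracked positions [m].
    Returns an (N-1, 2) array of smoothed velocity samples.
    The window length is an illustrative choice.
    """
    v = np.diff(xy, axis=0) / np.diff(t)[:, None]   # backward difference
    kernel = np.ones(window) / window
    # Smooth each velocity component; mode="same" keeps the signal length.
    return np.stack(
        [np.convolve(v[:, k], kernel, mode="same") for k in range(2)], axis=1
    )
```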

To understand the effect of HRI on the collective motion of pedestrians, we plot the average pedestrian velocity profile, obtained by averaging all individual pedestrians' velocities along the x axis for each case. Fig. 8(b) shows the testing scenario with normal pedestrian walking speed (i.e., Cases #1, #2, and #3), and Fig. 8(c) shows the scenario with faster walking speed (i.e., Cases #4, #5, and #6). We can see that the average pedestrian velocities are lowered in the presence of the robot, and the faster the robot moves, the lower the average pedestrian velocity becomes. This phenomenon is more visible around the robot area (i.e., the vertical dashed line of robot positions shown in the figure) than in other areas (e.g., the beginning and the end of the pedestrian trajectories), which indicates that an HRI region exists around the robot positions.
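One way to compute such a profile is to bin every velocity sample by its x position and average within each bin, as in the sketch below; the bin width is an illustrative choice.

```python
import numpy as np


def velocity_profile(x, vx, corridor_length=10.0, bin_width=0.5):
    """Average velocity along the flow (vx) as a function of x position.

    x: (N,) positions along the corridor [m]; vx: (N,) velocity samples [m/s].
    Returns (bin_centers, mean_velocity); bin_width is an illustrative choice.
    """
    edges = np.arange(0.0, corridor_length + bin_width, bin_width)
    idx = np.digitize(x, edges) - 1
    profile = np.full(len(edges) - 1, np.nan)
    for k in range(len(edges) - 1):
        mask = idx == k
        if np.any(mask):
            profile[k] = vx[mask].mean()
    return 0.5 * (edges[:-1] + edges[1:]), profile
```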

To further understand the collective pedestrian motion, we plot the velocity distribution of the pedestrians in each case in the box plots of Fig. 9, where the central rectangle spans the first quartile to the third quartile, the segment inside the rectangle shows the median, and the whiskers above and below the box show the locations of the minimum and maximum. The trends of HRI are clearly visible: the first quartile, the median, and the third quartile consistently show that 1) pedestrian velocities are lowered in the presence of the robot, and 2) the faster the robot moves, the lower the average pedestrian velocity becomes.
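The box plots can be reproduced from the pooled speed samples with a few lines of plotting code, as sketched below; speeds_by_case is a hypothetical mapping from case number to the list of speed samples.

```python
import matplotlib.pyplot as plt


def plot_speed_boxes(speeds_by_case):
    """Draw one box per case from pooled pedestrian speed samples (m/s).

    speeds_by_case: hypothetical dict mapping case number -> list of speeds.
    """
    cases = sorted(speeds_by_case)
    plt.boxplot([speeds_by_case[c] for c in cases],
                labels=["Case #%d" % c for c in cases])
    plt.ylabel("Pedestrian speed (m/s)")
    plt.show()
```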


Fig. 8: (a) Trajectories recorded from all 3 runs of Case #2; the trajectory followed by each pedestrian is represented by a solid curve, and the vertical dotted line denotes the robot trajectory. Average pedestrian velocity vs. x position for (b) normal walking speed and (c) faster walking speed of pedestrians.

Fig. 9: Box plots of pedestrian velocities for each case. (a) Normal walking speed; (b) faster walking speed; (c) average pedestrian velocity comparison with and without the presence of a moving robot.

III-C Comparison with Simulations and Discussions

To compare the experimental results with the numerical simulation results presented in our previous work [13] (Figures 4 and 5 therein), we calculate the pedestrian flow velocity by averaging the velocities over all pedestrians in each case within an HRI region, defined in the same way as in the simulations (i.e., the rectangular region within 2 meters sideways from the robot path). Fig. 9(c) shows the average pedestrian speed with no robot and in the presence of a slow- and a fast-moving robot, for both the normal and the faster pedestrian speed scenarios. We can see that the slow-down effect of the pedestrian-robot interaction is more significant for the group of faster-moving pedestrians than for the slower-moving pedestrians. In comparison with the simulations reported in our previous work [13], the trends of HRI clearly match. Specifically, the following claims are supported by both the simulations and our HRI experiments:

  • The pedestrian speeds are affected by the presence of a moving robot;

  • A moving robot slows down a uni-directional flow. The faster the robot moves, the lower the average pedestrian velocity becomes;

  • The effect of the robot on the pedestrian velocity is more significant when people walk at a faster speed.

For the tested scenarios, the results clearly show qualitative agreement of the HRI effect in an exit corridor with numerical simulations based on social force models. The results indicate that it is promising to use a moving robot to regulate pedestrian flows, specifically, to slow down the traffic to a desired average speed. Note that in evacuation scenarios, being able to slow down the traffic and avoid the faster-is-slower effect is often desirable.

With more time and resources, we would have collected more experimental data. For example, we could vary the pedestrian density of the area and compare the HRI effect under different density setups. Nevertheless, the obtained data can be used to tune simulation models for a better match with real-world situations, which will be within the scope of our future research. We also plan to conduct future HRI experiments to validate the learning-based robot-assisted pedestrian regulation schemes presented in our earlier work [13].

IV Conclusion and Future Work

In this paper, we presented a human-robot interaction experiment in an exit corridor. We investigated pedestrian behavior when the pedestrians interact with a robot moving in a direction perpendicular to that of the uni-directional pedestrian flow. Using the experimental data, we not only studied individual HRI behaviors, but also analyzed collective HRI and the effect of the robot on the average pedestrian flow. We compared the HRI effect on the pedestrian flow with that obtained from numerical simulations based on social force models reported in our earlier work [13]. We found that the experimental HRI effect on the collective pedestrian flow is qualitatively consistent with the numerical simulations. Future work includes calibrating simulation models using the experimentally obtained data, and developing a learning-based robot motion planner to regulate the collective speed of the pedestrians to a desired level.

Acknowledgment

The authors would like to thank the undergraduate students Peter Smith and Randall Devitt, and the graduate student Muhammad Fahad, for their assistance in collecting data during the human-robot experiments.

References

  • [1] T. Kruse, A. K. Pandey, R. Alami, and A. Kirsch, “Human-aware robot navigation: A survey,” Robotics and Autonomous Systems, vol. 61, no. 12, pp. 1726–1743, 2013.
  • [2] M. Luber, L. Spinello, J. Silva, and K. O. Arras, “Socially-aware robot navigation: A learning approach,” in IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 902–907, 2012.
  • [3] D. G. Macharet and D. A. Florencio, “Learning how to increase the chance of human-robot engagement,” in IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2173–2179, 2013.
  • [4] M. Bennewitz, W. Burgard, G. Cielniak, and S. Thrun, “Learning motion patterns of people for compliant robot motion,” The International Journal of Robotics Research, vol. 24, no. 1, pp. 31–48, 2005.
  • [5] P. Henry, C. Vollmer, B. Ferris, and D. Fox, “Learning to navigate through crowded environments,” in IEEE International Conference on Robotics and Automation, pp. 981–986, 2010.
  • [6] A. K. Pandey and R. Alami, “A framework for adapting social conventions in a mobile robot motion in human-centered environment,” in International Conference on Advanced Robotics, pp. 1–8, 2009.
  • [7] B. Kim and J. Pineau, “Socially adaptive path planning in human environments using inverse reinforcement learning,” International Journal of Social Robotics, vol. 8, no. 1, pp. 51–66, 2016.
  • [8] A. Garrell and A. Sanfeliu, “Local optimization of cooperative robot movements for guiding and regrouping people in a guiding mission,” in IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3294–3299, 2010.
  • [9] G. Ferrer, A. Garrell, and A. Sanfeliu, “Robot companion: A social-force based approach with human awareness-navigation in crowded environments,” in IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1688–1694, 2013.
  • [10] J. A. Kirkland and A. A. Maciejewski, “A simulation of attempts to influence crowd dynamics,” in IEEE International Conference on Systems, Man and Cybernetics, vol. 5, pp. 4328–4333, 2003.
  • [11] B. D. Eldridge and A. A. Maciejewski, “Using genetic algorithms to optimize social robot behavior for improved pedestrian flow,” in IEEE International Conference on Systems, Man and Cybernetics, vol. 1, pp. 524–529, 2005.
  • [12] K. Yamamoto and M. Okada, “Control of swarm behavior in crossing pedestrians based on temporal/spatial frequencies,” Robotics and Autonomous Systems, vol. 61, no. 9, pp. 1036–1048, 2013.
  • [13] C. Jiang, Z. Ni, Y. Guo, and H. He, “Robot-assisted pedestrian regulation in an exit corridor,” in IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 815–822, 2016.
  • [14] M. Munaro, F. Basso, and E. Menegatti, “OpenPTrack: Open source multi-camera calibration and people tracking for RGB-D camera networks,” Robotics and Autonomous Systems, vol. 75, pp. 525–538, 2016.