Response of Vulnerable Road Users to Visual Information from Autonomous Vehicles in Shared Spaces

06/16/2020 ∙ by Walter Morales Alvarez, et al. ∙ Johannes Kepler University Linz ∙ Universidad Carlos III de Madrid

Completely unmanned autonomous vehicles (AVs) have been anticipated for some time. Initially, they are expected to drive only under certain conditions on some roads, and advanced functionality is required to cope with the ever-increasing challenges of safety. To enhance the public's perception of road safety and trust in new vehicular technologies, we investigate in this paper the effect of several interaction paradigms with vulnerable road users (VRUs) by developing and applying algorithms for the automatic analysis of pedestrian body language. We assess behavioral patterns and determine the impact of the coexistence of AVs and other road users on general road safety in a space shared by VRUs and vehicles. Results showed that the implementation of visual communication cues for interacting with VRUs is not necessarily required for a shared space in which informal traffic rules apply.


I Introduction

The arrival of driverless vehicles has been anticipated for some time. Several aspects of their convenience and safety have been addressed, highlighting, for example, that replacing the human driver with automation will lead to more efficient driving patterns, bringing the environmental benefits of decreased traffic congestion as well as public safety improvements due to fewer traffic-related injuries [olaverri2016autonomous]. Many vehicles are already equipped with technologies that enable self-driving automation, such as lane-keeping assistance and automated braking. In the near future, highly autonomous, complex dynamical systems will be mature enough to implement intelligent autonomous vehicles (AVs).

Competition has arisen among automakers to develop autonomous vehicles at ever higher levels of automation and to commercialize them. BMW showed an autonomous concept car at CES 2016 and announced its initiative to include vehicle automation as part of its iNEXT project [1]. General Motors launched the Cadillac CT6 in 2018, its first Level 2 autonomous vehicle, which possesses a hands-free driving system called Super Cruise [2]. At the same time, Renault developed its own autonomous concept vehicle, the EZ-GO, presented at the Geneva Motor Show in 2018 [3]. The company also announced that in 2020 it would present a fleet of vehicles equipped with a significant amount of autonomous functionality. Finally, Nissan introduced its Serena model, featuring a single-lane autonomous driving system called ProPILOT, currently available in Japan [4]. Despite the many advances taking place all over the world that bring society closer to self-driving vehicles, there is still no completely autonomous car on the road. Most are concepts and prototypes of fully automated vehicles that drive in controlled environments (e.g., university campuses) under certain conditions and on predetermined roads.

Road safety is not determined only by the technology of the autonomous vehicles themselves; a significant aspect of safety lies in the interaction between automated vehicles and vulnerable road users (VRUs).

Therefore, it is crucial to assess patterns regarding complexity and risk by judging and anticipating the actions of the different actors in the system in order to determine the rules for their co-existence. In this context, the perceived trustworthiness of new vehicular technologies will be jeopardized if other road users are not able to determine the authenticity of the information provided by the autonomous vehicle [5].

In this paper we aim to increase road safety through approaches that augment awareness of the surrounding environment for both road users and the automation. We focus on VRUs, as they cannot make visual contact with a driver in a driverless vehicle and must therefore turn to novel or unfamiliar ways of understanding the decisions made by the vehicle, if they are able to do so at all.

To address this, we investigate interaction strategies by applying, in field tests, the algorithms for the automatic analysis of pedestrian body language presented in [6] to determine the impact of the coexistence of AVs and other road users on general road safety. We focus on crossing behavior that occurs relatively close to and directly in front of the AV, as it is relevant for safety and indicates that pedestrians trust the technology.
We therefore define the following research question:

Are pedestrians more likely to (unnecessarily) pause or stop and yield the roadway to a driverless vehicle when the vehicle does not signal to them that they have been seen?

From this question we form the following null hypothesis:

H0: There is no relationship between measured pedestrian crossing behavior and driverless vehicle communication signals.
We performed the field tests in shared spaces, in which a traditional safety infrastructure to guide VRUs does not exist, so that everyone is forced to become more alert and ultimately more cooperative [7]. This scenario applies, for example, to the "last mile" with automated delivery robots.

The remainder of the paper is organized as follows: the next section describes related work in the field; Section III details the field test; Section IV describes the modules that acquire the pedestrian data, allowing a quantitative study of the interaction with the autonomous vehicle; Section V presents the method used to assess the collected data; Section VI presents the obtained results; and, finally, Section VII discusses and concludes the work.

II Related Work

As previously mentioned, communication protocols that make the interaction of driverless vehicles and VRUs possible are necessary to foster trust in the automation [8]. A large body of literature has been dedicated to the study and interpretation of pedestrian behavior during interactions with vehicles. For example, the authors in [9] identified the parameters that affect pedestrians at crosswalks in order to predict their intentions, addressing the issue from two distinct perspectives, the pedestrian's and the driver's.

In this study the authors concluded that interactions were based on a given vehicle's distance rather than on Time To Collision (TTC), corroborating the results presented in [10]. The results from Hamaoka et al. [11] showed that there are several physical locations on the street where pedestrians seek to confirm the proximity of a vehicle to ensure their safety; an indicator of this was the frequency of head turning, which was higher at the edges and middle of crosswalks. Although these studies quantitatively established the main parameters governing pedestrian crossing decisions, they were based on conventional traffic situations with manned vehicles.

In recent studies focusing on the interaction between autonomous vehicles and pedestrians, it was shown that people felt more comfortable crossing the street when some form of response from the vehicle was presented (e.g., eye contact), similar to the interaction that occurs with manually driven vehicles [12], [13]. In the same line of research, a survey by the League of American Bicyclists concluded that the inability of pedestrians and cyclists to communicate and make eye contact with a driverless vehicle increased perceived risk [14].

The authors in [15] measured the importance of communication interfaces between the pedestrian and the autonomous vehicle. For this purpose they used a remote-controlled golf cart with an LED word display that explicitly indicated when pedestrians should cross the road in front of it, and they developed a simulator to test human behavior in this setting. Basing their results on a qualitative data collection method, they showed that trust in the technology depends on prior knowledge about AVs and on the distance between pedestrian and vehicle.

Further, different early-stage display concepts for interfaces were evaluated by means of crowdsourcing in [16], and more advanced interfaces and communication protocols were tested in [17], using, for example, images that follow pedestrians [18] or implicit forms of communication that included vehicle motion patterns such as braking [19].
Important groundwork for our line of research was laid in [20], which identified factors that potentially influenced the perception of a road situation as safe in an environment in which vehicles operated with full driving automation (Level 5) in a public space. The analysis of recorded videos and subjective qualitative data established that there were several levels of trust, uncertainty, and a certain degree of fear among participants. However, the existence of a communication system to support the interaction with the driverless vehicles was evaluated positively.

Although previous studies showed that adding an external monitor or screen to an autonomous vehicle helped VRUs gather relevant information to properly identify the road situation and make the right choices [20], other studies, such as [21], showed that such displays are not decisive in defining pedestrian behavior. Moreover, in [22] and [23] the authors concluded that people's reactions and behavior are determined more by the distance and speed of the vehicle than by the interface presented by the vehicle [24].

All the previous studies focused on defining the main factors that influence pedestrian crossing behavior. However, most of them relied on qualitative data, and the studies that were based on quantitative data used a Wizard of Oz (WOz) paradigm to mimic the behavior of the intelligent vehicle. We contribute to the state of the art by presenting quantitative data collected with an unmanned vehicle in a shared space, as explained in the next section.

III Field Test Description

In order to obtain behavioral patterns of different individuals in the environment, we applied the algorithms described in [6] and identified the poses adopted by pedestrians in the urban environment when they were exposed to a moving AV. The environment consisted of a shared space in which the segregation of VRUs and vehicles was minimized. In such a scenario, traffic relies more on the informal rules of foot traffic.

The selected scenario was the campus of the Universidad Carlos III de Madrid. The campus contains several green spaces that are connected to the town of Leganés; consequently, the pedestrians were residents of the area as well as students from the university. In this scenario the iCab autonomous vehicle (see [25]) passed multiple times along a predefined 30-meter route through a perpendicular flow of pedestrians, creating many opportunities for them to cross in front of the AV. However, the flow of pedestrians could move in multiple directions, such that crossing in front of the AV was not absolutely essential.

The vehicle was equipped with an external Human Machine Interface (HMI) that conveyed several messages to pedestrians to indicate whether they had been detected (see Figure 1). During the experiment, the vehicle's sensors acquired the data necessary to analyze pedestrian behavior.

(a)
(b)
Fig. 1: (a) Autonomous vehicle displaying closed eyes to indicate that VRUs have not been detected. (b) Open eyes on the display indicating detection of VRUs. The image displayed by the AV is depicted in the figure's lower right corner.

The experiments were conducted over two days, during which pedestrians were continuously exposed to the iCab and data was recorded for further processing and analysis. These tests yielded 36 videos with 135 pedestrians interacting with the vehicle. In order to mimic real road conditions as closely as possible, pedestrians were not made aware of the data collection. To ensure the safety of the pedestrians in case of a failure, a remote control at a fixed off-site location made it possible to stop the vehicle in an emergency.

The following parameters were recorded to determine road safety as well as the response to displayed messages:

  • distance between pedestrians and AV

  • vehicle’s speed along its path

  • head and body pose

Using all of the above data, we could obtain pedestrian behavioral patterns indicated by their pose and distance to the AV, as well as road safety-related information such as the TTC (calculated from the vehicle speed and the pedestrian coordinates). This information was analyzed according to the image displayed on the vehicle interface.
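Since the TTC is obtained from the vehicle speed and the pedestrian coordinates, the computation reduces to dividing the current pedestrian-vehicle distance by the vehicle speed. A minimal Python sketch under that assumption (the paper publishes no code, and the function name is ours):

    def time_to_collision(distance_m: float, speed_mps: float) -> float:
        """TTC in seconds: the time the vehicle needs to cover the current
        distance to the pedestrian at its present speed."""
        if speed_mps <= 0.0:
            return float("inf")  # a stopped vehicle never reaches the pedestrian
        return distance_m / speed_mps

    # Example: 6.14 m at 1.2 m/s gives a TTC of about 5.12 s.
    print(round(time_to_collision(6.14, 1.2), 2))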

The design of the HMI relied on the description in [20]. It was developed in C++ and integrated into the vehicle's operating system through the Robot Operating System (ROS). The vehicle detected pedestrians in its proximity, taking into account the degree of rotation of the vehicle, and then activated different displays depending on whether it had detected a pedestrian or not. The algorithms were trained to analyze eye contact, facial expression, and head pose to determine crossing behavior depending on the message conveyed. To this end, the following experiments were performed (a sketch of the display-selection logic appears after the condition descriptions below):

III-A Baseline Condition

A performance baseline in which no message was displayed was established to quantify changes in pedestrian behavior.

III-B Red-Green Sign

Inspired by traditional traffic-light color coding, a red screen indicated to pedestrians that it was not safe to cross, and a green one signaled that crossing was safe.

III-C Open-Closed Eyes

An additional set of images mimicked driver behavior as a strategy to ensure that VRUs understood the decisions made by the vehicle. The display showed a pair of open eyes indicating that the pedestrian had been detected and could cross, or a pair of closed eyes indicating that the vehicle had not noticed the pedestrian.
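The exact display-control code is not published; the following minimal Python sketch only illustrates how the three conditions above might map the detection result to a displayed image. The function name, the image file names, and the mapping of detection to green/open eyes are our reading of the condition descriptions.

    def select_display(condition: str, pedestrian_detected: bool):
        """Return the image to show for the current experimental condition,
        or None under the baseline condition (blank screen)."""
        if condition == "baseline":
            return None  # baseline: no message displayed
        if condition == "red_green":
            # Green signals that crossing is safe (pedestrian detected),
            # red signals that it is not.
            return "green.png" if pedestrian_detected else "red.png"
        if condition == "eyes":
            # Open eyes signal detection, closed eyes signal no detection.
            return "open_eyes.png" if pedestrian_detected else "closed_eyes.png"
        raise ValueError(f"unknown condition: {condition}")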

IV Algorithm Implementation

IV-A Pose Identification

Relying on the approach presented in [6], the specific pose of a pedestrian was identified using the OpenPose open-source library developed by CMU's Perceptual Computing Lab [26], [27], [28], which designed and trained a multi-stage convolutional neural network that determines the key points of individual poses in an RGB image and renders the poses as seen in Figure 2. The network computes heatmaps indicating where the keypoints of a pose are most likely to be found and connects them using Part Affinity Fields (PAFs), which preserve the location and orientation of people's joints. Using the cameras mounted on the vehicle and the OpenPose library, it is possible to obtain up to 25 body pose keypoints and 26 facial keypoints per pedestrian to determine behavioral patterns.
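For illustration, a minimal sketch of keypoint extraction with the OpenPose Python bindings; the paper's own implementation runs in C++/ROS, the model path and input file below are placeholders, and the wrapper API differs slightly between OpenPose versions:

    import cv2
    import pyopenpose as op  # OpenPose Python bindings (build-dependent)

    wrapper = op.WrapperPython()
    wrapper.configure({"model_folder": "models/", "face": True})  # placeholder path
    wrapper.start()

    datum = op.Datum()
    datum.cvInputData = cv2.imread("frame.png")  # placeholder input frame
    wrapper.emplaceAndPop(op.VectorDatum([datum]))  # older builds take a plain list

    # poseKeypoints has shape (num_people, 25, 3): (x, y, confidence) per keypoint
    print(datum.poseKeypoints)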

Fig. 2: Pose (color points) and face (white points) calculated by OpenPose library.

IV-B Distance Estimation

A further crucial parameter for estimating road safety is the pedestrian's distance from the approaching vehicle at the time of crossing. A 2D laser scanner integrated in the autonomous vehicle acquires the distances between the pedestrians and the AV. As in the previous modules, the laser data is obtained through ROS: an acquisition node publishes the distances in meters of nearby objects on a dedicated topic. The data is obtained as a series of points that can be observed using the RViz visualization package, as shown in Figure 3. Using the RViz tools and analyzing the points corresponding to pedestrians, it is possible to determine their distance at the time of interaction with the vehicle.
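A minimal rospy sketch of such an acquisition node; the actual topic and node names are not given in the paper, so "/scan" and "distance_monitor" below are assumptions:

    import rospy
    from sensor_msgs.msg import LaserScan

    def on_scan(msg):
        # Keep only returns within the sensor's valid range and report the
        # nearest object in meters.
        valid = [r for r in msg.ranges if msg.range_min < r < msg.range_max]
        if valid:
            rospy.loginfo("nearest object: %.2f m", min(valid))

    rospy.init_node("distance_monitor")            # hypothetical node name
    rospy.Subscriber("/scan", LaserScan, on_scan)  # hypothetical topic name
    rospy.spin()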

Fig. 3: Left: Image acquired by the stereoscopic camera on the autonomous vehicle. The reference axis perpendicular to the picture plane corresponds to the red axis in the image on the right. Right: Representation of VRUs as a series of distance points on the RViz ROS visualization widget. The reference axes are located in the center of the image, whose plane is parallel to the vehicle’s plane of movement.

IV-C Velocity

The autonomous vehicle was equipped with optical wheel encoders from which the speed of the vehicle could be obtained, taking into account the physical dimensions of the automobile's wheels. The ROS package installed in the AV made it possible to publish the speed of the vehicle at all times on the pertinent topic.
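The underlying tick-to-speed conversion is straightforward; the following sketch uses illustrative values for wheel radius and encoder resolution rather than the iCab's actual specification:

    import math

    TICKS_PER_REV = 1024   # encoder resolution (illustrative)
    WHEEL_RADIUS_M = 0.20  # wheel radius in meters (illustrative)

    def wheel_speed(delta_ticks, delta_t_s):
        """Linear speed in m/s from ticks counted over delta_t_s seconds."""
        revolutions = delta_ticks / TICKS_PER_REV
        distance_m = revolutions * 2.0 * math.pi * WHEEL_RADIUS_M
        return distance_m / delta_t_s

    # Example: 850 ticks in 1 s correspond to about 1.04 m/s.
    print(round(wheel_speed(850, 1.0), 2))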

V Data Acquisition and Analysis for Pedestrian Behavior

To test the defined hypothesis, we acquired the required data through the algorithms described in Section IV as follows:

Head and body poses for determining behavioral patterns depending on the message displayed were identified by the autonomous vehicle. As described in Section III, the experimental conditions were: baseline, red-green sign, or open-closed eyes. Two categories were created based on these data:

  1. VRUs who saw the message displayed by the vehicle and changed their behavior (e.g., stopped for a moment).

  2. VRUs who saw the message and continued without any change (e.g., kept walking).

To test the relationships between the categorical variables we performed a Pearson χ² test. Further, we determined the distance between the iCab and the pedestrians, and from this distance and the velocity of the vehicle we also calculated the TTC. The statistical significance of the relationship was tested through an unpaired t-test.
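Both tests are standard; a sketch with SciPy, using hypothetical placeholder data rather than the recorded measurements:

    import numpy as np
    from scipy import stats

    # Pearson chi-squared test on a 2x2 contingency table
    # (rows: walked/stopped; columns: two display conditions; counts hypothetical)
    table = np.array([[20, 15],
                      [4, 6]])
    chi2, p, dof, expected = stats.chi2_contingency(table, correction=False)

    # Unpaired t-test on crossing distances under two conditions (hypothetical samples)
    dist_a = np.array([5.9, 6.3, 7.1, 4.8, 6.6])
    dist_b = np.array([7.4, 8.0, 6.9, 7.7, 7.2])
    t, p_t = stats.ttest_ind(dist_a, dist_b)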

The data corresponding to the persons who did not see the vehicle was additionally analyzed. As is known, eye contact plays a critical role at unmarked intersections: exchanging glances facilitates cooperative action, while avoiding eye contact is a way of dominating the other party in an interaction [29], cited in [30]. Finally, as in [31], a power analysis was performed to measure the effectiveness of the t-test in rejecting the null hypothesis by calculating the probability of not committing a type II error (1 − β), in other words, the probability of correctly rejecting a false null hypothesis.
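Such a power value can be computed, for example, with statsmodels; the effect size and group sizes below are illustrative, not the study's values:

    from statsmodels.stats.power import TTestIndPower

    # Power = 1 - beta for an unpaired t-test at alpha = 0.05, given an
    # assumed standardized effect size (Cohen's d) and group sizes.
    power = TTestIndPower().power(effect_size=0.6, nobs1=46, ratio=1.0, alpha=0.05)
    print("1 - beta = %.2f" % power)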

VI Results

From the extracted information we could derive that 92 pedestrians (68.14%) looked at the screen that was displaying the images, while 43 (31.86%) did not even look at the vehicle.

Among the pedestrians who looked at the screen, a greater percentage crossed in front of the AV, independently of the message displayed. The results presented in Table I show that the distributions of the categorical variables differed from one another, but the differences in the proportion of pedestrians who walked or stopped when the screen showed red or closed eyes were not statistically significant. Therefore, we fail to reject the null hypothesis.

Condition    Baseline   Green color   Open eyes   Red color   Closed eyes
Walking      17         9             11          25          21
Standing     3          2             1           1           2

χ² test (α = 0.05)
Baseline vs. Green color       χ²(1, N=31) = 1.99, p = 0.158
Baseline vs. Open eyes         χ²(1, N=32) = 0.49, p = 0.484
Baseline vs. Red color         χ²(1, N=46) = 1.77, p = 0.183
Baseline vs. Closed eyes       χ²(1, N=43) = 0.41, p = 0.522
Green color vs. Open eyes      χ²(1, N=23) = 0.49, p = 0.484
Green color vs. Red color      χ²(1, N=27) = 2.13, p = 0.144
Green color vs. Closed eyes    χ²(1, N=34) = 0.65, p = 0.420
Open eyes vs. Red color        χ²(1, N=38) = 0.33, p = 0.570
Open eyes vs. Closed eyes      χ²(1, N=35) = 0.01, p = 0.974
Red color vs. Closed eyes      χ²(1, N=49) = 0.50, p = 0.480
TABLE I: Pedestrian behavior depending on the system display condition
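As a consistency check, a Pearson χ² without Yates' continuity correction on the Baseline vs. Red color cells of Table I reproduces the tabulated statistic:

    from scipy.stats import chi2_contingency

    # Baseline vs. Red color: walking 17 vs. 25, standing 3 vs. 1 (N = 46)
    chi2, p, dof, _ = chi2_contingency([[17, 25], [3, 1]], correction=False)
    print("chi2(1, N=46) = %.2f, p = %.3f" % (chi2, p))  # 1.77, 0.183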

As for the data related to distance and TTC, the power of the independent-samples t-tests ranged from 73% to 97%, indicating a low probability of committing a type II error, that is, of erroneously accepting the null hypothesis for the tested parameters. Table II shows the obtained values. The analysis indicated that the distance to the vehicle at the moment of crossing was lower under the baseline condition. It also shows that the TTC was lower when the red/green color-coded message or the open/closed-eyes message was displayed. However, these values did not differ significantly between conditions.

Metric         Baseline       Red/green color   Open/closed eyes
               Mean    SD     Mean    SD        Mean    SD
Distance (m)   6.14    3.56   7.38    3.48      6.88    2.76
TTC (s)        7.31    4.64   4.90    6.23      5.10    7.91

t-test (α = 0.05)
Metric         Baseline vs. Red/green   Baseline vs. Open/closed   Red/green vs. Open/closed
               t(92)    p               t(92)    p                 t(92)    p
Distance (m)   1.27     0.20            0.85     0.39              0.67     0.55
TTC (s)        1.51     0.13            1.07     0.28              0.90     0.12
TABLE II: Pedestrian distance to the vehicle and TTC while crossing, depending on the kind of display shown

Regarding the persons who did not see the vehicle, the results for TTC and distance to the vehicle showed that the effect on road safety of the lack of eye contact at unmarked intersections was not significant (Table III).

t-test (α = 0.05)
Metric         Without eye contact   Eye contact        t(133)   p
               Mean     SD           Mean     SD
Distance (m)   6.93     3.28         7.81     3.56      1.41     0.1599
TTC (s)        5.87     6.71         8.93     12.22     1.37     0.1726
TABLE III: Effect of eye contact on interaction with the AV

Finally, Figure 4 depicts the number of pedestrians who had seen the vehicle and crossed in front of the AV, considering their distance to the vehicle, as well as the TTC, in relation to the kind of display shown. From this graphic we can see that 69 pedestrians (71.7%) crossed at a distance between 5 and 9 meters, with TTC values ranging from 2 to 8 seconds.

(a)
(b)
Fig. 4: Pedestrian distance to the vehicle (a) and TTC (b) while crossing depending on display conditions.

VII Conclusion, Discussion and Future Work

First, it is necessary to note that the assumption that looking at the vehicle guaranteed recognition of the display and the images on it was confirmed by pedestrian comments such as "the car is looking at you". This is important because the study is based on the images that the vehicle showed to pedestrians.

The results reported in this paper did not show statistically significant differences between the proportion of pedestrians who continued walking and crossed in front of the AV and those who stopped, depending on the display. Therefore, we fail to reject the null hypothesis.

Moreover, in most cases pedestrians crossed even when the message was red or displayed closed eyes. Apparently, detecting the vehicle was in itself sufficient for pedestrians to decide whether to cross. Therefore, the research question formulated at the beginning, namely whether pedestrians are more likely to pause and refrain from crossing in front of an AV that does not signal having seen them, could not be answered affirmatively. Furthermore, the kind of display affected neither the distance at which pedestrians crossed in front of the AV nor the TTC. This was probably because the vehicle was slow, never exceeding 5 km/h, and people are less likely to respond to a low-speed AV that poses no danger to them. As described in Section II, previous works have shown the importance of vehicle movement (e.g., speed, distance) for pedestrians; however, they were based on simulations or subjective data, while this work describes the quantitative results of a field test performed with a driverless vehicle.

The relationship between the absence or presence of eye contact and road safety-related parameters such as distance and TTC was not significant. Therefore, it could neither be confirmed nor ruled out that eye contact with an AV facilitates cooperative action in the tested shared-space scenario, in which traffic relies more on the informal rules of foot traffic, without traffic lights, road markings or signs that indicate the right-of-way.
During the experiment it could be observed that in most cases people were distracted, using a cell phone or conversing. For safety reasons, the AV stopped in these cases, which aroused the pedestrians' curiosity. Interestingly, a high number of pedestrians became aware of the vehicle only when it stopped.
From the results obtained in Section VI, we can conclude that the implementation of visual communication cues for interacting with VRUs is not necessarily required for a shared space in which informal traffic rules apply. Such cues are more likely to help when vehicle and pedestrian are in potential conflict situations that pose danger. These results are in line with the findings in [21], [22] and [23], which stated that information shown on external monitors is not determinant in defining pedestrian behavior, with the distance and speed of the vehicle being more decisive [24].

Therefore, and in line with the findings in [20], future work will focus on other communication signals such as auditory cues. We will also use additional sensors for cataloging pedestrian behavior that rely on the reconstruction of 3D points in order to determine, for example, the number of pedestrians who crossed behind the vehicle, as the camera and laser used were only able to record situations occurring in front of the vehicle.

Acknowledgment

This work was supported by the Austrian Ministry for Transport, Innovation and Technology (BMVIT) Endowed Professorship for Sustainable Transport Logistics 4.0.

References

  • [1] The BMW Vision iNext. Future Focused. [Online]. Available: https://www.bmwgroup.com/BMW-Vision-iNEXT
  • [2] CB Insights, "46 Corporations Working On Autonomous Vehicles," New York. [Online]. Available: https://www.cbinsights.com/research/autonomous-driverless-vehicles-corporations-list/
  • [3] Renault EZ-GO reveal at the 2018 Geneva Motor Show. [Online]. Available: https://en.media.groupe.renault.com/
  • [4] ProPILOT. [Online]. Available: https://www.nissan-global.com/en/technology/overview/propilot.html
  • [5] A. Allamehzadeh and C. Olaverri-Monreal, “Automatic and manual driving paradigms: Cost-efficient mobile application for the assessment of driver inattentiveness and detection of road conditions,” in 2016 IEEE Intelligent Vehicles Symposium (IV).   IEEE, 2016, pp. 26–31.
  • [6] W. Morales-Álvarez, M. J. Gómez-Silva, G. Fernández-López, F. García-Fernández, and C. Olaverri-Monreal, "Automatic Analysis of Pedestrian's Body Language in the Interaction with Autonomous Vehicles," in IEEE Intelligent Vehicles Symposium (IV), 2018, pp. 1–6.
  • [7] E. Jaffe, “6 Places where cars bikes and pedestrians all share the road as equals,” 2015. [Online]. Available: shorturl.at/mMRW2
  • [8] A. Hussein, F. Garcia, J. M. Armingol, and C. Olaverri-Monreal, “P2V and V2P communication for Pedestrian warning on the basis of Autonomous Vehicles,” in IEEE International Conference on Intelligent Transportation Systems (ITSC2016).   IEEE, 2016, pp. 2034–2039.
  • [9] S. Schmidt and B. Färber, “Pedestrians at the kerb – Recognising the action intentions of humans,” Transportation Research Part F: Traffic Psychology and Behaviour, vol. 12, no. 4, pp. 300–310, jul 2009. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S1369847809000102
  • [10] J. A. Oxley, E. Ihsen, B. N. Fildes, J. L. Charlton, and R. H. Day, “Crossing roads safely: An experimental study of age differences in gap selection by pedestrians,” Accident Analysis & Prevention, vol. 37, no. 5, pp. 962–971, sep 2005. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0001457505000795
  • [11] H. Hamaoka, T. Hagiwara, M. Tada, and K. Munehiro, “A Study on the behavior of pedestrians when confirming approach of right/left-turning vehicle while crossing a crosswalk,” IEEE Intelligent Vehicles Symposium, Proceedings, vol. 10, no. 2011, pp. 99–103, 2013.
  • [12] T. Lagström and V. M. Lundgren, "AVIP - Autonomous vehicles' interaction with pedestrians," Ph.D. dissertation, Chalmers University of Technology.
  • [13] S. Yang, "Driver behavior impact on pedestrians' crossing experience in the conditionally autonomous driving context," 2017. [Online]. Available: http://kth.diva-portal.org/smash/record.jsf?pid=diva2%3A1169360&dswid=6775
  • [14] League of American Bicyclists, "Autonomous and Connected Vehicles: Implications for Bicyclists and Pedestrians."
  • [15] M. Matthews, G. V. Chowdhary, and E. Kieson, “Intent Communication between Autonomous Vehicles and Pedestrians,” Tech. Rep. [Online]. Available: https://arxiv.org/pdf/1708.07123.pdf
  • [16] L. Fridman, B. Mehler, L. Xia, Y. Yang, L. Y. Facusse, and B. Reimer, “To walk or not to walk: Crowdsourced assessment of external vehicle-to-pedestrian displays,” arXiv preprint arXiv:1707.02698, 2017.
  • [17] K. Mahadevan, S. Somanath, and E. Sharlin, “Communicating Awareness and Intent in Autonomous Vehicle-Pedestrian Interaction,” in Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems - CHI ’18.   New York, New York, USA: ACM Press, 2018, pp. 1–12. [Online]. Available: http://dl.acm.org/citation.cfm?doid=3173574.3174003
  • [18] C.-M. Chang, K. Toda, D. Sakamoto, and T. Igarashi, “Eyes on a Car: an Interface Design for Communication between an Autonomous Car and a Pedestrian,” 2017. [Online]. Available: https://doi.org/10.1145/3122986.3122989
  • [19] M. Beggiato, C. Witzlack, S. Springer, and J. Krems, "The Right Moment for Braking as Informal Communication Signal Between Automated Vehicles and Pedestrians in Crossing Situations," 2018, pp. 1072–1081. [Online]. Available: http://link.springer.com/10.1007/978-3-319-60441-1_101
  • [20] M. De Miguel, D. Fuchshuber, A. Hussein, and C. Olaverri-Monreal, “Perceived Pedestrian Safety: Public Interaction with Driverless Vehicles,” in 2019 IEEE Intelligent Vehicles Symposium (IV).   IEEE, 2019, pp. 1–6.
  • [21] D. Rothenbucher, J. Li, D. Sirkin, B. Mok, and W. Ju, “Ghost driver: A field study investigating the interaction between pedestrians and driverless vehicles,” in 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN).   IEEE, aug 2016, pp. 795–802. [Online]. Available: http://ieeexplore.ieee.org/document/7745210/
  • [22] M. Clamann, M. Aubert, and M. L. Cummings, “Evaluation of vehicle-to-pedestrian communication displays for autonomous vehicles,” 2017. [Online]. Available: https://trid.trb.org/view.aspx?id=1437891
  • [23] A. Pillai, "Virtual Reality Based Study to Analyse Pedestrian Attitude Towards Autonomous Vehicles," Master's thesis, School of Science, Master's Programme in ICT Innovation, 2017.
  • [24] A. Rasouli and J. K. Tsotsos, “Autonomous Vehicles That Interact With Pedestrians: A Survey of Theory and Practice,” IEEE Transactions on Intelligent Transportation Systems, pp. 1–19, 2019. [Online]. Available: https://ieeexplore.ieee.org/document/8667866/
  • [25] D. Gomez, P. Marín Plaza, A. Hussein, A. de la Escalera, and J. M. Armingol, “ROS-based Architecture for Autonomous Intelligent Campus Automobile (iCab),” UNED Plasencia Revista de Investigación Universitaria, vol. 12, pp. 257–272, 2016.
  • [26] Z. Cao, T. Simon, S.-E. Wei, and Y. Sheikh, “Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields,” in CVPR, 2017.
  • [27] Z. Cao, G. Hidalgo, T. Simon, S.-E. Wei, and Y. Sheikh, "OpenPose: Realtime multi-person 2D pose estimation using Part Affinity Fields," arXiv preprint arXiv:1812.08008, 2018.
  • [28] S.-E. Wei, V. Ramakrishna, T. Kanade, and Y. Sheikh, “Convolutional pose machines,” in CVPR, 2016.
  • [29] T. C. Schelling, Choice and consequence.   Harvard University Press, 1984.
  • [30] T. Vanderbilt, Traffic Why we drive the way we do (and what it says about us).   Vintage, 2009.
  • [31] C. Olaverri-Monreal, M. Gvozdic, and B. Muthurajan, “Effect on driving performance of two visualization paradigms for rear-end collision avoidance,” in 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC).   IEEE, Oct 2017, pp. 77–82. [Online]. Available: http://ieeexplore.ieee.org/document/8317937/