Predicting Driver Self-Reported Stress by Analyzing the Road Scene

09/27/2021 ∙ by Cristina Bustos, et al. ∙ MIT ∙ Open University of Catalonia

Several studies have shown the relevance of biosignals in driver stress recognition. In this work, we examine a less frequently explored question: whether the visual driving scene alone can be used to estimate a driver's subjective stress level. For this purpose, we use the AffectiveROAD video recordings and their corresponding stress labels, a continuous human-driver-provided stress metric. We use the common class discretization for stress, dividing its continuous values into three classes: low, medium, and high. We design and evaluate three computer vision modeling approaches to classify the driver's stress level: (1) object presence features, where features are computed using automatic scene segmentation; (2) end-to-end image classification; and (3) end-to-end video classification. All three approaches show promising results, suggesting that it is possible to approximate the driver's subjective stress from the information found in the visual scene. We observe that video classification, which integrates temporal information with the visual information, obtains the highest accuracy of 0.72, compared to a random baseline accuracy of 0.33, when tested on a set of nine drivers.


I Introduction

Understanding how the driving scene impacts the driver’s emotional state has attracted growing interest in the field of driver-assistance technologies. External driving conditions influence the driver’s affective state [23, 21, 25, 18, 14], impacting both road safety [18, 19] and driver experience [24]. While some specific case studies show that drivers’ stress correlates with road traffic conditions [13, 16, 2, 7] and road type (city, highway, and parking) [12, 22, 10, 2], automatically inferring these, possibly causal, relations directly from the driving scene has not yet been explored in depth.

In this paper, we study how drivers’ subjective stress level can be estimated from images displaying the driving scene. Our study takes inspiration from Healey and Picard’s original study on driver stress [12], which constructed low, medium, and high stress conditions corresponding respectively to sitting in a parked car with eyes closed, driving on a highway under optimal conditions (dry pavement, no traffic or construction, and good weather), and city driving in a busy area. The authors also measured the complexity of events minute-by-minute under each condition (e.g., a turn, a pothole, or a pedestrian were events that increased the complexity). Measurements of stress-related physiology and self-reported stress agreed with the low, medium, and high-stress levels and correlated with the human-rated assessment of the scene complexity. Based on these observations and recent research [9], our hypothesis is that some objects and events visible in the driving scene are informative enough to approximate the driver’s current subjective stress level.

More concretely, our goal is to empirically test, with different supervised machine learning approaches, whether the drivers’ subjective stress can be inferred solely from real driving scenes extracted from the AffectiveROAD dataset [1]. The dataset contains a collection of real driving videos, taken by a front-facing camera pointing at the road, labelled with the drivers’ “stress” signals.

The provided “stress” signals were constructed in real-time by an observer who sat in the rear seat and annotated the driving scene “complexity”, and were validated post-experience by the driver [10]. Fig. 1.a illustrates the data used in our study, while Fig. 1.b illustrates the inference of drivers’ subjective stress level on new unseen driving scene videos.

Fig. 1: (a) Illustration of the data used in our study and (b) the prediction of drivers’ subjective stress level on new unseen driving scene videos.

The three modelling approaches differ in complexity and technique. First, we used classical machine learning (Random Forest and Support Vector Machines) on handcrafted features encoding the presence of common objects, such as cars, road, traffic signals, or pedestrians. The second and third modelling approaches are based on two end-to-end Deep Convolutional Networks: an image Convolutional Neural Network (CNN) and a video CNN.

We examine the three modelling approaches and find that all perform significantly above chance on the tested task. As expected, the best accuracies are obtained by the video CNN. For the video CNN we also use a CNN explainability technique (Grad-CAM [34], described in Sect. IV-D) to visualize the areas of the input frames that contribute the most to the output of the model. We find that the model attends to certain objects and specific configurations of the scene that align with our working hypothesis.

Our study shows promising results on using the driving scene to approximate drivers’ stress. Such findings may have practical applications: from intelligent cars able to better assist drivers under stress, to suggesting less stressful routes, helping to improve driver safety.

II Related Work

Detection of driver affective state is relevant for the development of in-vehicle systems that can help improve driving experience [38]. Stress can have a significant negative impact on driving performance, causing traffic violations and crashes [35, 3]. Various scenarios and interventions have been designed to alleviate the “extreme affective states” when detected [26, 15, 30]. Based on the driver’s state taxonomy of Braun et al. [4], these extreme states correspond to dangerous states, while states with medium arousal levels and positive valence are recognized as optimal ones.

Different approaches have been used to detect driver’s stress and affective states, including physiology, facial expression, self-reports or biosignals [31, 38, 27], but most have focused only on sensing momentary changing signals from the driver. Including contextual parameters, whether internal or environmental [31], is expected to help improve accuracy. Internal context parameters may include personal parameters such as driver mood or personality [19, 24], while environmental parameters typically characterize external factors such as the weather [16, 33], road traffic [13, 16, 2, 7], and road type [12, 22, 10, 2]. Our work in this paper focuses on vision-based extraction of environmental context.

Urban settings tend to have higher complexity [6], which may demand a higher level of attention and, in turn, a higher cognitive workload, usually increasing driver arousal and stress. Features that represent this complexity, directly extracted from the visual scene, may be used to characterize such external conditions. Thus, tools for scene analysis become increasingly important to assess the driver’s affective state. Thanks to recent advances in machine learning [32, 39] and shared real-world datasets [11, 41, 8], we believe it is now feasible to train an automated system to predict a driving-induced state of stress from the visual scene.

The definition and annotation of driver emotion remains challenging in real-world driving settings. According to a recent survey by Zepf et al. [38], reliable annotations are important to effectively recognize the emotional state of drivers. The authors identified three main approaches to such annotations: self-reports, external annotators, and experimental context. Self-reports require the involvement of the participants, usually by asking them to report their driving experience (after the drive) using questionnaires such as the Positive and Negative Affect Scale [20] or the Self-Assessment Manikin [17]. This approach is subjective and might introduce some biases. The annotation approach based on external annotators can be more reliable; however, it requires time, effort, and extra cost to find, train, and compensate experienced observers.

If it were possible to automatically identify experimental contexts that are reliably associated with stress levels, this would provide a new method to annotate a driver’s likely state. Thus, our work may help not only with predicting driver stress in real-time applications, but also with expanding the utility of other unlabeled datasets for additional research.

III Data

The research we develop in this paper is based on the AffectiveROAD dataset [1] which includes gold-standard stress annotations provided by drivers after real-world open-road driving experiences. We describe below the acquisition protocol and the details of the dataset.

III-A AffectiveROAD data description

The AffectiveROAD protocol aims to jointly collect, in real driving scenarios, information about the driving environment (in and outside the car), driver’s physiological state (for example, electrodermal activity, breathing rate and heart rate) and the driver’s contextual “stress” levels.

The protocol is designed to obtain information about typical daily trips in Grand Tunis under normal traffic conditions; each trip follows a fixed route and includes a rest period. Driving routes (see Fig. 2) are chosen so that drivers alternate between different road types and environments, which may induce different stress levels.

During each drive, an experimenter, who sat in the back seat, annotated in real-time the perceived stress using a laptop-based slider. This resulted in a subjective score [10], ranging from 0 (not stressful) to 1 (extremely stressful), based on the perception of the driving scene “complexity” and the stress level of the driver. See Fig. 3 for an example of the evolution of the subjective score during the driving experiment. The drivers were asked to validate or correct the score after the experiment: videos capturing both the inside and outside car environments (visualized side by side) were synchronized with the stress score and shown to each driver, embedded in a platform that offered the option to change the stress metric values at any point of the experiment. The resulting stress metric is denoted as the “human-driver-provided stress” in this paper.

The complete dataset provides data for 13 drives completed by 9 drivers on sunny days. Six of the drives were performed by drivers who completed the driving trajectory only once, and seven drives were accomplished by three drivers who repeated the experiment on different days. The complete dataset contains the video recordings for all of these drives.

Fig. 2: Route proposed to the AffectiveROAD study participants.

III-B Data subset used in this study

Although the AffectiveROAD dataset contains various modalities, the work in this paper uses only the human-driver-provided stress metric and the corresponding road scene video recordings, because this metric provides cleaner labels of the driver’s stress than the physiological signals.

The discrete stress classes used in this work were obtained as follows.

In Fig. 4, the left plot shows the histogram of the original stress measures provided by the drivers. Note that the variability of the original measures was high for all the drives, ranging between 0 and almost 1. We then min-max normalized the stress metric within each driver, so each person’s labels ranged from a minimum of 0 to a maximum of 1 (the original min and max values were already very close to 0 and 1, respectively, for all drivers). Based on the trimodal distribution shown in Fig. 4 and after consulting the experimenter who generated the continuous metric [10], we constructed three discrete stress classes: low stress, for the lowest range of scores; medium stress, for intermediate scores; and high stress, for the highest scores. In Fig. 4, the right plot shows the number of instances for each of the three stress classes.
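As an illustration, a minimal sketch of the per-driver normalization and discretization is given below; the numeric thresholds are placeholders chosen for illustration only, not the cut points actually derived from the distribution in Fig. 4.

```python
import numpy as np

def discretize_stress(stress, low_thr=0.33, high_thr=0.66):
    """Min-max normalize one driver's continuous stress trace and map it to
    {0: low, 1: medium, 2: high}. The thresholds are illustrative placeholders;
    the paper derives its cut points from the trimodal distribution of the metric."""
    s = np.asarray(stress, dtype=float)
    s = (s - s.min()) / (s.max() - s.min())              # per-driver min-max normalization
    classes = np.digitize(s, bins=[low_thr, high_thr])   # 0, 1 or 2
    return s, classes

# Toy example of a stress trace for a single driver
normalized, labels = discretize_stress([0.05, 0.2, 0.5, 0.9, 0.95])
```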

Fig. 3: Example of human-driver-provided stress measure for driver 1. Background color in the plot indicates the part of the road map.

Fig. 4: Dataset Distribution. Left: frequency distribution histogram of continuous human-driver-provided stress. Right: histogram of discretized human-driver-provided stress.

For illustrative purposes, Fig. 5 shows randomly selected examples of video sequences for the defined low, medium, and high stress categories. Notice that we can already observe some visual differences in these examples. For the low stress category we do not see much traffic, and the video sequences correspond to scenes where the speed is slow. Based on this extracted sample of video sequences, the low stress category includes segments from the Z zone and highway as defined in Fig. 2. For the medium stress category, the road scenes correspond to areas where one can circulate at higher speed without nearby vehicles. The frame sequences depicted in the medium section of Fig. 5 correspond mainly to highway segments. Finally, for the high stress category we observe more clutter, buildings, traffic, and pedestrians. Our expectation is that these characteristic patterns, observed for the different stress measure categories, can be captured by a computer vision model attempting to categorize road scene images or video sequences into the corresponding low, medium, or high stress categories.

Fig. 5: Examples of frame sequences for low (a), medium (b), and high (c) stress measure classes.

To avoid the habituation effect that could be induced by the repetition of the experiment and that might affect the perceived stress, in our experiments we only considered the video recordings corresponding to the first drive of each participant. Thus only these first 9 drives were selected; they correspond to the following participant codes: 1.Drv1-1, 2.Drv2-1, 3.Drv3-1, 4.Drv4-1, 5.Drv5-1, 7.Drv6-1, 9.Drv7-1, 10.Drv8-1, and 11.Drv9-1. For video modelling, the original video sequences, recorded at 25 fps, were downsampled to 2 fps.
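For illustration, a minimal OpenCV sketch of this temporal downsampling (our own code, not the authors'; paths and the exact sampling scheme are assumptions):

```python
import cv2

def sample_frames(video_path, src_fps=25, target_fps=2):
    """Read a driving video recorded at src_fps and keep roughly target_fps
    frames per second, mirroring the 25 fps -> 2 fps downsampling described above."""
    step = round(src_fps / target_fps)   # keep about one frame out of every 12
    cap = cv2.VideoCapture(video_path)
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        idx += 1
    cap.release()
    return frames
```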

IV Methodology

This section describes the different methods we use throughout the paper. First we provide details about the data splits used for the training, validation and test sets. Then, we explain how we perform automatic object segmentation of road scene images. Later, we describe the three different approaches we use for the modelling. Finally, we explain the interpretability techniques we use to qualitatively relate urban image patterns and classification scores.

IV-A Data Splits

All modelling experiments consider a total of 9 data splits, each leaving out one driver as later testing data. Among the remaining 8 drivers, the videos of 2 randomly selected drivers are used for the validation set, while the videos of the remaining 6 drivers are used for training. Each data split is denoted by the test ID of its held-out driver, which corresponds to the experiment ID in the original AffectiveROAD dataset. Notice that with this data-split protocol we evaluate to what extent the modelling of the human-driver-provided stress measure generalizes to unseen drivers. For reproducibility, the validation and testing drivers for each data split are given in Tab. I (in each data split the remaining 6 drivers are used for training).

Test ID Validation Drivers Testing Drivers
1 2.Drv2-1, 10.Drv8-1 1.Drv1-1
2 3.Drv3-1, 11.Drv9-1 2.Drv2-1
3 1.Drv1-1, 9.Drv7-1 3.Drv3-1
4 9.Drv7-1, 2.Drv2-1 4.Drv4-1
5 1.Drv1-1, 11.Drv9-1 5.Drv5-1
7 4.Drv4-1, 10.Drv8-1 7.Drv6-1
9 3.Drv3-1, 5.Drv5-1 9.Drv7-1
10 7.Drv6-1, 5.Drv5-1 10.Drv8-1
11 3.Drv3-1, 4.Drv4-1 11.Drv9-1
TABLE I: Data splits used for the different training experiments (test IDs follow the experiment IDs in the AffectiveROAD dataset).

The distribution of low, medium, and high stress classes is imbalanced, and approximately the same in each of the data splits. The dataset has a total of almost 110K instances. In our experiments we balanced the three classes by upsampling each class in the training set to the size of the most frequent class, and downsampling each class in the validation and test sets to the size of the least frequent class. The resulting test sets are thus all balanced, so that a random-choice classifier would score 0.33 accuracy on average.
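A minimal sketch of this balancing step, assuming the instances and labels are NumPy arrays (the resampling details are our own illustration):

```python
import numpy as np

def balance(X, y, mode="up", seed=0):
    """Resample each class: 'up' replicates samples until every class matches the
    largest one (training set), 'down' subsamples every class to the smallest one
    (validation and test sets)."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max() if mode == "up" else counts.min()
    keep = []
    for c in classes:
        idx = np.where(y == c)[0]
        keep.append(rng.choice(idx, size=target, replace=(mode == "up")))
    keep = np.concatenate(keep)
    return X[keep], y[keep]
```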

IV-B Segmentation of urban scenes

To gain understanding about the impact the road scene has on the human-driver-provided stress, we attempt to identify which objects are present in the visual scene and where they are located. For this goal we semantically segment each video frame in the dataset. Semantic segmentation assigns an object category label to each pixel of an input image. We segmented the images using the Inplace-ABN (DeepLabV3+ with WideResNet-38) implementation [5], already trained on the Mapillary Vistas dataset [28]. Mapillary Vistas is a large-scale, diverse street-view image dataset for semantic urban understanding, containing 25k high-resolution images with pixel-accurate annotations of 66 semantic urban object categories. The dataset has global coverage and a very diverse selection of images covering different weather and seasonal conditions, points of view, and camera models. Annotated objects include categories such as sky, road, building, sidewalk, person, and car, among others. Some random examples of images and corresponding segmentations are shown in Fig. 6.

Fig. 6: Examples of original images with their segmentation mask.

IV-C Modelling Approaches

To predict the driver’s stress from the video recordings, we use three different modelling approaches. Two of them take as input a single video frame, and the goal is to estimate the stress measure class at the time stamp corresponding to the input frame. The third approach instead takes as input a video sequence of n seconds, and the goal is to estimate the stress measure class at the time stamp corresponding to the last input frame. Thus, in the third case, the model uses the current frame and some previous frames to infer the stress measure class. For a fair comparison, all modelling approaches were trained with the same number of samples. For the second and third models, we created a dataset of video sequences of length n = 32 seconds: the third model uses the video sequences, varying the sequence length from 1 to 32 seconds, while the second model uses only the last frame of each sequence.

IV-C1 Image classification with object presence features

Our first modelling approach encodes the visual information at each time stamp as a 66-dimensional feature vector. Each feature corresponds to one of the object categories included in the segmentation model and encodes the area occupied by the corresponding object in the image. With this feature vector we train different classifiers to estimate the stress measure category of the corresponding frame. In particular, we trained Random Forest, Linear SVM, and RBF-SVM classifiers using the scikit-learn Python library.
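A minimal sketch of this first approach, assuming the per-pixel segmentation masks from Sect. IV-B are available as NumPy arrays; the classifier hyperparameters shown are scikit-learn defaults, not necessarily the settings used in the paper:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC, LinearSVC

NUM_CATEGORIES = 66  # Mapillary Vistas object categories

def object_presence_features(seg_mask):
    """Turn a per-pixel mask (H x W of category ids in [0, 65]) into a
    66-dimensional vector with the fraction of the image covered by each category."""
    counts = np.bincount(seg_mask.ravel(), minlength=NUM_CATEGORIES)
    return counts / seg_mask.size

# The three classifiers trained on these features
models = {
    "random_forest": RandomForestClassifier(n_estimators=100),
    "linear_svm": LinearSVC(),
    "rbf_svm": SVC(kernel="rbf"),
}
# X: (n_frames, 66) feature matrix, y: stress class per frame (0/1/2)
# for name, model in models.items():
#     model.fit(X_train, y_train)
#     print(name, model.score(X_test, y_test))
```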

IV-C2 End-to-end image classification

Our second approach consists of an end-to-end Convolutional Neural Network (CNN) that takes a video frame as input and infers the human-driver-provided stress measure category at the same time stamp. In particular, we use a VGG-16 [36] pretrained with the Places-365 dataset [41]. Additionally, to add capacity to the model, after the convolutional layers we added two consecutive fully connected layers (of 512 hidden units each), each followed by a dropout layer with rate 0.5. Lastly, a fully connected layer with 3 units (one per class) and a softmax activation function was added as the prediction layer. During training, which minimizes the cross-entropy loss, all layers were frozen except the last convolutional block of the VGG-16 and the newly added layers.
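A Keras sketch of this architecture is shown below. It is a simplified reconstruction under stated assumptions: the backbone is built without weights (the paper initializes it with Places-365 weights, which would be loaded separately), and the optimizer settings are not specified for this model in the text.

```python
import tensorflow as tf

def build_image_model(input_shape=(224, 224, 3)):
    """VGG-16 backbone with two 512-unit dense layers (dropout 0.5 each) and a
    3-way softmax head. All layers are frozen except the last convolutional
    block and the newly added layers."""
    base = tf.keras.applications.VGG16(include_top=False, weights=None,
                                       input_shape=input_shape)
    for layer in base.layers:
        layer.trainable = layer.name.startswith("block5")  # only last conv block trains
    x = tf.keras.layers.Flatten()(base.output)
    x = tf.keras.layers.Dense(512, activation="relu")(x)
    x = tf.keras.layers.Dropout(0.5)(x)
    x = tf.keras.layers.Dense(512, activation="relu")(x)
    x = tf.keras.layers.Dropout(0.5)(x)
    out = tf.keras.layers.Dense(3, activation="softmax")(x)
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-5),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model
```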

IV-C3 End-to-end video classification

Fig. 7: Temporal Segment Network Architecture.

Our third approach to model the inference of the human-driver-provided stress measure category is based on Temporal Segment Networks (TSN) [37]. TSN is a video-level framework that was originally proposed for action recognition in videos. TSN models temporal data with segment-based sampling together with an aggregation module called consensus. The extra information provided by the video sequence is important to increase the accuracy of classification tasks in several settings, including ours. Formally, applied to our problem, each input sequence (the video recording of a drive) V is divided into K segments {S_1, S_2, ..., S_K} of equal length. Each segment S_k is represented by its first frame T_k. Let F(T_k; W) represent a CNN with parameters W, which operates on frame T_k to produce an individual prediction. Let the function G represent the aggregation consensus that combines the outputs obtained for each frame, and let the function H provide the final classification for the input sequence V. Then the set of all frames T_k is considered by the TSN as follows:

TSN(T_1, T_2, ..., T_K) = H(G(F(T_1; W), F(T_2; W), ..., F(T_K; W)))   (1)

The implementation of the TSN architecture is shown in Fig. 7. For the function F we used the VGG-16 as defined in our image end-to-end model (see Sect. IV-C2), and for the consensus module G we used the average. In the training phase, all VGG-16 segments shared the same weights and their training process was equivalent to the one described in Sect. IV-C2.

Other implementation details include: we used RMSprop as the optimizer to learn the network parameters; the batch size was set to 4 videos and the learning rate was set to 10e-5. All the convolutional networks in the TSN segments were initialized with a VGG-16 pretrained on the Places-365 dataset. Most trainings converged within 10 epochs. For running the experiments, we used a Tesla P100 GPU with 16 GB of memory and 32 GB of physical RAM. The approach was programmed in Python, using the TensorFlow 2.3 deep learning framework.
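A minimal Keras sketch of the TSN wiring with shared weights and average consensus; `build_image_model` refers to the sketch in Sect. IV-C2 above, and the segment sampling and training loop are omitted:

```python
import tensorflow as tf

def build_tsn(num_segments, frame_shape=(224, 224, 3)):
    """Apply the same frame CNN F(.; W) (shared weights) to one representative
    frame per segment, then average the per-frame class scores (the consensus G)
    to obtain the video-level prediction."""
    frame_model = build_image_model(frame_shape)          # shared across all segments
    frames = tf.keras.Input(shape=(num_segments,) + frame_shape)
    per_frame_scores = tf.keras.layers.TimeDistributed(frame_model)(frames)
    consensus = tf.keras.layers.GlobalAveragePooling1D()(per_frame_scores)  # average consensus
    model = tf.keras.Model(frames, consensus)
    model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=10e-5),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model
```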

IV-D CNN interpretability with Class Activation Maps

Class Activation Mapping (CAM) [40] and related interpretability approaches, such as gradient-weighted CAM (Grad-CAM [34]), are used to visually interpret the output of a CNN. Concretely, CAM heat maps highlight the regions of the image that contributed the most to the classification score of a specific class. While the original formulation of CAM can only be applied to fully convolutional models, Grad-CAM can be applied even in the presence of fully connected layers before the output. In this study we use Grad-CAM to visualize the most informative frame regions for our video CNN model.
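A minimal Grad-CAM sketch for a Keras model; the layer name is an assumption corresponding to the last convolutional layer of a VGG-16 backbone:

```python
import tensorflow as tf

def grad_cam(model, image, class_idx, conv_layer_name="block5_conv3"):
    """Weight the feature maps of a convolutional layer by the gradient of the
    class score and average them into a heat map over the input frame."""
    conv_layer = model.get_layer(conv_layer_name)
    grad_model = tf.keras.Model(model.input, [conv_layer.output, model.output])
    with tf.GradientTape() as tape:
        conv_maps, preds = grad_model(image[None, ...])   # add a batch dimension
        class_score = preds[:, class_idx]
    grads = tape.gradient(class_score, conv_maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))          # global-average-pooled gradients
    cam = tf.reduce_sum(conv_maps[0] * weights[0], axis=-1)
    cam = tf.nn.relu(cam)                                  # keep positive evidence only
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()     # normalize to [0, 1]
```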

V Experiments

This section presents the conducted experiments. First, we study how the urban scene composition relates to the stress measure and show the results obtained with the three modelling approaches described in Sect. IV. Then we present an interpretability analysis using Class Activation Maps.

Fig. 8: Objects’ over/under-representation level in images tagged with different stress levels. We first obtain the average occupancy of each object o, denoted avg(o), by (1) segmenting each image in our data (see Sect. IV-B), (2) computing the fraction of the total image this object occupies, and (3) averaging over all images. Then, for each stress category c, the average occupancy of each object within that category, avg(o, c), is computed. The plot shows, for each object, the ratio avg(o, c) / avg(o) of its mean in category c to its global mean. Values larger than one indicate the object is over-represented with respect to the global mean and values lower than one indicate under-representation. Objects have been manually sorted to facilitate interpretability.

V-A Stress measure and road scene composition

We start the experimental section by analyzing which objects are over- and under-represented in images related to different stress levels. Considering the automatic image segmentation (described in Sect. IV), we compute the ratio between the average presence of each object in low/medium/high stress images and the average presence of the same object across all images. Results are shown in Fig. 8. Notice that an object ratio of 1 means that its presence in the given stress class does not differ from the average presence; ratios larger or smaller than 1 indicate, respectively, more or less object presence than the average. As we can see, each stress condition can be characterized by the over-representation of 4 to 5 urban objects.
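A minimal sketch of this ratio computation, reusing the 66-dimensional object-presence features from Sect. IV-C1 (array names are illustrative):

```python
import numpy as np

def representation_ratios(features, labels):
    """For each of the 66 object categories, compute the ratio between its mean
    occupancy within each stress class and its mean occupancy over all frames
    (the quantity plotted in Fig. 8). `features` is the (n_frames, 66)
    object-presence matrix and `labels` holds the stress class per frame."""
    global_mean = features.mean(axis=0) + 1e-12            # avoid division by zero
    ratios = {}
    for c in np.unique(labels):
        class_mean = features[labels == c].mean(axis=0)
        ratios[c] = class_mean / global_mean               # >1 over-represented, <1 under-represented
    return ratios
```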

The parking, fence, crosswalk, and trash can classes are expected to be over-represented in the low-stress level, since these objects are frequent in the parking lot and Z zone areas. In these settings, the car was either parked before starting the drive, or slowly moving in the Z zone, which is supposed to induce lower stress than more complex settings. On the other hand, according to the findings of [6], one would expect to find traffic lights and pedestrians among the urban objects characterizing city segments, which are supposed to induce a high-stress level. However, for this set of data, both categories are over-represented in the low-stress level. After a secondary examination of the images corresponding to the low-stress level, we noticed two particularities of the AffectiveROAD dataset that explain this observation. First, several pedestrians, sometimes in groups, are walking around, not necessarily crossing the street, particularly in the Z zone. Second, some of the traffic light poles are short, making them occupy the majority of the image and leading to an over-representation of the traffic light category in the low stress class.

For the medium-stress level, we find those categories that typically define highway driving and related elements, like ramps (entrance or exit road): tunnel, guard rail, bridge, terrain and traffic signs are the main over-represented objects.

For the high-stress level, we find objects like motorcycle and bicycle, rider, banner, big vehicles and miscellaneous. Riders are the persons on motorcycles and bicycles. This explains why the values of the ratio are almost the same for both rider and motorcycle & bicycle categories. Banners were present in the city areas. Miscellaneous includes several object categories present in the videos such as bench, ground animal, mailbox, potholes, catch basin, junction box, among others. This means that multiple visible objects and multiple interacting vehicles coexist in the same scene, as found in congested urban environments.

Overall, the representation of objects found per stress category validates the assumptions used in earlier studies that parking, highway, and city driving conditions are associated with low, medium, and high stress levels, respectively.

V-B Modelling experiments

Method 1 2 3 4 5 7 9 10 11 Avg.
Object presence – Random Forest 0.51 0.57 0.7 0.68 0.71 0.71 0.61 0.66 0.64 0.64
Object presence – Linear SVM 0.52 0.54 0.65 0.57 0.61 0.6 0.58 0.63 0.61 0.59
Object presence – RBF-SVM 0.48 0.51 0.65 0.56 0.65 0.61 0.57 0.62 0.62 0.58
Single frame – CNN 0.56 0.62 0.72 0.71 0.73 0.73 0.64 0.74 0.70 0.68
Video sequence – TSN 0.6 0.66 0.78 0.87 0.72 0.78 0.68 0.71 0.72 0.72
TABLE II: Test accuracy for each data split (columns indexed by test ID, as in Tab. I) and average across splits.

V-B1 Image classification with object presence features

Our first modelling experiments consist of using classical machine learning methods on image features that encode the presence of objects in the image, as described in Sect. IV-C1. Tab. II shows the accuracy obtained in each of the data splits and the corresponding average (rows 1, 2, and 3). We observe that the accuracy obtained is significantly above chance (which would be 0.33). This shows how a simple representation of the visual information can give relevant insights about driver self-reported stress. Notice that these results are supported by the observations reported in Sect. V-A, suggesting that the appearance of particular objects in the driving scene may be a visual signature associated with stress.

V-B2 End-to-end image classification

The results obtained by the end-to-end image CNN approach described in Sect. IV-C2 are shown in Tab. II, row 4. We observe, as expected, that the image CNN approach outperforms the classical ML methods tested before. Interestingly, the accuracy varies across drivers: the lowest accuracies are obtained for the data splits with test IDs 1, 2, and 9. This can be explained by the fact that some drivers’ perceived stress is significantly different from that of others.

Fig. 9: Average confusion matrices computed on (a) the test sets for the Image CNN Model and (b) the video TSN Model.

Fig. 10: Class Activation Maps for two random examples of video sequences for low (a), medium (b), and high (c) stress measure classes.

Notice that splits 1 and 2 involve test data from drivers Drv1 and Drv2, respectively. Drv1 repeated the experiment three times and Drv2 repeated it twice. While the annotation by the observer was done in real-time, the validation and correction of the stress metric by the driver was done on different days. The validation session of the first drive could have happened after the second drive for both of these drivers, which might affect their recollection of their stress levels. Split 9 uses the data of participant Drv7 for testing; we learned that Drv7 lived in a foreign country for years and may have had a culturally different perception of the rated stress (we consulted the authors of the database for these details). More data are needed to draw general conclusions about the possible impact of these factors.

Figure 9 (a) shows the average confusion matrix for the image CNN model. Examples in the low-stress class that are wrongly classified are confused with medium and high roughly evenly. On the contrary, wrongly classified examples in the medium and high classes are rarely confused with the low-stress class: wrongly classified medium-stress examples are usually classified as high, while wrongly classified high-stress examples are usually classified as medium. Our explanation is that medium-stress and high-stress examples are visually more similar to each other than to low-stress examples.

V-B3 End-to-end video classification

We trained different TSN models (see Sect. IV-C3) considering video sequences of lengths ranging from 1 second to 32 seconds. Results show increments in accuracy up to a time window of 20 seconds, where we obtain the maximum mean accuracy of 0.72. After that point, accuracy steadily decreases as we keep increasing the window length. Our thinking is that windows that are too short may not capture enough temporal context, while windows that are too long may dilute it; 20 seconds may be “just right” for discriminating driving events corresponding to varied levels of driver stress.

Fig. 9 (b) shows the average confusion matrix computed across all the test sets. Results and observations are similar to the ones obtained without considering temporal information: the highest confusion happens between the medium and high classes. As we would expect, since the video model also uses temporal data, the correct classification rates are higher than those obtained with the image CNN model.

Finally, Fig. 10 shows CAM visualizations for the video model. Common to all examples, it is interesting to see the general tendency to focus on the central part of the image, the region where the road is located. This is of special interest since that region is also central for human driving tasks [29], which may explain the fixation of the CNN on objects in that region. Comparing the CNN fixation maps obtained for different stress levels, we highlight the increasing complexity of the CAM for images with higher levels of stress. As elaborated in the introduction, increasing complexity of the driving task usually results in higher levels of stress. Our Class Activation Maps also seem to indicate that the CNN requires a more complex analysis to classify a scene as highly stressful. This is also compatible with recent findings relating urban safety with scene disorder [6].

VI Conclusions

In this paper, we hypothesized that drivers’ subjective stress levels might be estimated directly from automated visual analysis of the driving scene. Using the driving scene videos and the corresponding human-driver-provided stress measures of the AffectiveROAD dataset, we trained three different machine learning approaches to estimate driver stress from the driving scenes. The best average accuracy across testing sets was obtained using a video CNN, which reached 0.72 on leave-one-person-out test data, compared to 0.33 for a random model. While more work is needed to test on larger and more diverse datasets, representing the variety of cultural and individual differences, these results suggest that the automated analysis of environmental context may contribute significantly to inferring driver affective states during real-world driving conditions.

Acknowledgment

This work was partially supported by the Spanish Ministry of Science, Innovation and Universities, TIN2015-66951-C2-545-2-R and RTI2018-095232-B-C22. CB is supported by a PhD grant from the Universitat Oberta de Catalunya (UOC). We thank the CEA-LinkLab for providing us with the videos and details on the cohort involved in the study.

References

  • [1] AffectiveROAD data (2018). Note: https://www.media.mit.edu/groups/affective-computing/data/. Accessed: 2021-04-03. Cited by: §I, §III.
  • [2] O. V. Bitkina, J. Kim, J. Park, J. Park, and H. K. Kim (2019) Identifying traffic context using driving stress: a longitudinal preliminary case study. Sensors 19 (9), pp. 2152. Cited by: §I, §II.
  • [3] L. Bowen, S. L. Budden, and A. P. Smith (2020) Factors underpinning unsafe driving: a systematic literature review of car drivers. Transportation Research Part F: Traffic Psychology and Behaviour 72, pp. 184–210. Cited by: §II.
  • [4] M. Braun, J. Schubert, B. Pfleging, and F. Alt (2019) Improving driver emotions with affective strategies. Multimodal Technologies and Interaction 3 (1), pp. 21. Cited by: §II.
  • [5] S. R. Bulo, L. Porzi, and P. Kontschieder (2018) In-place activated batchnorm for memory-optimized training of dnns. In Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition, pp. 5639–5647. Cited by: §IV-B.
  • [6] C. Bustos, D. Rhoads, A. Solé-Ribalta, D. Masip, A. Arenas, A. Lapedriza, and J. Borge-Holthoefer (2021) Explainable, automated urban interventions to improve pedestrian and vehicle safety. Transportation Research Part C: Emerging Technologies 125, pp. 103018. Cited by: §II, §V-A, §V-B3.
  • [7] W. Chung, T. Chong, and B. Lee (2019) Methods to detect and reduce driver stress: a review. Int. Journal of Automotive Technology 20 (5), pp. 1051–1063. Cited by: §I, §II.
  • [8] L. Ding, M. Glazer, M. Wang, B. Mehler, B. Reimer, and L. Fridman MIT-avt clustered driving scene dataset: evaluating perception systems in real-world naturalistic driving scenarios. In 2020 IEEE Intelligent Vehicles Symposium (IV), pp. 232–237. Cited by: §II.
  • [9] N. El Haouij (2018) Biosignals for driver’s stress level assessment: functional variable selection and fractal characterization. Ph.D. Thesis, Université Paris-Saclay (ComUE). Cited by: §I.
  • [10] N. Elhaouij, J. Poggi, S. Sevestre-Ghalila, R. Ghozi, and M. Jaïdane (2018) AffectiveROAD system and database to assess driver’s attention. In Proc. of the 33rd Annual ACM Symposium on Applied Computing, pp. 800–803. Cited by: §I, §I, §II, §III-A, §III-B.
  • [11] A. Geiger, P. Lenz, and R. Urtasun (2012) Are we ready for autonomous driving? the kitti vision benchmark suite. In 2012 IEEE Conf. on Computer Vision and Pattern Recognition, pp. 3354–3361. Cited by: §II.
  • [12] J. A. Healey and R. W. Picard (2005) Detecting stress during real-world driving tasks using physiological sensors. IEEE Trans. on Intelligent Transportation Systems 6 (2), pp. 156–166. Cited by: §I, §I, §II.
  • [13] D. A. Hennessy and D. L. Wiesenthal (1997) The relationship between traffic congestion, driver stress and direct versus indirect coping behaviours. Ergonomics 40 (3), pp. 348–361. Cited by: §I, §II.
  • [14] D. A. Hennessy and D. L. Wiesenthal (1999) Traffic congestion, driver stress, and driver aggression. Aggressive Behavior: Official Journal of the International Society for Research on Aggression 25 (6), pp. 409–423. Cited by: §I.
  • [15] J. Hernandez, D. McDuff, X. Benavides, J. Amores, P. Maes, and R. Picard (2014) AutoEmotive: bringing empathy to the driving experience to manage stress. In Proc. of the 2014 Companion Publication on Designing Interactive Systems, pp. 53–56. Cited by: §II.
  • [16] J. D. Hill and L. N. Boyle (2007) Driver stress as influenced by driving maneuvers and roadway conditions. Transportation Research Part F: Traffic Psychology and Behaviour 10 (3), pp. 177–186. Cited by: §I, §II.
  • [17] K. Ihme, C. Dömeland, M. Freese, and M. Jipp (2018) Frustration in the face of the driver: a simulator study on facial muscle activity during frustrated driving. Interaction Studies 19 (3), pp. 487–498. Cited by: §II.
  • [18] M. Jeon, B. N. Walker, and J. Yim (2014) Effects of specific emotions on subjective judgment, driving performance, and perceived workload. Transportation research part F: traffic psychology and behaviour 24, pp. 197–209. Cited by: §I.
  • [19] M. Jeon (2016) Don’t cry while you’re driving: sad driving is as bad as angry driving. Int. Journal of Human–Computer Interaction 32 (10), pp. 777–790. Cited by: §I, §II.
  • [20] T. Kato, H. Kawanaka, M. S. Bhuiyan, and K. Oguri (2011) Classification of positive and negative emotion evoked by traffic jam based on electrocardiogram (ecg) and pulse wave. In 14th Int. IEEE Conf. on Intelligent Transportation Systems (ITSC), pp. 1217–1222. Cited by: §II.
  • [21] K. Laumann, T. Gärling, and K. M. Stormark (2003) Selective attention and heart rate responses to natural and urban environments. Journal of environmental psychology 23 (2), pp. 125–134. Cited by: §I.
  • [22] Y. Liu and S. Du (2018) Psychological stress level detection based on electrodermal activity. Behavioural brain research 341, pp. 50–53. Cited by: §I, §II.
  • [23] N. Lyu, L. Xie, C. Wu, Q. Fu, and C. Deng (2017) Driver’s cognitive workload and driving performance under traffic sign information exposure in complex environments: a case study of the highways in china. Int. journal of environmental research and public health 14 (2), pp. 203. Cited by: §I.
  • [24] V. C. Magaña, W. D. Scherz, R. Seepold, N. M. Madrid, X. G. Pañeda, and R. Garcia (2020) The effects of the driver’s mental state and passenger compartment conditions on driving performance and driving stress. Sensors 20 (18), pp. 5274. Cited by: §I, §II.
  • [25] B. Mehler, B. Reimer, J. F. Coughlin, and J. A. Dusek (2009) Impact of incremental increases in cognitive workload on physiological arousal and performance in young adult drivers. Transportation Research Record 2138 (1), pp. 6–12. Cited by: §I.
  • [26] C. Nass, I. Jonsson, H. Harris, B. Reaves, J. Endo, S. Brave, and L. Takayama (2005) Improving automotive safety by pairing driver emotion and car voice emotion. In CHI’05 extended abstracts on Human factors in computing systems, pp. 1973–1976. Cited by: §II.
  • [27] A. Němcová, V. Svozilová, K. Bucsuházy, R. Smíšek, M. Mézl, B. Hesko, M. Belák, M. Bilík, P. Maxera, M. Seitl, et al. (2020) Multimodal features for detection of driver stress and fatigue. IEEE Trans. on Intelligent Transportation Systems. Cited by: §II.
  • [28] G. Neuhold, T. Ollmann, S. Rota Bulo, and P. Kontschieder (2017) The mapillary vistas dataset for semantic understanding of street scenes. In Proceedings of the IEEE Int. Conf. on Computer Vision, pp. 4990–4999. Cited by: §IV-B.
  • [29] A. Palazzi, D. Abati, F. Solera, R. Cucchiara, et al. (2018) Predicting the driver’s focus of attention: the dr (eye) ve project. IEEE Trans. on Pattern Analysis and Machine Intelligence 41 (7), pp. 1720–1733. Cited by: §V-B3.
  • [30] P. E. Paredes, Y. Zhou, N. A. Hamdan, S. Balters, E. Murnane, W. Ju, and J. A. Landay (2018) Just breathe: in-car interventions for guided slow breathing. Proc. of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 2 (1), pp. 1–23. Cited by: §II.
  • [31] M. N. Rastgoo, B. Nakisa, A. Rakotonirainy, V. Chandran, and D. Tjondronegoro (2018) A critical review of proactive detection of driver stress levels based on multimodal measurements. ACM Computing Surveys (CSUR) 51 (5), pp. 1–35. Cited by: §II.
  • [32] S. Ren, K. He, R. Girshick, and J. Sun (2016) Faster r-cnn: towards real-time object detection with region proposal networks. IEEE Trans. on Pattern Analysis and Machine Intelligence 39 (6), pp. 1137–1149. Cited by: §II.
  • [33] M. Rimini-Doering, D. Manstetten, T. Altmueller, U. Ladstaetter, and M. Mahler (2001) Monitoring driver drowsiness and stress in a driving simulator. Cited by: §II.
  • [34] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra (2017) Grad-cam: visual explanations from deep networks via gradient-based localization. In Proc. of the IEEE Int. Conf. on Computer Vision, pp. 618–626. Cited by: §I, §IV-D.
  • [35] F. Simon and C. Corbett (1996) Road traffic offending, stress, age, and accident history among male and female drivers. Ergonomics 39 (5), pp. 757–780. Cited by: §II.
  • [36] K. Simonyan and A. Zisserman (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. Cited by: §IV-C2.
  • [37] L. Wang, Y. Xiong, Z. Wang, Y. Qiao, D. Lin, X. Tang, and L. Van Gool (2016) Temporal segment networks: towards good practices for deep action recognition. In European Conf. on Computer Vision, pp. 20–36. Cited by: §IV-C3.
  • [38] S. Zepf, J. Hernandez, A. Schmitt, W. Minker, and R. W. Picard (2020-06) Driver emotion recognition for intelligent vehicles: a survey. ACM Comput. Surv. 53 (3). External Links: ISSN 0360-0300, Document Cited by: §II, §II, §II.
  • [39] H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia (2017) Pyramid scene parsing network. In Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition, pp. 2881–2890. Cited by: §II.
  • [40] B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba (2016) Learning deep features for discriminative localization. In Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition, pp. 2921–2929. Cited by: §IV-D.
  • [41] B. Zhou, A. Lapedriza, A. Khosla, A. Oliva, and A. Torralba (2017) Places: a 10 million image database for scene recognition. IEEE Trans. on Pattern Analysis and Machine Intelligence 40 (6), pp. 1452–1464. Cited by: §II, §IV-C2.