
Blind-Spot Collision Detection System for Commercial Vehicles Using Multi Deep CNN Architecture

by Muhammad Muzammel, et al.

Buses and heavy vehicles have more blind spots than cars and other road vehicles due to their large size. Accidents caused by these heavy vehicles are therefore more fatal and result in severe injuries to other road users. Such impending blind-spot collisions can be identified early using vision-based object detection approaches. Yet, existing state-of-the-art vision-based object detection models rely heavily on a single feature descriptor for making decisions. In this research, the design of two convolutional neural networks (CNNs) based on high-level feature descriptors and their integration with faster R-CNN is proposed to detect blind-spot collisions for heavy vehicles. Moreover, a fusion approach is proposed that integrates two pre-trained networks (i.e., ResNet-50 and ResNet-101) to extract high-level features for blind-spot vehicle detection. The fusion of features significantly improves the performance of faster R-CNN and outperforms existing state-of-the-art methods. Both approaches are validated on a self-recorded blind-spot vehicle detection dataset for buses and the online LISA vehicle detection dataset. The proposed approaches achieve false detection rates (FDR) of 3.05% and 3.49% on the self-recorded dataset, making them suitable for real-time applications.





1 Introduction

Although bus accidents are relatively rare around the globe, approximately 60,000 buses are still involved in traffic accidents in the United States every year. These accidents lead to 14,000 non-fatal injuries and 300 fatal injuries Feng et al. (2016). Similarly, every year in Europe approximately 20,000 buses are involved in accidents that cause approximately 30,000 (fatal and non-fatal) injuries Evgenikos et al. (2016). These accidents mostly occur due to thrill-seeking driving, speeding, fatigue, stress, and aggressive driver behaviors Öz et al. (2010); Useche et al. (2019). Accidents involving buses and other road users, such as pedestrians, bicyclists, motorcyclists, or car drivers and passengers, usually cause more severe injuries to these road users Craig et al. (2016); Charters et al. (2018); Orsi et al. (2017); Waseem et al. (2019).

The collision detection systems of cars mostly focus on front and rear end collision scenarios Elimalech and Stein (2020); Lee et al. (2018); Ra et al. (2018); Zhao et al. (2019). In addition, different drowsiness detection techniques have been proposed to detect car drivers’ sleep deprivation and prevent possible collisions Abraham et al. (2018); Shameen et al. (2018). At the same time, buses operate in a complicated environment where a significant number of unintended obstacles such as pulling out from bus stops, passengers unloading, pedestrians crossing in front of buses, and bus stop structures, etc. McNeil et al. (2002); Pecheux et al. (2016); Wei et al. (2014), are present. Additionally, buses have higher chances of side collisions due to constrained spaces and maneuverability McNeil et al. (2002). Especially at turns, researchers found that the task demand on bus drivers is very high Pecheux et al. (2016); Wei et al. (2014).

Further, heavy vehicles and buses, which have more blind spots compared to cars and other road users in these environments, are at higher risks of collisions Prati et al. (2018); Silla et al. (2017); Frampton and Millington (2022). Improvements to heavy vehicle and bus safety have been initiated by many countries through the installation of additional mirrors. Yet, there are still some blind-spot areas where drivers cannot see other road users Girbes et al. (2016); Girbés et al. (2016). In addition, buses may have many passengers on board. A significant number of on-board passenger incidents have been reported due to sudden braking or stopping Zhang et al. (2000). These challenges may entail different requirements for collision detections for public/transit buses than for cars. A blind-spot collision detection system can be designed for buses to predict impending collisions in their proximity and to reduce operational interruptions. It could provide adequate time for the driver to smoothly push the brake or take any other precautionary measures to avoid such imminent collision threats as well as avoid injuries and trauma inside the bus.

Over the past few years, many types of collision detection techniques have been proposed Wisultschew et al. (2021); Muzammel et al. (2017); Elimalech and Stein (2020); Lee et al. (2018); Goodall et al. (2022). Among these, vision-based collision detection techniques provide reliable detection of vehicles across a large area Elimalech and Stein (2020); Lee et al. (2018); Goodall et al. (2022). This is due to cameras that provide a wide field of view. Several vision-based blind-spot collision detection techniques for cars and other vehicles have been proposed Elimalech and Stein (2020); Lee et al. (2018); Ra et al. (2018); Zhao et al. (2019); Tseng et al. (2014); Wu et al. (2013); Singh et al. (2014). In vision-based techniques, the position of the camera plays a significant role. Depending on the position of the installed camera, vision-based blind-spot collision detection systems are categorized as rear camera based Ra et al. (2018); Dooley et al. (2015) or side camera-based systems Tseng et al. (2014); Wu et al. (2013); Singh et al. (2014); Goodall et al. (2022). Rear camera-based vision systems detect vehicles by acquiring the images from a rear fish-eye camera. The major drawback of using a rear fish-eye camera is that the captured vehicle suffers from severe radial distortions, leading to huge differences in appearance for different positions Ra et al. (2018).

In contrast, side camera-based vision systems have the camera installed directly at the bottom or next to the side mirrors that directly face the blind spot and detect the approaching vehicles. In these systems, the vehicle appearance drastically changes with its position; yet, it has the advantage of high resolution images for vehicle detection Ra et al. (2018).

In vision-based blind-spot vehicle detection techniques, deep convolutional neural network (CNN) models often achieve better performance Lee et al. (2018); Zhao et al. (2019) compared to conventional machine learning models based on appearance or histogram of oriented gradients (HOG) features Ra et al. (2018); Tseng et al. (2014); Wu et al. (2013). This is because convolutional layers can extract and learn richer features from the raw RGB channels than traditional descriptors such as HOG. However, blind-spot vehicle detection remains challenging on account of the large variations in appearance and structure, especially the ubiquitous occlusions that further increase intra-class variations.

Recently, deep learning techniques have proved to be a game changer in object detection. Many deep learning models have been proposed to detect objects of different types and sizes in images Ren et al. (2017); Redmon et al. (2016); Redmon and Farhadi (2017). Among these models, two-stage object detectors show better accuracy than one-stage object detectors Lin et al. (2017); Huang et al. (2017); Du et al. (2020). Therefore, two-stage object detectors such as faster R-CNN Ren et al. (2017) seem more suitable for blind-spot vehicle detection. In faster R-CNN, a self-designed CNN or a pre-trained network (such as VGG16, ResNet-50, or ResNet-101) is used to extract a feature map Theckedath and Sedamkar (2020); He et al. (2016). These networks are trained on large datasets and have proved to perform better than simple convolutional neural networks (CNNs). In medical applications, it has been reported that multi-CNNs perform much better in residual feature extraction and classification than single CNNs Yang et al. (2017); Muzammel et al. (2021); Mendels et al. (2017).

In this paper, we propose a novel blind-spot vehicle detection technique for commercial vehicles based on multiple convolutional neural networks (CNNs) and faster R-CNN. Two different CNN-based approaches/models with faster R-CNN as the object detector are proposed for blind-spot vehicle detection. In the first approach/model, two self-designed CNNs are used to extract features, and their outputs are concatenated and fed to another self-designed CNN. Next, faster R-CNN uses these high-level features for vehicle detection. In the second approach/model, two ResNet networks (ResNet-50 and ResNet-101) are concatenated with the self-designed CNN to extract features. Finally, these extracted features are fed to the faster R-CNN for blind-spot vehicle detection. The scientific contributions of this research are as follows:

  1. Design of two high-level CNN based feature descriptors for blind-spot vehicle detection for heavy vehicles;

  2. Design of fusion technique for different high level feature descriptors and its integration with the faster R-CNN. In addition, performance comparison with existing state-of-the-art approaches;

  3. Introduction of fusion technique for pre-trained high-level feature descriptors for object detection application.

2 Related Work

Recent deep convolutional neural network (CNN) based algorithms demonstrate extraordinary performance in various vision tasks Krizhevsky et al. (2017); Karpathy et al. (2014); Guo et al. (2014); Cui et al. (2014). Convolutional neural networks extract features from raw images through large-scale training, with high flexibility and generalization capability. The first CNN-based object detection and classification system was presented in 2013 Han et al. (2018); Sermanet et al. (2013). Since then, many deep learning-based object detection and classification models have been proposed, including the region-based convolutional neural network (R-CNN) Girshick et al. (2014), fast R-CNN Girshick (2015), faster R-CNN Ren et al. (2017), the single shot multibox detector (SSD) Liu et al. (2016), R-FCN Dai et al., you only look once (YOLO) Redmon et al. (2016), and YOLOv2 Redmon and Farhadi (2017).

R-CNN models achieve promising detection performance and are a commonly employed paradigm for object detection Girshick et al. (2014). They have essential steps, such as object regional proposal generation with selective search (SS), CNN feature extraction, selected objects classification, and regression based on the obtained CNN features. However, there are large time and computation costs to train the network due to repeated extraction of CNN features for thousands of object proposals Chu et al. (2017).

In fast R-CNN Girshick (2015), the feature extraction process is accelerated by sharing the forward-pass computation. However, because regional proposals are still generated by selective search (SS), it remains slow and requires significant computational capacity to train. In faster R-CNN Ren et al. (2017), regional proposal generation using SS was replaced by proposal generation using a CNN (the region proposal network). This reduces the computational cost and makes the network efficient and quick compared to R-CNN and fast R-CNN.

YOLO Redmon et al. (2016) frames object detection as a regression problem, mapping image pixels directly to bounding boxes and associated class probabilities. In YOLO, a single CNN predicts the bounding boxes and the class probabilities for these boxes. It utilizes a custom network based on the GoogLeNet architecture. An improved model called YOLOv2 Redmon and Farhadi (2017) achieves comparable results on standard tasks. YOLOv2 employs a new backbone called Darknet-19, which has 19 convolutional layers and 5 max-pooling layers and requires only 5.58 billion operations to process an image. However, the YOLOv2 network still lacks some important elements: it has no residual blocks, no skip connections, and no up-sampling.

The YOLOv3 network is the advanced version of YOLOv2 and incorporates all of these elements. YOLOv3 uses Darknet-53, a 53-layer network trained on ImageNet. For object detection, 53 more layers are stacked onto it, giving a 106-layer fully convolutional architecture Redmon and Farhadi (2018). Recently, two new versions of YOLO were introduced, named YOLOv4 and YOLOv5 Bochkovskiy et al. (2020); Jocher et al. (2020). Other than YOLO, there are also other one-stage object detectors, such as SSD Liu et al. (2016) and RetinaNet Lin et al. (2017).

Recent studies show that two-stage object detectors obtained better accuracy compared to one-stage object detectors Lin et al. (2017); Huang et al. (2017), thus, making faster R-CNN a suitable candidate for blind-spot vehicle detection. However, in these object detectors, the whole system accuracy is profoundly dependent on the feature set obtained from the neural networks. In recent object detectors, it has also been proposed to collect features from different stages of the neural network to improve the system performance Lin et al. (2017); Tan et al. (2020). In medical applications, it has been demonstrated that the usage of multiple feature extractors can significantly improve system accuracy Yang et al. (2017); Muzammel et al. (2021); Mendels et al. (2017).

Thus, to increase system accuracy, in this research multiple CNN networks based blind-spot vehicle detection approaches are proposed. Along with the fusion of a self-designed convolutional neural network, system performance is also investigated using a fusion approach for pre-trained convolutional neural networks.

3 Proposed Methodology

The proposed methodology comprises several steps, including pre-processing of datasets, anchor box estimation, data augmentation, and multi-CNN network design, as shown in Figure 1.


Figure 1: Steps of the proposed approaches to detect blind-spot vehicles using faster R-CNN object detection.

3.1 Pre-Processing

For the self-recorded dataset, image labels were created using the MATLAB 2019a “Ground Truth Labeller App”, whereas for the online dataset, ground truths were provided with the image set. Next, images were resized to 224 × 224 × 3 to enhance the computational performance of the proposed deep neural networks.
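This resizing step can be sketched as follows; the snippet uses nearest-neighbour sampling in NumPy purely as an illustration (the interpolation method actually used is not stated here, and the function name is hypothetical):

```python
import numpy as np

def resize_nearest(img, out_h=224, out_w=224):
    """Nearest-neighbour resize of an H x W x 3 image to out_h x out_w x 3."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h   # source row for each output row
    cols = np.arange(out_w) * w // out_w   # source column for each output column
    return img[rows][:, cols]
```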

3.2 Anchor Boxes Estimation

Anchor boxes are important parameters of deep learning object recognition. The shape, scale, and the number of anchor boxes impact the efficiency and accuracy of the object detector. Figure 2 indicates the plot of aspect ratio and box area of the self-recorded dataset.

Figure 2: Anchor boxes plot to identify sizes and shapes of different vehicles for faster R-CNN object detection. Each blue circle indicates the label box area versus the label box aspect ratio.

The anchor boxes plot reveals that many vehicles have a similar size and shape. However, vehicle shapes are still spread out, indicating the difficulty of choosing anchor boxes manually. Therefore, the clustering algorithm presented in Redmon and Farhadi (2017) was used to estimate anchor boxes; it groups similar label boxes together using an intersection-over-union (IoU) based distance metric.
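Assuming the standard IoU-based k-means of Redmon and Farhadi (2017) over label-box widths and heights (distance d = 1 − IoU), the clustering can be sketched as follows; function names are illustrative:

```python
import numpy as np

def iou_wh(box, centroids):
    """IoU between one (w, h) box and k centroid (w, h) boxes,
    treating all boxes as aligned at a common corner."""
    w = np.minimum(box[0], centroids[:, 0])
    h = np.minimum(box[1], centroids[:, 1])
    inter = w * h
    union = box[0] * box[1] + centroids[:, 0] * centroids[:, 1] - inter
    return inter / union

def kmeans_anchors(boxes, k, iters=100, seed=0):
    """Cluster (w, h) label boxes with distance d = 1 - IoU."""
    rng = np.random.default_rng(seed)
    centroids = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        # assign each box to the centroid with the highest IoU (lowest 1 - IoU)
        assign = np.array([np.argmax(iou_wh(b, centroids)) for b in boxes])
        new = np.array([boxes[assign == i].mean(axis=0) if np.any(assign == i)
                        else centroids[i] for i in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids
```

The resulting centroids serve directly as anchor-box (width, height) priors.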

3.3 Data Augmentation

In this work, data augmentation is performed to minimize over-fitting and to improve the proposed network’s robustness against noise. A random brightness augmentation technique is used to perturb the images: the brightness of each image is augmented by randomly darkening or brightening it, with darkening and brightening factors drawn randomly from [0.5, 1.0] and [1.0, 1.5], respectively.
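This augmentation can be sketched as follows; a NumPy illustration in which the even split between darkening and brightening is an assumption not stated in the text:

```python
import numpy as np

def random_brightness(img, rng):
    """Randomly darken (factor in [0.5, 1.0]) or brighten (factor in
    [1.0, 1.5]) an RGB image, clipping to the valid [0, 255] range."""
    if rng.random() < 0.5:
        factor = rng.uniform(0.5, 1.0)   # darken
    else:
        factor = rng.uniform(1.0, 1.5)   # brighten
    return np.clip(img.astype(np.float32) * factor, 0, 255).astype(np.uint8)
```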

3.4 Proposed CNNs and Their Integration with Faster R-CNN

Initially, the same images are fed to two different deep learning networks to extract high-level features. Subsequently, these high-level features are fed to another CNN architecture to combine and smooth these features. Finally, faster R-CNN based object detection is performed to detect impending collisions. The layer wise connection of deep learning architectures and their integration with faster R-CNN are shown in Figure 3.

Figure 3: Layer wise integration of proposed models with faster R-CNN for blind-spot vehicle detection.

3.4.1 Proposed High Level Feature Descriptors Architecture

Two different approaches are used to extract deep features: (1) self-designed convolutional neural networks and (2) pre-trained convolutional networks, as shown in Figure 3. Additional details of these feature descriptors are given below.

Self-Designed High-Level Feature Descriptors

In the first approach, multiple self-designed convolutional neural networks are connected with the faster R-CNN network. The layer wise connection of the two self-designed CNNs (named DConNet and VeDConNet) is shown in Figure 4. Initially, DConNet and VeDConNet are used to extract deep features, and their outputs are provided to a third 2D CNN architecture for feature addition and smoothing.

Figure 4: Proposed 2D CNN architectures to extract deep features for blind-spot vehicle detection.

Both the DConNet and VeDConNet architectures consist of five convolutional blocks. In DConNet, all five blocks are composed of two 2D convolutional layers, each followed by a ReLU layer, with a max-pooling layer at the end of each block. In VeDConNet, the initial two blocks are similar to DConNet: they consist of two 2D convolutional layers, each followed by a ReLU activation function, with a max-pooling layer after the second ReLU. The other three blocks of VeDConNet comprise four convolutional layers, each followed by a ReLU layer, with a max-pooling layer after the fourth ReLU activation function.
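The block structure described above can be summarized programmatically; this sketch only enumerates the layer types per block and does not implement the convolutions (layer names are placeholders):

```python
def conv_block(n_convs):
    """One block: n_convs (conv -> ReLU) pairs followed by a max-pooling layer."""
    layers = []
    for _ in range(n_convs):
        layers += ["conv", "relu"]
    return layers + ["maxpool"]

def build(arch):
    """arch lists the number of conv layers per block."""
    layers = []
    for n in arch:
        layers += conv_block(n)
    return layers

DCONNET   = build([2, 2, 2, 2, 2])   # five blocks of two conv layers each
VEDCONNET = build([2, 2, 4, 4, 4])   # last three blocks have four conv layers
```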

Pre-Trained Feature Descriptors

In the second approach, two pre-trained convolutional networks (i.e., ResNet-101 and ResNet-50) are linked with the third CNN architecture, which is further connected with the faster R-CNN network for vehicle detection. The features are obtained from the ReLU Res4b22 and ReLU 40 layers of ResNet-101 and ResNet-50, respectively, as shown in Figure 5.

Figure 5: Pre-trained Resnet 50 and Resnet 101 networks for extracting deep features.

3.4.2 Features Addition and Smoothness

The high-level features obtained from the two self-designed/pre-trained CNN architectures are added together through the addition layer, as shown in Figure 3. Let F1 and F2 be the outputs of the first and second deep neural networks; then their addition is given as:

F = F1 + F2

The addition layer is followed by a convolutional layer and a ReLU activation function for feature smoothing.
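A toy NumPy illustration of this addition-and-smoothing stage; a per-pixel channel mix stands in for the convolutional layer, and all shapes and weights are illustrative:

```python
import numpy as np

def fuse(f1, f2, w, b):
    """Element-wise addition of two feature maps (H x W x C), followed by a
    1x1-convolution-style channel mix (w: C x C, b: C) and a ReLU,
    mimicking the addition layer plus conv + ReLU smoothing stage."""
    added = f1 + f2                  # addition layer: F = F1 + F2
    mixed = added @ w + b            # channel mixing (1x1 convolution analogue)
    return np.maximum(mixed, 0.0)    # ReLU
```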

3.4.3 Integration with Faster R-CNN

As shown in Figure 3, faster R-CNN takes the high-level features from the ReLU layer to perform blind-spot vehicle detection. The obtained feature map is fed to the region proposal network (RPN) and the ROI pooling layer of the faster R-CNN. The loss function of faster R-CNN can be divided into two parts, the R-CNN loss Girshick (2015) and the RPN loss Ren et al. (2017), as shown in the equations below:

L = L_RPN + L_RCNN

L({p_i}, {t_i}) = (1/N_cls) Σ_i L_cls(p_i, p_i*) + λ (1/N_reg) Σ_i p_i* L_reg(t_i, t_i*)

where p_i is the predicted object probability of anchor i, p_i* its ground-truth label, t_i and t_i* the predicted and ground-truth box coordinates, L_cls a log loss, and L_reg a smooth-L1 loss. The detailed description of the faster R-CNN architecture and the above equations is given in Girshick (2015); Ren et al. (2017); Xu et al. (2022).
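As an illustration, the RPN part of this two-part objective from Ren et al. (2017) can be sketched in NumPy: a binary log loss over all anchors plus a smooth-L1 regression loss over positive anchors only (the normalization here is simplified relative to the original):

```python
import numpy as np

def smooth_l1(x):
    """Smooth-L1 (Huber) penalty used for box regression."""
    ax = np.abs(x)
    return np.where(ax < 1.0, 0.5 * x**2, ax - 0.5)

def rpn_loss(p, p_star, t, t_star, lam=1.0):
    """p: predicted objectness per anchor, p_star: 0/1 labels,
    t, t_star: predicted and ground-truth box offsets (n x 4)."""
    eps = 1e-12
    # classification log loss averaged over anchors
    l_cls = -(p_star * np.log(p + eps)
              + (1 - p_star) * np.log(1 - p + eps)).mean()
    # regression loss counted only for positive anchors
    l_reg = (p_star[:, None] * smooth_l1(t - t_star)).sum() / max(p_star.sum(), 1)
    return l_cls + lam * l_reg
```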

4 Results and Discussion

In this section, the vehicle detection using the proposed deep learning models is discussed in detail. We compared the performance of both approaches with each other and with the state-of-the-art benchmark approaches. This section also includes the dataset description along with the details of the proposed network implementation.

4.1 Dataset

A blind-spot collision dataset was recorded by attaching cameras to the side mirrors of a bus. The placement of cameras is shown in Figure 6.

Figure 6: Cameras mounted on the bus mirrors to detect the presence of vehicles in blind spots.

The dataset was recorded in Ipoh, Seri Iskandar and along Ipoh-Lumut highway in Perak, Malaysia. Ipoh is a city in northwestern Malaysia, whereas Seri Iskandar is located about 40 km southwest of Ipoh. Universiti Teknologi PETRONAS is also located in the new township of Seri Iskandar. Data were recorded in multiple round trips from Seri Iskandar to Ipoh for different lighting conditions. In addition, data were recorded in the cities of Ipoh and Seri Iskandar for dense traffic scenarios. Moreover, Malaysia has a tropical climate and the rainfall remains high year-round, thus allowing us to easily record data in different weather conditions. Finally, a set of 3000 images from the self-recorded dataset was selected in which vehicles appeared in blind-spot areas.

To the best of our knowledge, there is no publicly available online dataset for heavy vehicles. Therefore, a publicly available online dataset named “Laboratory for Intelligent and Safe Automobiles (LISA)” Sivaraman and Trivedi (2010) for cars was used to validate the proposed method. In the LISA dataset, the camera was installed at the front of the car. The detailed description of both datasets is given in Table 1. Both datasets are divided randomly into 80% for training and 20% for testing.

Dataset Data Description Source of Recording Total Images
Self-Recorded Dataset for Blind Spot Collision Detection Different road scenarios with multiple vehicles and various traffic and lighting conditions. Bus 3000
LISA-Dense Sivaraman and Trivedi (2010) Multiple vehicles, dense traffic, daytime, highway. Car 1600
LISA-Sunny Sivaraman and Trivedi (2010) Multiple vehicles, medium traffic, daytime, highway. Car 300
LISA-Urban Sivaraman and Trivedi (2010) Single vehicle, urban scenario, cloudy morning. Car 300
Total 5200
Table 1: Utilized dataset characteristics to validate the proposed deep CNN based approaches for blind-spot collision detection.
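The random 80/20 train/test split described above can be sketched as follows (seed and function name are illustrative):

```python
import numpy as np

def split_80_20(n_items, seed=0):
    """Random 80/20 train/test split over image indices."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_items)
    cut = int(0.8 * n_items)
    return idx[:cut], idx[cut:]
```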

4.2 Network Implementation Details

The proposed work was implemented on an Intel® Xeon(R) E-2124G CPU @ 3.40 GHz (installed memory: 32 GB) with an NVIDIA Quadro P4000 (GP104GL) graphics card. MATLAB 2019a was used as the platform to investigate the proposed methodology.

In the first approach, both CNN-based feature extraction architectures (i.e., DConNet and VeDConNet) have five blocks with N convolutional filters per block, where, from input to output, N = [64, 128, 256, 512, 512]. Moreover, after the addition layer there was also a convolutional layer with a total of 512 filters. For all these convolutional layers, the filter size was 3 × 3, and ReLU was used as the activation function. The stride and the pool size of the max-pooling layers were both set to 2.


In the second approach, for Resnet 101 and Resnet 50, standard weights were used. Moreover, after the addition layer, there was a convolution layer with a total of 512 filters and ReLU as an activation function.

In both approaches, we used an SGDM optimizer with a learning rate of 10 and a momentum of 0.9. The batch size was set to 20 samples, and the verbose frequency was set to 20. Samples that overlap the ground truth boxes by 0 to 0.3 are used as negative training samples, whereas samples that overlap the ground truth boxes by 0.6 to 1.0 are used as positive training samples.
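The overlap-based labelling of training samples can be sketched as follows; box format and function names are illustrative:

```python
def box_iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) form."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def label_sample(box, gt_box):
    """Positive if overlap with ground truth is in [0.6, 1.0], negative if
    in [0, 0.3], otherwise ignored during training."""
    iou = box_iou(box, gt_box)
    if iou >= 0.6:
        return "positive"
    if iou <= 0.3:
        return "negative"
    return "ignored"
```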

4.3 Evaluation Metrics

The existing state-of-the-art approaches measure performance in terms of true positive rate (TPR), false detection rate (FDR), and frame rate Sivaraman and Trivedi (2010); Muzammel et al. (2017); Roychowdhury and Muppirisetty (2018); Satzoda and Trivedi (2015). Therefore, the same parameters are used to evaluate the performance of the proposed models. TPR (also known as sensitivity) is the ability to correctly detect blind-spot vehicles. FDR refers to false blind-spot vehicle detections among the total detection incidents. Moreover, the frame rate is defined as the total number of frames processed in one second Muzammel et al. (2017). If TP, FN, and FP represent the true positives, false negatives, and false positives, respectively, then the formulas for TPR and FDR are given as:

TPR = TP / (TP + FN)

FDR = FP / (FP + TP)
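Assuming the standard definitions implied by the text (TPR over actual positives, FDR over all detections), the two metrics can be expressed directly in code:

```python
def tpr(tp, fn):
    """True positive rate: TP / (TP + FN)."""
    return tp / (tp + fn)

def fdr(fp, tp):
    """False detection rate: FP / (FP + TP)."""
    return fp / (fp + tp)
```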
4.4 Results Analysis

The proposed approaches/models appeared to be successful in detecting the vehicles for both self-recorded and online datasets. A few of the images from blind-spot detection are shown in Figure 7.

Figure 7: Different types of vehicle detection from self-recorded dataset: (a) three different vehicles in a parallel lane with the bus; (b) one truck in a parallel lane and two cars in the opposite lane; (c) motorcycle at a certain distance; (d) motorcycle very close to the bus.

Figure 7 shows that the proposed CNN based models were successfully able to detect different types of vehicles, including light and heavy vehicles and motor bikes, in different scenarios and lighting conditions. The proposed work was successful enough to recognize multiple vehicles simultaneously, as shown in Figure 7a,b. These figures also show the presence of shadows along with the vehicles. It reveals the significance of the proposed vehicle detection algorithm, as it was capable of differentiating remarkably between real vehicles and their shadows; this leads to the notable reduction of possible false detection.

Furthermore, it is shown in Figure 7c,d that the proposed technique detects a motorcyclist approaching and driving very close to the bus. A small mistake by the bus driver in such scenarios could lead to a fatal accident. Therefore, the blind-spot collision detection systems are very important for heavy vehicles.

Similarly, vehicle detection on the online LISA dataset Sivaraman and Trivedi (2010) is shown in Figure 9, which shows that our models successfully detected all types of vehicles in different scenarios. Figure 9a,b show the detection of vehicles in dense traffic; the proposed models reliably detected multiple vehicles simultaneously in dense scenarios, even in the presence of vehicle shadows on the road. Figure 9c,d exhibit the detection of vehicles on a highway, and Figure 9e,f convey the detection of vehicles in urban areas. In both figures, we can see lane markers on the road, which were successfully ignored by the proposed systems. Furthermore, Figure 9f shows a person crossing the road, which could lead to a false detection; however, our models identified the vehicle and successfully differentiated between the person and the vehicle. In the LISA dataset, labels were provided only for vehicles; therefore, the proposed model detected only the vehicle.

Figure 9: Vehicle detection from the online LISA dataset: (a) five vehicles detected in dense traffic scenario; (b) six vehicles detected in a dense traffic condition; (c) vehicle detection on highway; (d) vehicle detection on highway; (e) vehicle detection in urban area; and (f) differentiating vehicle and pedestrian.

The visual analysis of the true positive rate (TPR) and false detection rate (FDR) of the proposed approaches on the different sets of data is presented in Figure 10; the figure shows that both approaches delivered reliable outcomes for the self-recorded as well as the online datasets. The TPR obtained from faster R-CNN with the pre-trained fused (ResNet-101 and ResNet-50) high-level feature descriptors is slightly higher than that of faster R-CNN with the proposed fused (DConNet and VeDConNet) feature descriptors. However, faster R-CNN with the proposed feature descriptors provides a lower FDR for the self-recorded dataset and a comparable FDR for the LISA-Urban dataset.

Figure 10: TPR (%) and FDR (%) analysis of proposed approaches for self-recorded and online LISA datasets.

The frame rate (frames per second) for each dataset used in both approaches is given in Table 2, which shows that the first model has a comparatively better frame rate. The pre-trained model (i.e., faster R-CNN with high-level feature descriptors from ResNet-101 and ResNet-50) took more time to compute features than the model presented in the first approach (i.e., faster R-CNN with high-level feature descriptors from DConNet and VeDConNet). Hence, the model presented in the first approach is capable of providing significant performance in vehicle detection scenarios where less computational time is required.

Proposed Models Dataset Frame Rate (fps)
Model 1 Self-Recorded 1.03
LISA-Dense 1.10
LISA-Urban 1.39
LISA-Sunny 1.14
Model 2 Self-Recorded 0.89
LISA-Dense 0.94
LISA-Urban 1.12
LISA-Sunny 1.00
Table 2: Analysis of both models in terms of frame rate to validate the proposed deep neural architectures.

The detailed comparisons of different parameters, including TPR, FDR, and frame rate from the existing state-of-the-art techniques and our proposed models, are presented in Table 3. In addition, the graphical representation of true positive and false detection rates (i.e., TPR and FDR) of both models and their comparisons with the existing state-of-the-art approaches are given in Figure 11.

Reference Dataset TPR (%) FDR (%) Frame Rate (fps)
Proposed Approach 2 Self-Recorded 98.72 3.49 0.89
Proposed Approach 2 LISA-Dense 98.06 3.12 0.94
Proposed Approach 2 LISA-Urban 99.45 1.67 1.12
Proposed Approach 2 LISA-Sunny 99.34 2.78 1.00
Proposed Approach 1 Self-Recorded 98.19 3.05 1.03
Proposed Approach 1 LISA-Dense 97.87 3.98 1.10
Proposed Approach 1 LISA-Urban 99.02 1.66 1.39
Proposed Approach 1 LISA-Sunny 98.89 3.17 1.14
Roychowdhury and Muppirisetty (2018) LISA-Urban 100.00 4.50 1.10
Roychowdhury and Muppirisetty (2018) LISA-Sunny 98.00 4.10 1.10
Muzammel et al. (2017) LISA-Dense 95.01 5.01 29.04
Muzammel et al. (2017) LISA-Urban 94.00 6.60 25.06
Muzammel et al. (2017) LISA-Sunny 97.00 6.03 37.50
Satzoda and Trivedi (2015) LISA-Dense 94.50 6.80 15.50
Satzoda and Trivedi (2015) LISA-Sunny 98.00 9.00 25.40
Sivaraman and Trivedi (2010) LISA-Dense 95.00 6.40 —
Sivaraman and Trivedi (2010) LISA-Urban 91.70 25.50 —
Sivaraman and Trivedi (2010) LISA-Sunny 99.80 8.50 —
Table 3: Comparisons of the proposed approaches with existing state-of-the-art approaches in terms of true positive rate (TPR), false detection rate (FDR), and frame rate.

From Table 3, it can be deduced that our models achieved significantly better results than the existing methods (both deep learning and machine learning models). The deep learning model of Roychowdhury and Muppirisetty (2018) achieved 100% and 98% TPR on the LISA-Urban and LISA-Sunny datasets, respectively. The proposed model (i.e., faster R-CNN with high-level feature descriptors of DConNet and VeDConNet) achieved a higher TPR on the LISA-Sunny dataset and a very close TPR on the LISA-Urban dataset. Our models outperformed all the existing methods in terms of FDR: a very low false detection rate was obtained on all three online datasets (LISA-Dense, LISA-Sunny, and LISA-Urban) compared to the existing machine/deep learning techniques. Moreover, higher TPR values were obtained on all three LISA datasets compared to the existing machine learning techniques.

Figure 11: TPR (%) and FDR (%) analysis of proposed experiments for self-recorded and online LISA datasets.

From Figure 11, one can see that, for the first model, the FDR is less than 4% for all datasets, making it suitable for real time applications. Further, TPR values are almost constant for all types of datasets. It shows that the model achieved a reliable result for all types of scenarios.

4.5 Discussion

The proposed approaches successfully detected different types of vehicles, such as motorcycles, cars, and trucks. In addition, both approaches proved reliable under the dense traffic conditions of the online LISA dataset. The fusion of pre-trained networks provided higher accuracy on both the self-recorded and online datasets than the first approach, in which two self-designed CNNs are used. However, this gain came at the cost of a lower frame rate for the second approach compared to the first.
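
The fusion step can be pictured as a channel-wise concatenation of the two backbones' final feature maps before the detection head. The following is a minimal sketch of that idea, not the authors' implementation; the shapes assume the standard 2048-channel final convolutional outputs of ResNet-50 and ResNet-101, and `fuse_features` is an illustrative helper:

```python
import numpy as np

def fuse_features(feat_a: np.ndarray, feat_b: np.ndarray) -> np.ndarray:
    """Concatenate (N, C, H, W) feature maps from two backbones channel-wise.

    Both maps must share the batch size and spatial dimensions; only the
    channel counts may differ.
    """
    assert feat_a.shape[0] == feat_b.shape[0] and feat_a.shape[2:] == feat_b.shape[2:]
    return np.concatenate([feat_a, feat_b], axis=1)

# The final conv stages of ResNet-50 and ResNet-101 both emit 2048 channels,
# so fusing them doubles the channel depth fed to the detection head.
feat_resnet50 = np.random.randn(1, 2048, 7, 7)
feat_resnet101 = np.random.randn(1, 2048, 7, 7)
fused = fuse_features(feat_resnet50, feat_resnet101)
print(fused.shape)  # (1, 4096, 7, 7)
```

Concatenation (rather than element-wise addition) preserves both descriptors intact and lets the detection head learn how to weight them, at the price of a wider feature tensor and hence more computation per frame.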

For the online datasets, both approaches obtained higher or comparable accuracy relative to the existing state-of-the-art approaches, as given in Table 3. For LISA-Dense, the highest TPR value of 98.06% was obtained by the second proposed approach, followed by the first approach with a value of 97.87%. Further, the machine learning approaches of Muzammel et al. (2017), Satzoda and Trivedi (2015), and Sivaraman and Trivedi (2010) reported TPR values of 95.01%, 94.50%, and 95%, respectively. Roychowdhury and Muppirisetty (2018) did not report any results for the LISA-Dense dataset. For LISA-Urban, the highest TPR value was obtained by Roychowdhury and Muppirisetty (2018), followed by the proposed second approach, while the lowest TPR value of 91.70% was obtained by Sivaraman and Trivedi (2010).

As Figure 11 shows, the fusion of features significantly improved the performance of faster R-CNN. A notable reduction in false detections was found for the online datasets compared to the deep learning Roychowdhury and Muppirisetty (2018) and machine learning approaches Sivaraman and Trivedi (2010); Muzammel et al. (2017); Satzoda and Trivedi (2015). A system with a lower false detection rate will issue fewer false warnings and thus increase drivers' trust in the system. It has been found in the literature that collision warnings reduce the attention resources required for processing the target correctly Muzammel et al. (2018). In addition, collision warnings facilitate the sensory processing of the target Fort et al. (2013); Bueno et al. (2012). Finally, our fusion results are in line with the studies in Yang et al. (2017); Muzammel et al. (2021); Mendels et al. (2017).

With regard to the comparison between the two approaches, the first model obtained a lower FDR than the second for the self-recorded and LISA-Urban datasets, and a higher frame rate for all the datasets. For all remaining TPR and FDR values, the second model outperformed the first. There is therefore a slight trade-off between detection performance and computation time.
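
The computation-time side of this trade-off is straightforward to quantify. A minimal, hypothetical benchmarking helper might look as follows; the detector below is a stand-in callable, not one of our models:

```python
import time

def measure_fps(detect, frames, warmup=3):
    """Rough end-to-end throughput (frames/s) of a per-frame detector callable."""
    for f in frames[:warmup]:
        detect(f)                       # warm-up passes (caches, lazy init)
    start = time.perf_counter()
    for f in frames:
        detect(f)
    return len(frames) / (time.perf_counter() - start)

# Stand-in detector that takes at least 5 ms per frame, bounding FPS below 200.
slow_detector = lambda frame: time.sleep(0.005)
fps = measure_fps(slow_detector, frames=[None] * 50)
print(f"{fps:.1f} FPS")
```

Averaging over many frames after a warm-up phase gives a more stable estimate than timing a single inference, which is how frame rates such as those in Table 3 are typically reported.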

5 Conclusions and Future Work

In this research, we propose deep neural architectures for blind-spot vehicle detection for heavy vehicles. Two different models for feature extraction are used with the faster R-CNN network. Furthermore, the high-level features obtained from both networks are fused together to improve the network performance. The proposed models successfully detected blind-spot vehicles with reliable accuracy on both the self-recorded and publicly available datasets. Moreover, the fusion of feature extraction networks improved the results significantly, with a notable increment in performance. In addition, we compared our fusion model with state-of-the-art benchmark, machine learning, and deep learning approaches. Our proposed work outperformed all the existing approaches for vehicle detection in various scenarios, including dense traffic, urban surroundings, with and without pedestrians, shadows, and different weather conditions. The proposed model is applicable not only to buses but also to other heavy vehicles such as trucks, trailers, and oil tankers. This research work is limited to the integration of only two convolutional neural networks with faster R-CNN. In the future, more than two convolutional neural networks may be integrated with faster R-CNN, and a parametric study of accuracy and frame rate may be performed.

Conceptualization, M.Z.Y.; data curation, M.M. and M.N.M.S.; formal analysis, M.A.A.; investigation, M.N.M.S.; methodology, M.M.; supervision, M.Z.Y.; validation, F.S.; visualization, M.A.A.; writing—original draft, M.M. and F.S.; writing—review and editing, M.Z.Y., M.N.M.S., F.S. and M.A.A. All authors have read and agreed to the published version of the manuscript.

This research was supported in part by Ministry of Education Malaysia under Higher Institutional Centre of Excellence (HICoE) Scheme awarded to the Centre for Intelligent Signal and Imaging Research (CISIR), Universiti Teknologi PETRONAS (UTP), Malaysia; and, in part, by the Yayasan Universiti Teknologi PETRONAS (YUTP) Fund under Grant 015LC0-239.


We express our gratitude and acknowledgment to the Centre for Intelligent Signal and Imaging Research (CISIR) and the Electrical and Electronic Engineering Department, Universiti Teknologi PETRONAS (UTP), Malaysia.

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CNN Convolutional neural network
HOG Histogram of oriented gradients
SS Selective search
YOLO You only look once
LISA Laboratory for Intelligent and Safe Automobiles
SGDM Stochastic gradient descent with momentum
TPR True positive rate
FDR False detection rate
References


  • Feng et al. (2016) Feng, S.; Li, Z.; Ci, Y.; Zhang, G. Risk factors affecting fatal bus accident severity: Their impact on different types of bus drivers. Accid. Anal. Prev. 2016, 86, 29–39. [CrossRef] [PubMed]
  • Evgenikos et al. (2016) Evgenikos, P.; Yannis, G.; Folla, K.; Bauer, R.; Machata, K.; Brandstaetter, C. Characteristics and causes of heavy goods vehicles and buses accidents in Europe. Transp. Res. Procedia 2016, 14, 2158–2167. [CrossRef]
  • Öz et al. (2010) Öz, B.; Özkan, T.; Lajunen, T. Professional and non-professional drivers’ stress reactions and risky driving. Transp. Res. Part F Traffic Psychol. Behav. 2010, 13, 32–40. [CrossRef]
  • Useche et al. (2019) Useche, S.A.; Montoro, L.; Alonso, F.; Pastor, J.C. Psychosocial work factors, job stress and strain at the wheel: validation of the copenhagen psychosocial questionnaire (COPSOQ) in professional drivers. Front. Psychol. 2019, 10, 1531. [CrossRef]
  • Craig et al. (2016) Craig, J.L.; Lowman, A.; Schneeberger, J.D.; Burnier, C.; Lesh, M. Transit Vehicle Collision Characteristics for Connected Vehicle Applications Research: 2009-2014 Analysis of Collisions Involving Transit Vehicles and Applicability of Connected Vehicle Solutions; Technical Report, United States; Joint Program Office for Intelligent Transportation Systems: Washington, DC, USA, 2016.
  • Charters et al. (2018) Charters, K.E.; Gabbe, B.J.; Mitra, B. Pedestrian traffic injury in Victoria, Australia. Injury 2018, 49, 256–260. [CrossRef]
  • Orsi et al. (2017) Orsi, C.; Montomoli, C.; Otte, D.; Morandi, A. Road accidents involving bicycles: configurations and injuries. Int. J. Inj. Control. Saf. Promot. 2017, 24, 534–543. [CrossRef] [PubMed]
  • Waseem et al. (2019) Waseem, M.; Ahmed, A.; Saeed, T.U. Factors affecting motorcyclists’ injury severities: An empirical assessment using random parameters logit model with heterogeneity in means and variances. Accid. Anal. Prev. 2019, 123, 12–19. [CrossRef] [PubMed]
  • Elimalech and Stein (2020) Elimalech, Y.; Stein, G. Safety System for a Vehicle to Detect and Warn of a Potential Collision. U.S. Patent 10,699,138, 30 June 2020.
  • Lee et al. (2018) Lee, Y.; Ansari, I.; Shim, J. Rear-approaching vehicle detection using frame similarity base on faster R-CNN. Int. J. Eng. Technol. 2018, 7, 177–180. [CrossRef]
  • Ra et al. (2018) Ra, M.; Jung, H.G.; Suhr, J.K.; Kim, W.Y. Part-based vehicle detection in side-rectilinear images for blind-spot detection. Expert Syst. Appl. 2018, 101, 116–128. [CrossRef]
  • Zhao et al. (2019) Zhao, Y.; Bai, L.; Lyu, Y.; Huang, X. Camera-based blind spot detection with a general purpose lightweight neural network. Electronics 2019, 8, 233. [CrossRef]
  • Abraham et al. (2018) Abraham, S.; Luciya Joji, T.; Yuvaraj, D. Enhancing vehicle safety with drowsiness detection and collision avoidance. Int. J. Pure Appl. Math. 2018, 120, 2295–2310.
  • Shameen et al. (2018) Shameen, Z.; Yusoff, M.Z.; Saad, M.N.M.; Malik, A.S.; Muzammel, M. Electroencephalography (EEG) based drowsiness detection for drivers: A review. ARPN J. Eng. Appl. Sci 2018, 13, 1458–1464.
  • McNeil et al. (2002) McNeil, S.; Duggins, D.; Mertz, C.; Suppe, A.; Thorpe, C. A performance specification for transit bus side collision warning system. In Proceedings of the ITS2002, 9th World Congress on Intelligent Transport Systems, Chicago, IL, USA, 14–17 October 2002.
  • Pecheux et al. (2016) Pecheux, K.K.; Strathman, J.; Kennedy, J.F. Test and Evaluation of Systems to Warn Pedestrians of Turning Buses. Transp. Res. Rec. 2016, 2539, 159–166. [CrossRef]
  • Wei et al. (2014) Wei, C.; Becic, E.; Edwards, C.; Graving, J.; Manser, M. Task analysis of transit bus drivers’ left-turn maneuver: Potential countermeasures for the reduction of collisions with pedestrians. Saf. Sci. 2014, 68, 81–88. [CrossRef]
  • Prati et al. (2018) Prati, G.; Marín Puchades, V.; De Angelis, M.; Fraboni, F.; Pietrantoni, L. Factors contributing to bicycle–motorised vehicle collisions: a systematic literature review. Transp. Rev. 2018, 38, 184–208. [CrossRef]
  • Silla et al. (2017) Silla, A.; Leden, L.; Rämä, P.; Scholliers, J.; Van Noort, M.; Bell, D. Can cyclist safety be improved with intelligent transport systems? Accid. Anal. Prev. 2017, 105, 134–145. [CrossRef]
  • Frampton and Millington (2022) Frampton, R.J.; Millington, J.E. Vulnerable Road User Protection from Heavy Goods Vehicles Using Direct and Indirect Vision Aids. Sustainability 2022, 14, 3317. [CrossRef]
  • Girbes et al. (2016) Girbes, V.; Armesto, L.; Dols, J.; Tornero, J. Haptic feedback to assist bus drivers for pedestrian safety at low speed. IEEE Trans. Haptics 2016, 9, 345–357. [CrossRef]
  • Girbés et al. (2016) Girbés, V.; Armesto, L.; Dols, J.; Tornero, J. An active safety system for low-speed bus braking assistance. IEEE Trans. Intell. Transp. Syst. 2016, 18, 377–387. [CrossRef]
  • Zhang et al. (2000) Zhang, W.B.; DeLeon, R.; Burton, F.; McLoed, B.; Chan, C.; Wang, X.; Johnson, S.; Empey, D. Develop Performance Specifications for Frontal Collision Warning System for Transit buses. In Proceedings of the 7th World Congress On Intelligent Systems, Turin, Italy, 6–9 November 2000.
  • Wisultschew et al. (2021) Wisultschew, C.; Mujica, G.; Lanza-Gutierrez, J.M.; Portilla, J. 3D-LIDAR based object detection and tracking on the edge of IoT for railway level crossing. IEEE Access 2021, 9, 35718–35729. [CrossRef]
  • Muzammel et al. (2017) Muzammel, M.; Yusoff, M.Z.; Malik, A.S.; Saad, M.N.M.; Meriaudeau, F. Motorcyclists safety system to avoid rear end collisions based on acoustic signatures. In Proceedings of the Thirteenth International Conference on Quality Control by Artificial Vision 2017, Tokyo, Japan, 14–16 May 2017; Volume 10338, pp. 287–294.
  • Goodall et al. (2022) Goodall, N.; Ohlms, P.B. Evaluation of a Transit Bus Collision Avoidance Warning System in Virginia; Virginia Transportation Research Council (VTRC): Charlottesville, VA, USA, 2022.
  • Tseng et al. (2014) Tseng, D.C.; Hsu, C.T.; Chen, W.S. Blind-spot vehicle detection using motion and static features. Int. J. Mach. Learn. Comput. 2014, 4, 516. [CrossRef]
  • Wu et al. (2013) Wu, B.F.; Huang, H.Y.; Chen, C.J.; Chen, Y.H.; Chang, C.W.; Chen, Y.L. A vision-based blind spot warning system for daytime and nighttime driver assistance. Comput. Electr. Eng. 2013, 39, 846–862. [CrossRef]
  • Singh et al. (2014) Singh, S.; Meng, R.; Nelakuditi, S.; Tong, Y.; Wang, S. SideEye: Mobile assistant for blind spot monitoring. In Proceedings of the 2014 international conference on computing, networking and communications (ICNC), Honolulu, HI, USA, 3–6 February 2014; pp. 408–412.
  • Dooley et al. (2015) Dooley, D.; McGinley, B.; Hughes, C.; Kilmartin, L.; Jones, E.; Glavin, M. A blind-zone detection method using a rear-mounted fisheye camera with combination of vehicle detection methods. IEEE Trans. Intell. Transp. Syst. 2015, 17, 264–278. [CrossRef]
  • Ren et al. (2017) Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149. [CrossRef]
  • Redmon et al. (2016) Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788.

  • Redmon and Farhadi (2017) Redmon, J.; Farhadi, A. YOLO9000: Better, faster, stronger. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 7263–7271.
  • Lin et al. (2017) Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2980–2988.
  • Huang et al. (2017) Huang, J.; Rathod, V.; Sun, C.; Zhu, M.; Korattikara, A.; Fathi, A.; Fischer, I.; Wojna, Z.; Song, Y.; Guadarrama, S.; et al. Speed/accuracy trade-offs for modern convolutional object detectors. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 7310–7311.
  • Du et al. (2020) Du, L.; Zhang, R.; Wang, X. Overview of two-stage object detection algorithms. J. Phys. Conf. Ser. 2020, 1544, 012033. [CrossRef]
  • Theckedath and Sedamkar (2020) Theckedath, D.; Sedamkar, R. Detecting affect states using VGG16, ResNet50 and SE-ResNet50 networks. SN Comput. Sci. 2020, 1, 1–7. [CrossRef]
  • He et al. (2016) He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
  • Yang et al. (2017) Yang, L.; Jiang, D.; Xia, X.; Pei, E.; Oveneke, M.C.; Sahli, H. Multimodal measurement of depression using deep learning models. In Proceedings of the 7th Annual Workshop on Audio/Visual Emotion Challenge, Mountain View, CA, USA, 23 October 2017; pp. 53–59.
  • Muzammel et al. (2021) Muzammel, M.; Salam, H.; Othmani, A. End-to-end multimodal clinical depression recognition using deep neural networks: A comparative analysis. Comput. Methods Programs Biomed. 2021, 211, 106433. [CrossRef]
  • Mendels et al. (2017) Mendels, G.; Levitan, S.I.; Lee, K.Z.; Hirschberg, J. Hybrid Acoustic-Lexical Deep Learning Approach for Deception Detection. In Proceedings of the Interspeech, Stockholm, Sweden, 20–24 August 2017; pp. 1472–1476.
  • Krizhevsky et al. (2017) Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. Commun. ACM 2017, 60, 84–90. [CrossRef]
  • Karpathy et al. (2014) Karpathy, A.; Toderici, G.; Shetty, S.; Leung, T.; Sukthankar, R.; Fei-Fei, L. Large-scale video classification with convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 1725–1732.
  • Guo et al. (2014) Guo, X.; Singh, S.; Lee, H.; Lewis, R.L.; Wang, X. Deep learning for real-time Atari game play using offline Monte-Carlo tree search planning. Adv. Neural Inf. Process. Syst. 2014, 27, 3338–3346.
  • Cui et al. (2014) Cui, Z.; Chang, H.; Shan, S.; Zhong, B.; Chen, X. Deep network cascade for image super-resolution. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; pp. 49–64.
  • Han et al. (2018) Han, Y.; Jiang, T.; Ma, Y.; Xu, C. Pretraining convolutional neural networks for image-based vehicle classification. Adv. Multimed. 2018, 2018, 3138278. [CrossRef]
  • Sermanet et al. (2013) Sermanet, P.; Eigen, D.; Zhang, X.; Mathieu, M.; Fergus, R.; LeCun, Y. Overfeat: Integrated recognition, localization and detection using convolutional networks. arXiv 2013, arXiv:1312.6229.
  • Girshick et al. (2014) Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 580–587.
  • Girshick (2015) Girshick, R. Fast r-cnn. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1440–1448.
  • Liu et al. (2016) Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. Ssd: Single shot multibox detector. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; pp. 21–37.
  • Dai et al. (2016) Dai, J.; Li, Y.; He, K.; Sun, J. R-FCN: Object Detection via Region-Based Fully Convolutional Networks. arXiv 2016, arXiv:1605.06409.
  • Chu et al. (2017) Chu, W.; Liu, Y.; Shen, C.; Cai, D.; Hua, X.S. Multi-task vehicle detection with region-of-interest voting. IEEE Trans. Image Process. 2017, 27, 432–441. [CrossRef] [PubMed]
  • Redmon and Farhadi (2018) Redmon, J.; Farhadi, A. Yolov3: An incremental improvement. arXiv 2018, arXiv:1804.02767.
  • Bochkovskiy et al. (2020) Bochkovskiy, A.; Wang, C.Y.; Liao, H.Y.M. Yolov4: Optimal speed and accuracy of object detection. arXiv 2020, arXiv:2004.10934.
  • Jocher et al. (2020) Jocher, G.; Nishimura, K.; Mineeva, T.; Vilariño, R. yolov5. Code Repository. 2020. Available online: (accessed on 28 June 2022).
  • Lin et al. (2017) Lin, T.Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2117–2125.
  • Tan et al. (2020) Tan, M.; Pang, R.; Le, Q.V. Efficientdet: Scalable and efficient object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 10781–10790.
  • Xu et al. (2022) Xu, X.; Zhao, M.; Shi, P.; Ren, R.; He, X.; Wei, X.; Yang, H. Crack Detection and Comparison Study Based on Faster R-CNN and Mask R-CNN. Sensors 2022, 22, 1215. [CrossRef] [PubMed]
  • Sivaraman and Trivedi (2010) Sivaraman, S.; Trivedi, M.M. A general active-learning framework for on-road vehicle recognition and tracking. IEEE Trans. Intell. Transp. Syst. 2010, 11, 267–276. [CrossRef]
  • Muzammel et al. (2017) Muzammel, M.; Yusoff, M.Z.; Meriaudeau, F. Rear-end vision-based collision detection system for motorcyclists. J. Electron. Imaging 2017, 26, 1–14. [CrossRef]
  • Roychowdhury and Muppirisetty (2018) Roychowdhury, S.; Muppirisetty, L.S. Fast proposals for image and video annotation using modified echo state networks. In Proceedings of the 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA), Orlando, FL, USA, 17–20 December 2018; pp. 1225–1230.
  • Satzoda and Trivedi (2015) Satzoda, R.K.; Trivedi, M.M. Multipart vehicle detection using symmetry-derived analysis and active learning. IEEE Trans. Intell. Transp. Syst. 2015, 17, 926–937. [CrossRef]
  • Muzammel et al. (2018) Muzammel, M.; Yusoff, M.Z.; Meriaudeau, F. Event-related potential responses of motorcyclists towards rear end collision warning system. IEEE Access 2018, 6, 31609–31620. [CrossRef]
  • Fort et al. (2013) Fort, A.; Collette, B.; Bueno, M.; Deleurence, P.; Bonnard, A. Impact of totally and partially predictive alert in distracted and undistracted subjects: An event related potential study. Accid. Anal. Prev. 2013, 50, 578–586. [CrossRef]
  • Bueno et al. (2012) Bueno, M.; Fabrigoule, C.; Deleurence, P.; Ndiaye, D.; Fort, A. An electrophysiological study of the impact of a Forward Collision Warning System in a simulator driving task. Brain Res. 2012, 1470, 69–79. [CrossRef] [PubMed]