Using a motorcycle helmet can decrease the probability of fatal injuries to motorcycle riders in road traffic crashes by 42% (Liu et al., 2004), which is why governments worldwide have enacted laws that make helmet use mandatory. Despite this, compliance with motorcycle helmet laws is often low, especially in developing countries (Bachani et al., 2013, 2017; Siebert et al., 2019). To conduct targeted helmet use campaigns efficiently, it is essential for governments to collect detailed data on the level of compliance with helmet laws. However, 40% of countries in the world do not have an estimate of this crucial road safety metric (World Health Organization, 2015). And even where data is available, helmet use observations are frequently limited in sample size and regional scope (Fong et al., 2015; Ledesma et al., 2015), draw from data of relatively short time frames (Karuppanagounder and Vijayan, 2016; Xuequn et al., 2011), or are collected only once in the scope of academic research (Siebert et al., 2019; Oxley et al., 2018). The main reason for this lack of comprehensive continuous data lies in the prevailing method of helmet use data collection: direct observation of motorcycle helmet use in traffic by human observers. Direct observation during road-side surveys is resource intensive, as it is highly time-consuming and costly (Eby, 2011). And while the use of video cameras allows indirect observation, alleviating the time pressure of direct observation, the classification of helmet use by human observers limits the amount of data that can be processed.
In light of this, there is an increasing demand for a reliable and time-efficient intelligent system for detecting helmet use of motorcycle riders that does not rely on a human observer. A promising method for achieving this automated detection of motorcycle helmet use is machine learning. Machine learning has been applied to a number of road safety related detection tasks and has achieved high accuracy for the general detection of pedestrians, bicyclists, motorcyclists, and cars (Dalal and Triggs, 2005). While first implementations of automated motorcycle helmet use detection have been promising, they have not been developed to their full potential. Current approaches rely on data sets that are limited in the overall number of riders observed, are trained on a small number of observation sites, or do not detect the rider position on the motorcycle (Dahiya et al., 2016; Vishnu et al., 2017). In this paper, a deep learning based automated helmet use detection is proposed that relies on a comprehensive dataset with large variance in the number of riders observed, drawing from multiple observation sites at varying times of day.
Recent successful deep learning based applications of computer vision, e.g. in image classification (He et al., 2016; Simonyan and Zisserman, 2014; Szegedy et al., 2016), object detection (Lin et al., 2018; He et al., 2017), and activity recognition (Pigou et al., 2018; Donahue et al., 2015), have relied heavily on large-scale datasets, similar to the one used in this study. Hence, the next section of this paper focuses on the generation of the underlying dataset and its annotation, to facilitate potential data collection in other countries with a similar methodology. This is followed by a section on algorithm training. In the subsequent sections, the algorithm's performance is analyzed through comparison with an annotated test data set and with results from an earlier observational study on helmet use in Myanmar conducted by human observers (Siebert et al., 2019).
2 Dataset creation and annotation
2.1 Data collection and preprocessing
Myanmar was chosen as the basis for the collection of the source material for the development of the algorithm, since its road user composition and rapid motorization are highly representative of developing countries (World Health Organization, 2017) and video recordings of traffic were available from an earlier study (Siebert et al., 2019). Motorcyclists comprise more than 80% of road users in Myanmar (World Health Organization, 2015), and the number of motorcycles has increased rapidly in recent years (Wegman et al., 2017).
Throughout Myanmar, traffic was filmed with two video cameras with a resolution of pixels and a frame rate of 10 frames per second. Within seven cities, cameras were placed at multiple observation sites at approximately 2.5 m height, and traffic was recorded for two consecutive days from approximately 6 am to 6:30 pm (Table 1). As the city of Mandalay has the highest number of motorcyclists in Myanmar, the two cameras were installed there for 7 days. Yangon, the largest city of Myanmar, has an active ban on motorcycles in the city center; hence, one camera was placed in the suburbs. Due to technical problems with the cameras and problems in reaching the selected observation sites, the number of hours recorded was not the same for each observation site. After the removal of videos blurred due to cloudy weather or rain, 254 hours of video data were available as the source material for this study. The video data was divided into 10 second video clips (100 frames each), which formed the basis for training, validating, and testing the algorithm in this study. The duration of video data available at each observation site is shown in Table 1. The observation sites represent a highly diverse data set, including multilane high traffic density road environments (e.g. Mandalay) as well as more rural environments (e.g. Pathein). Still frames of the observation sites are presented in Figure 1.
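The division of footage into fixed-length clips can be sketched as follows (a minimal illustration; the function name and interface are ours, not the study's code):

```python
def clip_boundaries(total_frames, clip_len=100):
    """Return (start, end) frame indices of non-overlapping clips.

    Mirrors the segmentation of footage into 10-second clips at
    10 fps (100 frames each); trailing frames that do not fill a
    complete clip are dropped.
    """
    return [(i * clip_len, (i + 1) * clip_len)
            for i in range(total_frames // clip_len)]

# One hour of footage at 10 fps is 36,000 frames, i.e. 360 clips.
print(len(clip_boundaries(36000)))  # 360
```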
2.2 Sampling video clips
Since there were insufficient resources to annotate all 254 hours of recorded traffic, 1,000 video clips were sampled which were most representative of the source material. After segmenting the source material into non-overlapping video clips of 10 seconds length (100 frames), we applied the object detection algorithm YOLO9000 (Redmon and Farhadi, 2017) with pre-trained weights to detect the number of motorcycles in each frame, extracting those clips with the highest number of motorcycles in them. Multiple clips were sampled from each observation site, in proportion to the available video data from each site. The resulting distribution of the 1,000 sampled video clips is presented in Table 1. The observation site Pathein_urban (47 video clips) was retroactively excluded from analysis due to heavy fogging on the camera which was not detected during the initial screening of the video data (Section 2.1). In addition, 43 video clips were excluded since they did not contain active motorcycles, as the YOLO9000 algorithm (Redmon and Farhadi, 2017) had identified parked motorcycles.
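The proportional selection of the densest clips per site can be sketched as follows (an illustrative sketch with hypothetical data structures; per-clip motorcycle counts are assumed to come from the YOLO9000 pass described above):

```python
def sample_top_clips(clip_counts, site_hours, n_total=1000):
    """Pick the clips with the most detected motorcycles per site,
    with each site's quota proportional to its share of usable footage.

    clip_counts: {site: [(clip_id, peak_motorcycle_count), ...]}
    site_hours:  {site: hours of usable video at that site}
    """
    total_hours = sum(site_hours.values())
    selected = {}
    for site, clips in clip_counts.items():
        quota = round(n_total * site_hours[site] / total_hours)
        ranked = sorted(clips, key=lambda c: c[1], reverse=True)
        selected[site] = [clip_id for clip_id, _ in ranked[:quota]]
    return selected
```

A site contributing three quarters of the footage would thus receive roughly three quarters of the 1,000-clip budget.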
2.3 Data annotation
Video data was annotated by first drawing a rectangular box (a so-called bounding box) around an individual motorcycle and its riders, before entering information on the number of riders, their helmet use, and their position. All bounding boxes containing an individual motorcycle across a number of frames form the motorcycle track, i.e. an individual motorcycle will appear in multiple frames but will have only one motorcycle track. To facilitate the annotation of the videos, we tested and compared three image and video annotation tools: BeaverDam (Shen, 2016), LabelMe (Russell et al., 2008), and VATIC (Vondrick et al., 2013). We chose BeaverDam for data annotation, since it allows frame-by-frame labeling in videos, is easy to install, and has superior usability. Annotation was conducted by two freelance workers. An example of the annotation of an individual motorcycle through multiple frames (motorcycle track) is presented in Fig. 2.
For each bounding box, workers encoded the number of riders (1 to 5), their helmet use (yes/no) and position (Driver (D), Passenger (P0-3); Fig. 3). Examples of rider encoding are displayed in Fig. 3.
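Combining the per-position helmet information into a single class label could be sketched as follows (the label strings here are illustrative, not necessarily the study's internal class names):

```python
def encode_class(riders):
    """Build a class label from (position, wears_helmet) pairs.

    Positions follow the scheme above (driver D, passengers P0-P3);
    e.g. [("D", True), ("P1", False)] -> "DHelmetP1NoHelmet".
    """
    return "".join(pos + ("Helmet" if helmet else "NoHelmet")
                   for pos, helmet in riders)

# A helmeted driver with an unhelmeted passenger behind them:
print(encode_class([("D", True), ("P1", False)]))  # DHelmetP1NoHelmet
```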
Table 2. Video clips per observation site: Site ID | Training set | Validation set | Test set | Overall.
2.4 Composition of annotated data
The 910 annotated video clips were randomly divided into three non-overlapping subsets: a training set (70%), a validation set (10%), and a test set (20%) (Table 2). Data on the number of annotated motorcycles in all 910 video clips can be found in Table 3. Overall, 10,180 motorcycle tracks (i.e. individual motorcycles) were annotated. As each individual motorcycle appears in multiple frames, there are 339,784 annotated motorcycles at the frame level, i.e. 339,784 bounding boxes containing motorcycles in the dataset. All motorcycles were encoded into classes depending on the position and helmet use of the riders. This resulted in 36 classes, shown in Table 3. The number of motorcycles per class was imbalanced, ranging from only 12 observed frames (e.g., for motorcycles with 5 riders, none wearing a helmet) to 140,886 observed frames (one driver wearing a helmet). Some classes were not observed in the annotated video clips; e.g., there was no motorcycle with 4 riders all wearing helmets.
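The random 70/10/20 split can be sketched as follows (a minimal sketch; the seed and function name are ours, and integer arithmetic is used so the subset sizes are exact):

```python
import random

def split_clips(clip_ids, seed=42):
    """Randomly split annotated clips into non-overlapping training
    (70%), validation (10%), and test (20%) subsets."""
    rng = random.Random(seed)  # fixed seed, so the split is reproducible
    ids = list(clip_ids)
    rng.shuffle(ids)
    n_train = len(ids) * 7 // 10
    n_val = len(ids) // 10
    return (ids[:n_train],
            ids[n_train:n_train + n_val],
            ids[n_train + n_val:])

train, val, test = split_clips(range(910))
print(len(train), len(val), len(test))  # 637 91 182
```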
3 Helmet use detection algorithm
After the creation of the dataset, we applied a state-of-the-art object detection algorithm to the annotated data to facilitate motorcycle helmet use detection on a frame level. In this process, data from the training set is used to train the object detection algorithm. During training, the validation set is used to find the best generalizing model, before the algorithm's accuracy in predicting helmet use is tested on data that the algorithm has not seen before, the so-called test set. The composition of the three sets is presented in Table 2. Generally, state-of-the-art object detection algorithms can be divided into two types: two-stage and single-stage approaches. Two-stage approaches first identify a number of potential locations within an image where objects could be located. In a second step, an object classifier (using a convolutional neural network) is used to identify objects at these locations. While two-stage approaches such as Fast R-CNN (Girshick, 2015) achieve a higher accuracy than single-stage approaches, they are very time-consuming. In contrast, single-stage approaches conduct object location and object identification simultaneously. Single-stage approaches like YOLO (Redmon and Farhadi, 2017) and RetinaNet (Lin et al., 2018) are therefore much faster than two-stage approaches, although there is a small trade-off in accuracy. In this paper, we use RetinaNet (Lin et al., 2018) for our helmet use detection task. While it is a single-stage approach, it uses a multi-scale feature pyramid and focal loss to address the general accuracy limitation of one-stage detectors. Figure 4 illustrates the framework of RetinaNet.
Since the task of detecting motorcycle riders' helmet use is a classic object detection task, we fine-tuned RetinaNet instead of training it from scratch. That is, we use a RetinaNet model (https://github.com/fizyr/keras-retinanet) that is already trained for general object detection and fine-tune it to specifically detect motorcycles, riders, and helmets.
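For fine-tuning, the fizyr/keras-retinanet training script accepts annotations in a simple CSV format (one bounding box per row: image path, x1, y1, x2, y2, class name, plus a class-to-id mapping file). Exporting annotations to that format could be sketched as follows (the file names and annotation structure are our assumptions):

```python
import csv

def write_retinanet_csvs(annotations, class_names, ann_path, cls_path):
    """Export annotations to the CSV format used by the
    fizyr/keras-retinanet training script.

    annotations: [(image_path, x1, y1, x2, y2, class_name), ...]
    class_names: list of class labels; ids are assigned by position.
    """
    with open(ann_path, "w", newline="") as f:
        csv.writer(f).writerows(annotations)
    with open(cls_path, "w", newline="") as f:
        csv.writer(f).writerows((name, i) for i, name in enumerate(class_names))
```

The resulting files can then be passed to the library's `retinanet-train` entry point, where backbone and training hyperparameters are configured.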
In our experiments, we used ResNet50 (He et al., 2016) as the backbone net, initialized with pre-trained weights from ImageNet (Deng et al., 2009). The backbone net provides the specific architecture for the convolutional neural network. In the learning process, we used the Adam optimizer (Kingma and Ba, 2014) with a learning rate of and a batch size of 4 and stopped training when the weighted mean Average Precision (weighted mAP, explained in the following) on the validation set stopped improving, with a patience of 2. To assess the accuracy of our algorithm, we use the Average Precision (AP) value (Everingham et al., 2010). The AP integrates multiple variables to produce a measure of the accuracy of an algorithm in an object detection task, including intersection over union, precision, and recall. The intersection over union describes the positional relation between algorithm generated and human annotated bounding boxes: algorithm generated bounding boxes need to overlap with human annotated bounding boxes by at least 50%, otherwise they are registered as an incorrect detection. Precision is the proportion of correct detections among all detections made by the algorithm (precision = TP / (TP + FP), with TP the number of true positives and FP the number of false positives). Recall measures how many of the available correct instances were detected by the algorithm (recall = TP / (TP + FN), with FN the number of false negatives). For a more in-depth explanation of AP, please see Everingham et al. (2010) and Salton and McGill (1983). Since the number of frames per class was very imbalanced in our dataset (Table 3), the final performance over all classes is computed as a weighted average of the per-class AP, defined as:
weighted mAP = Σ_c w_c · AP_c,

where the weights w_c across all classes sum to one and are set proportional to the number of instances in class c. Fig. 5 shows the training loss, validation loss, and weighted mAP
in the training and validation sets during the learning process. It can be observed that the training loss constantly decreases, i.e. the prediction error gets smaller as the deep model learns useful knowledge for helmet use detection from the training set. Consequently, the weighted mAP on the training set constantly increases. At the same time, the validation loss, i.e. the prediction error on the validation set, decreases over the first 9 epochs. Correspondingly, the mAP on the validation set increases over the first few epochs before it stops improving after 9 epochs, which means the algorithm starts to overfit on the training set. Therefore, we stopped training and selected the optimal model after 9 epochs, obtaining 72.8% weighted mAP on the validation set.
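The evaluation quantities described above can be sketched in plain Python (an illustration only; per-class AP values are assumed to be computed elsewhere by the standard AP procedure):

```python
def iou(a, b):
    """Intersection over union of two boxes (x1, y1, x2, y2); a detection
    counts as correct only if its IoU with an annotated box is >= 0.5."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def weighted_map(ap_per_class, frames_per_class):
    """Weighted mAP: per-class AP weighted by its share of instances,
    so the weights sum to one."""
    total = sum(frames_per_class.values())
    return sum(ap * frames_per_class[c] / total
               for c, ap in ap_per_class.items())

print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # one-third overlap
print(weighted_map({"a": 0.8, "b": 0.6}, {"a": 3, "b": 1}))  # 0.75
```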
In the following, we report the helmet use detection results of the algorithm on the test set, using the optimal model developed on the validation set (where it obtained 72.8% weighted mAP).
We achieved 72.3% weighted mAP on the test set, with a processing speed of 14 frames/second. The AP for each class is shown in Table 3. It can be observed that RetinaNet worked well on common classes but not on under-represented classes, due to the small number of training instances. Considering only common classes (up to two riders), our trained RetinaNet achieved 76.4% weighted mAP. This is a very good performance considering challenging factors such as occlusion, camera angle, and the diversity of observation sites. Detection results on some sample frames are displayed in Fig. 6. Due to the imbalanced classes, there are some missed detections, e.g., in Fig. 6 (a), (g) and (h). Example videos consisting of algorithm annotated frames of the test set can be found in the supplementary material.
Table 3. Annotated motorcycles and per-class helmet use detection results (Position | Motorcycle | Frame level | Helmet use detection). Overall: 10,180 motorcycle tracks; 240,013 / 35,182 / 64,589 annotated frames in the training / validation / test sets (339,784 in total); weighted mAP: 72.3%. Legend: a check mark indicates that the rider in the corresponding position wears a helmet, a cross indicates that the rider in the corresponding position does not wear a helmet, and a dash indicates that there is no rider in the corresponding position.
4 Comparison to human observation in real world application
Since the video data that forms the basis for the training of the machine learning algorithm in this paper has previously been analyzed to assess motorcycle helmet use, there is a unique opportunity to compare hand-counted helmet use numbers in the video data with the helmet use numbers generated by the algorithm developed in this paper. Siebert et al. (2019) hand-counted the motorcyclists with and without helmets in the source video data for the first 15 minutes of every hour that video was recorded. Hence, "hourly" helmet use percentages for every individual observation site in the data set are available. To assess the feasibility of our machine learning approach for real-world observation studies, we compare hourly hand-counted helmet use rates from the Siebert et al. study with hourly computer-counted rates estimated through the application of our algorithm.
It is important to understand the fundamental difference between the hand-counting method used by Siebert et al. (2019) and the frame-based algorithmic approach presented in this paper. In the initial observation of the video data by Siebert et al., a human observer screened 15 minute video sections and registered the number of helmet- and non-helmet-wearing riders for individual motorcycles. I.e. helmet use on a motorcycle was only registered once, even though an individual motorcycle was present in multiple frames while driving through the field of view of the camera. The occlusion of an individual motorcycle in some of these frames, e.g. when a motorcycle passed a bus located between the motorcycle and the camera, does not pose a problem for the detection of helmet use on that motorcycle, as the human observer can jump back and forth in the video and register helmet use in a frame with a clear, unoccluded view. Furthermore, a human observer naturally tracks a motorcycle and can easily identify the frame in which the riders and their helmet use are most clearly visible, e.g. when the motorcycle is closest to the camera. The human observer can then use this frame to arrive at a conclusion on the number of riders and their helmet use.
In contrast, the computer vision approach developed in this paper registers motorcycle riders' helmet use in each frame in which a motorcycle is detected. This can introduce some error variance into helmet use detection. The speed of a motorcycle influences how many times helmet use for an individual motorcycle is registered, as slower motorcycles appear in more frames than faster ones. Furthermore, occlusion influences how many times a motorcycle is registered, which can influence the calculated overall helmet use average. Also, helmet use is registered for motorcycles at a sub-optimal angle to the camera; e.g. on motorcycles driving directly towards the camera, drivers can occlude passengers behind them. However, not all of these differences have a direct impact on the helmet use rate calculated by the algorithm. We assume that occlusion does not introduce a directed bias to detected helmet use rates, as riders with and without helmets have the same chance of being occluded by other traffic. The same can be assumed for differences in motorcycle speed within the observed cities, as riders with helmets will not be systematically faster or slower than those without. We therefore assume that frame based helmet use registration leads to results comparable to helmet use registered by a human observer.
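The frame-based rate calculation described above can be sketched as follows (a minimal sketch; the detection tuples are a hypothetical representation of the algorithm's per-frame output):

```python
def frame_level_helmet_rate(frame_detections):
    """Helmet use rate over frame-level detections.

    Each detected rider in each frame counts once, so a slow motorcycle
    (visible in many frames) contributes more observations than a fast
    one -- the property discussed above.

    frame_detections: iterable of (helmeted_riders, total_riders)
    tuples, one per detected motorcycle per frame.
    """
    helmeted = sum(h for h, _ in frame_detections)
    riders = sum(n for _, n in frame_detections)
    return helmeted / riders if riders else 0.0

# A helmeted solo rider seen in two frames plus one frame of an
# unhelmeted pair: 2 helmeted out of 4 rider observations.
print(frame_level_helmet_rate([(1, 1), (1, 1), (0, 2)]))  # 0.5
```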
Since the algorithm has been trained on the specific observation sites, it can be considered observation site trained. I.e. when the algorithm is applied to the observation site Bago_rural, there is data from this specific observation site in the training set (Table 2). In a real-world application of the deep learning based approach, this might not be the case, as the algorithm will not have been trained on new observation sites. Hence, in the following, we also compare algorithmic accuracy for an observation site untrained algorithm. For this, we exclude all training data from the observation site under analysis before training the algorithm, simulating the application of the algorithm to a new observation site. In the following, trained algorithm refers to the algorithm with training data from the observation site being analyzed, while untrained algorithm refers to an algorithm that was not trained on data from that observation site.
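The "observation site untrained" condition amounts to a leave-one-site-out filter on the training data, which can be sketched as (data structure is our assumption):

```python
def exclude_site(training_clips, held_out_site):
    """Simulate applying the detector to an unseen location by dropping
    all training clips recorded at the site that will be analyzed.

    training_clips: [(site_id, clip_id), ...]
    """
    return [(site, clip) for site, clip in training_clips
            if site != held_out_site]

clips = [("Bago_rural", 1), ("Mandalay_1", 2), ("Bago_rural", 3)]
print(exclude_site(clips, "Bago_rural"))  # [('Mandalay_1', 2)]
```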
Data on hourly helmet use rates for one randomly chosen day of video data from each observation site is presented in Fig. 7. Helmet use was registered either by a human observer (Siebert et al., 2019), by the trained algorithm, or by the untrained algorithm. It can be observed that hourly helmet use percentages are relatively similar when comparing human registered rates with those of the trained algorithm. The trained algorithm registers accurate helmet use rates even when large hourly fluctuations in helmet use are present, e.g. at the Mandalay_1 observation site (Figure 7(b)). However, some of the observed 15 minute videos show a large discrepancy between helmet use rates registered by a human and by the trained algorithm. While it is not possible to conduct a detailed error analysis here (as we did in Section 3.3), it is possible to evaluate the video data for broad factors that could increase the discrepancy between human registered data and data registered by the trained algorithm.
As an example, at the Bago_rural observation site at 9 am, the trained algorithm registered a much higher helmet use rate than the human observer (Figure 7(a)). A look at the video data from this time frame reveals heavy rain at the observation site (Fig. 8). Apart from increased blurriness of frames due to decreased lighting and visible fogging on the inside of the camera case, motorcycle riders can be observed using umbrellas to protect themselves against the rain. It can be assumed that motorcycle riders without helmets are more likely to use an umbrella, as they are not protected from the rain by a helmet. This could explain the higher helmet use registered by the trained algorithm at this observation site and time, as non-helmeted riders are less likely to be detected due to umbrellas.
Another instance of a large discrepancy between human and computer registered helmet use rates through the trained algorithm can be observed at 6 am at the Pathein_rural observation site (Figure 7(f)). A look at the video data reveals bad lighting conditions due to a combination of clouded weather and the early observation time. This results in unclear, motion-blurred motorcycles, owing to their driving speed in combination with the bad lighting (Fig. 9).
Despite singular discrepancies between hourly helmet use rates coded by a human and by the trained algorithm, the overall accuracy of average helmet use rates calculated by the trained algorithm per observation site is high. A comparison of average helmet use per observation site, registered by a human observer and by the trained algorithm, is presented in Figure 10. For three observation sites (Naypitaw, Nyaung-U, and Pakokku), helmet use rates registered by the trained algorithm deviate by a maximum of 1% from human registered rates. For the other four observation sites (Bago, Mandalay, Pathein, and Yangon), rates registered by the trained algorithm are still reasonably accurate, varying between -4.4% and +2.07% from human registered rates.
For the untrained algorithm, it can be observed that the registered hourly helmet use data is less accurate than that of the trained algorithm, while it is still relatively close to the human registered data for most observation sites (Figure 7). The effects of decreased visibility at the Bago_rural and Pathein_rural sites are also present. However, at the Yangon_II observation site, the registered helmet use of the untrained algorithm is notably higher than helmet use registered by the trained algorithm and the human observer, registering more than double the helmet use present at the observation site. A comparison of the frame-level helmet use detection at the Yangon_II site between the trained and untrained algorithm revealed a large number of missed detections of the untrained algorithm (Figure 11). Excluding Yangon_II, the helmet use rates registered through the untrained algorithm vary between -8.13% and +9.43% from human registered helmet use (Figure 10).
5 Discussion
In this paper, we set out to develop a deep learning based approach to detect motorcycle helmet use. Using a large number of video frames, we trained an algorithm to detect active motorcycles, the number and position of riders, and their helmet use. The use of an annotated test data set allowed us to evaluate the accuracy of our algorithm in detail (Section 3.3, Table 3). The algorithm had high accuracy for the general detection of motorcycles. Further, it was capable of accurately identifying the number of riders and their position on the motorcycle. The algorithm was less accurate, however, for motorcycles with a large number of riders or with an uncommon rider composition (Table 3). Based on these results, the present version of the algorithm can be expected to generate highly accurate results in countries where only two riders are allowed on a motorcycle and where riders' adherence to this law is high. Our implementation of the algorithm runs on consumer hardware at 14 frames per second, which is higher than the frame rate of the recorded video data. Hence, the algorithm can be implemented to produce real-time helmet use data at any given observation site.
Our comparison of algorithm accuracy with helmet use registered by a human observer (Section 4) revealed an overall high average accuracy when the algorithm had been trained on the specific observation sites (Figure 10). Without prior training on the specific observation site, the (untrained) algorithm had an overall lower accuracy in helmet use detection. There was a large deviation of registered helmet use at the Yangon_II observation site, where a large number of missed detections resulted in highly inaccurate detection performance. The lack of training data with a camera angle similar to the Yangon_II observation site is the most likely cause of this low detection accuracy. Potential ways to counteract this performance decrement are discussed in Section 6.
A comparison of hourly helmet use rates revealed a small number of discrepancies between human and algorithm registered rates (Fig. 7). Further analysis revealed a temporary decrease in the video source material quality as the reason for these discrepancies (Fig. 8 & 9). This decrease in detection accuracy has to be seen in light of the training of the algorithm, in which periods with motion blur due to bad lighting or bad weather were excluded. Hence, decrements in detection accuracy are not necessarily the result of differences in observation sites themselves.
6 Conclusion and future work
The lack of representative motorcycle helmet use data is a serious global concern for governments and road safety actors. Automated helmet use detection for motorcycle riders is a promising approach to efficiently collect large amounts of up-to-date data on this crucial measure. Once trained, the algorithm presented in this paper can be directly implemented in existing road traffic surveillance infrastructure to produce real-time helmet use data. Our evaluation of the algorithm confirms a high accuracy of helmet use data, which deviates only by a small margin from comparable data collected by human observers. Observation site specific training of the algorithm does not require extensive data annotation: the annotation of as little as 270 s of video data was enough to produce accurate results for e.g. the Yangon_II observation site. While the collection of data does not by itself increase road safety (Hyder, 2019), it is a prerequisite for targeted enforcement and education campaigns, which can lower the rate of injuries and fatalities (World Health Organization, 2006).
For future work, we propose three ways in which the software-side performance of machine learning based motorcycle helmet use detection can be improved. First, more data needs to be collected for under-represented classes (Table 3) to increase rider, position, and helmet detection accuracy for motorcycles with more than two riders. Second, more diverse video data should be collected with regard to camera angle. This would prevent detection inaccuracies caused by missed detections in camera setups with unusual angles. Third, it appears promising to add a simple tracking method for motorcycles to the existing approach. Tracking would allow the identification of individual motorcycles across a number of subsequent frames. A frame based quality assessment of an individual motorcycle's frames, combined with tracking, would allow the algorithm to choose the most suitable frame for helmet use and rider position detection, improving overall detection accuracy. Tracking would further allow the algorithm to register the number of individual motorcycles passing an observation site, providing valuable information on traffic flow and density.
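In its simplest form, the tracking extension proposed above could be a greedy IoU matcher (a sketch under our own assumptions, not the study's implementation; a real deployment might instead use an established tracker such as SORT):

```python
def box_iou(a, b):
    """Intersection over union of two boxes (x1, y1, x2, y2)."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def greedy_iou_tracker(frames, iou_thresh=0.3):
    """Link per-frame detections into motorcycle tracks by greedy IoU
    matching: each track is extended by the unmatched detection that
    overlaps its previous box most; leftover detections start new tracks.

    frames: list of per-frame box lists. Returns tracks as lists of
    (frame_index, box) pairs.
    """
    tracks = []
    for t, boxes in enumerate(frames):
        unmatched = list(boxes)
        for track in tracks:
            last_t, last_box = track[-1]
            if last_t != t - 1 or not unmatched:
                continue  # only extend tracks seen in the previous frame
            best = max(unmatched, key=lambda b: box_iou(last_box, b))
            if box_iou(last_box, best) >= iou_thresh:
                track.append((t, best))
                unmatched.remove(best)
        tracks.extend([(t, box)] for box in unmatched)
    return tracks
```

Given such tracks, the per-frame quality assessment described above could select one representative frame per track for the final helmet use decision, and the number of tracks directly yields the motorcycle count per site.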
On the hardware side, future applications of the algorithm can greatly benefit from an improved camera system that is less influenced by low light conditions (Fig. 9) and less susceptible to fogging and blur due to rain on the camera lens (Fig. 8). An increase in the resolution of the video data could allow the detection of additional measures, such as helmet type or chin-strap use. Apart from generally increased performance through software and hardware changes, future applications of the developed method could incorporate a more comprehensive set of variables. Within the deep learning approach, the detection of e.g. age categories, chin-strap use, helmet type, or mobile phone use would be possible.
There are a number of limitations to this study. Algorithmic accuracy was only analyzed for road environments within Myanmar, limiting the types of motorcycles and helmets present in the training set. Future studies will need to assess whether the algorithm can maintain its overall high accuracy in the road environments of other countries. A similar limitation can be seen in the position of the observation camera. While the algorithm is able to detect motorcycles from a broad range of angles due to the diverse training data, there was no observation site where the camera was installed in an overhead position, filming traffic from above. Since traffic surveillance infrastructure is often installed in this position, future studies will need to assess whether the algorithm produces accurate results from an overhead angle. This is especially important in light of the results of the Yangon_II observation site, where an unusual camera angle led to a large number of missed detections. Furthermore, a more structured variation of the camera-to-lane angle would help to better understand the optimal positioning of observation equipment for maximum detection accuracy. While it was included in the data annotation process, the algorithmic accuracy in detecting the position of riders was not compared to human registered data in this study. In light of the large differences in motorcycle helmet use between rider positions (Siebert et al., 2019), future studies will need to incorporate a deeper analysis of position detection accuracy. For the comparison of human- and machine-registered helmet use rates, it appears promising to enable a detailed error analysis (false positive / false negative) through an adapted data structure for human helmet use registration.
In conclusion, we are confident that automated helmet use detection can solve the challenges of costly and time-consuming data collection by human observers. We believe that the algorithm can facilitate broad helmet use data collection and encourage its active use by actors in the road safety field.
This research was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Project-ID 251654672 – TRR 161 (Project A05).
- Trends in prevalence, knowledge, attitudes, and practices of helmet use in Cambodia: results from a two year study. Injury 44, pp. S31–S37. Cited by: §1.
- Helmet wearing in Kenya: prevalence, knowledge, attitude, practice and implications. Public Health 144, pp. S23–S31. Cited by: §1.
- Keras. https://keras.io. Cited by: §3.2.
- Automatic detection of bike-riders without helmet using surveillance videos in real-time. In International Joint Conference on Neural Networks (IJCNN), pp. 3046–3051. Cited by: §1.
- Histograms of oriented gradients for human detection. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Vol. 1, pp. 886–893. Cited by: §1.
- Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. (2009). ImageNet: a large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 248–255.
- Donahue, J., et al. (2015). Long-term recurrent convolutional networks for visual recognition and description. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2625–2634.
- Eby, D. W. (2011). Naturalistic observational field techniques for traffic psychology research. In Handbook of Traffic Psychology, pp. 61–72.
- Everingham, M., Van Gool, L., Williams, C. K. I., Winn, J., and Zisserman, A. (2010). The PASCAL Visual Object Classes (VOC) challenge. International Journal of Computer Vision 88 (2), pp. 303–338.
- Fong et al. (2015). Rates of motorcycle helmet use and reasons for non-use among adults and children in Luang Prabang, Lao People's Democratic Republic. BMC Public Health 15 (1), p. 970.
- Girshick, R. (2015). Fast R-CNN. In IEEE International Conference on Computer Vision (ICCV), pp. 1440–1448.
- He, K., Gkioxari, G., Dollár, P., and Girshick, R. (2017). Mask R-CNN. In IEEE International Conference on Computer Vision (ICCV), pp. 2980–2988.
- He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778.
- Hyder, A. A. (2019). Measurement is not enough for global road safety: implementation is key. The Lancet Public Health 4 (1), pp. e12–e13.
- Karuppanagounder and Vijayan (2016). Motorcycle helmet use in Calicut, India: user behaviors, attitudes, and perceptions. Traffic Injury Prevention 17 (3), pp. 292–296.
- Kingma, D. P. and Ba, J. (2014). Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980.
- Ledesma, R. D., et al. (2015). Motorcycle helmet use in Mar del Plata, Argentina: prevalence and associated factors. International Journal of Injury Control and Safety Promotion 22 (2), pp. 172–176.
- Lin, T.-Y., Goyal, P., Girshick, R., He, K., and Dollár, P. Focal loss for dense object detection. IEEE Transactions on Pattern Analysis and Machine Intelligence.
- Liu, B. C., et al. (2004). Helmets for preventing injury in motorcycle riders. Cochrane Database of Systematic Reviews 4, pp. 1–42.
- Oxley, J., et al. (2018). An observational study of restraint and helmet wearing behaviour in Malaysia. Transportation Research Part F: Traffic Psychology and Behaviour 56, pp. 176–184.
- Pigou, L., et al. (2018). Beyond temporal pooling: recurrence and temporal convolutions for gesture recognition in video. International Journal of Computer Vision 126 (2–4), pp. 430–439.
- Redmon, J. and Farhadi, A. (2017). YOLO9000: better, faster, stronger. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6517–6525.
- Russell, B. C., Torralba, A., Murphy, K. P., and Freeman, W. T. (2008). LabelMe: a database and web-based tool for image annotation. International Journal of Computer Vision 77 (1–3), pp. 157–173.
- Salton, G. and McGill, M. J. (1983). Introduction to Modern Information Retrieval. McGraw-Hill.
- Shen, A. (2016). BeaverDam: video annotation tool for computer vision training labels. Master's Thesis, EECS Department, University of California, Berkeley.
- Siebert, F. W., et al. (2019). Patterns of motorcycle helmet use – a naturalistic observation study in Myanmar. Accident Analysis & Prevention 124, pp. 146–150.
- Simonyan, K. and Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
- Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., and Wojna, Z. (2016). Rethinking the Inception architecture for computer vision. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2818–2826.
- Vishnu, C., et al. (2017). Detection of motorcyclists without helmet in videos using convolutional neural network. In International Joint Conference on Neural Networks (IJCNN), pp. 3036–3041.
- Vondrick, C., Patterson, D., and Ramanan, D. (2013). Efficiently scaling up crowdsourced video annotation. International Journal of Computer Vision 101 (1), pp. 184–204.
- Road safety in Myanmar: recommendations of an expert mission invited by the Government of Myanmar and supported by the Suu Foundation. Paris: FIA.
- World Health Organization (2006). Helmets: a road safety manual for decision-makers and practitioners. Geneva: World Health Organization.
- World Health Organization (2015). Global status report on road safety 2015. Geneva: World Health Organization.
- World Health Organization (2017). Powered two- and three-wheeler safety: a road safety manual for decision-makers and practitioners. Geneva: World Health Organization.
- Xuequn et al. (2011). Prevalence rates of helmet use among motorcycle riders in a developed region in China. Accident Analysis & Prevention 43 (1), pp. 214–219.