Detecting Traffic Lights by Single Shot Detection

05/07/2018
by   Julian Müller, et al.
Universität Ulm

Recent improvements in object detection are driven by the success of convolutional neural networks (CNNs), which are able to learn rich features that outperform hand-crafted ones. So far, research in traffic light detection has mainly focused on hand-crafted features, such as the color, shape or brightness of the traffic light bulb. This paper presents a deep learning approach to accurate traffic light detection by adapting a single shot detection (SSD) approach. SSD performs object proposal creation and classification using a single CNN. The original SSD struggles to detect very small objects, which is essential for traffic light detection. Our adaptations make it possible to detect objects much smaller than ten pixels without increasing the input image size. We present an extensive evaluation on the DriveU Traffic Light Dataset (DTLD). We reach both high accuracy and low false positive rates. The trained model is real-time capable, running at ten frames per second on an Nvidia Titan Xp.


I Introduction

Traffic light detection is a key problem for autonomous driving. The foundational publication in traffic light detection is by Lindner et al. [18], who use color and shape for proposal generation. This conventional procedure - creating proposals (also called candidates) from hand-selected features, followed by a verification/classification step - characterized publications in the following decade.
A main drawback of separating the object proposal generation step from classification is runtime and accuracy. Popular region proposal algorithms are not real-time capable [29, 33], often requiring several seconds per image. In addition, hand-crafted features for traffic light detection cannot reach sufficient accuracy [12], saturating clearly below 100 percent.
With the rising success of CNNs, object proposal generation was also performed by sharing a base network with classification [25, 19]. Typically, those networks are trained with a confidence and a localization loss [10], guaranteeing accurate bounding boxes with respect to the intersection over union (IoU) overlap metric. One key problem of those approaches is the detection of small objects. It is mainly caused by pooling operations, which increase the receptive field and reduce the computational effort. However, pooling also decreases the image resolution, leading to difficulties in accurately localizing small objects.

Fig. 1: Exemplary results of our SSD approach for traffic light detection. Our approach is able to detect even objects smaller than or equal to five pixels in width and moreover predicts their state by an additional branch.

In this paper, we present the TL-SSD, an adaption of the original Single Shot Detector (SSD) [19], trained on the large-scale DriveU Traffic Light Dataset [14]. We make the following contributions:

  1. We show that the original approach struggles to detect small objects in later feature layers because the prior box stride is tied to the layer size. We demonstrate how to detect small objects in later layers without subsampling the layer itself.

  2. We replace the original base network with an Inception network [28], which is faster and more accurate.

  3. We extend the approach to state (color) prediction by adding a convolutional layer and extending the loss calculation.

  4. We adapt the class-wise non-maximum suppression to a class-independent one to avoid multiple detections.

  5. We present an extensive evaluation on a large-scale dataset with very promising results.

We organize our paper as follows: The next section briefly describes the state of the art. We differentiate classical traffic light detection methods from CNN-based methods. Section III presents the TL-SSD, explaining the original approach and all modifications. Section IV presents extensive experiments on the DTLD [14]. The paper closes with a conclusion in Section V.

Fig. 2: Single shot detection with an Inception_v3 base network. The input image is of size 2048x512 pixels. There are five convolutional and two pooling layers in the beginning. Three Inception_A layers follow, which consist of several convolutional layers with filters of different sizes. Reduction layers reduce the layer size. Afterwards, four Inception_B layers, a reduction layer and two Inception_C layers follow. Prior boxes are created on top of several convolutional layers. Bounding boxes are predicted by additional convolutional layers, which predict offsets with respect to the prior boxes. Confidence and state are predicted by another convolutional layer. A non-maximum suppression is performed on the resulting predictions.

II Related Work

We separate the related work into three parts. First, approaches generating object proposals with ConvNets are presented. Afterwards, we briefly present publications on classical traffic light detection, followed by CNN-based approaches in detail.
Object proposals from CNNs: OverFeat [26] predicts bounding box coordinates from a fully-connected layer. A more general approach is MultiBox [10], predicting bounding boxes for multi-class tasks from a fully-connected layer. YOLO [23] predicts bounding boxes from a single layer, whereas classification and proposal generation share the same base network. SSD [19] enhances this approach by predicting bounding boxes from different layers. Faster R-CNN [25] simultaneously predicts object bounds and objectness scores at each position.
Classical traffic light detection: There are various publications on traffic light detection from color [18, 16, 7, 5, 15]. Mostly, they use simple thresholding in different color spaces (RGB, HSV). A popular approach from de Charette et al. [6] uses the white top-hat operator to extract blobs from grayscale images. Other methods use the shape of the traffic light bulb for proposal generation [18, 22, 30]. Stereo vision is used as an additional source in [13]. Other publications also use a priori information from maps [11, 17, 2] in their recognition system. Multi-camera systems are described in [21] and [1].
Traffic light detection by CNNs: The DeepTLR [31], presented in 2016, was the first method to detect traffic lights with CNNs. Its net returns a pixel-segmented image, on which a bounding box regression is applied for each class. Bach et al. [1] also use CNN segmentation for object detection in a multi-camera setup. Behrendt et al. [3] present a complete detection, tracking and classification system based on CNNs and deep neural networks.

III TL-SSD

This section presents the TL-SSD, a modified single shot detector for traffic light detection. Section III-A presents the basic principle of SSD. Section III-B explains why we use Inception instead of VGG as the base network. Section III-C discusses and analyzes the problem SSD has with small objects and shows an adaption to also detect small objects without increasing the input image size. Section III-D analyzes the receptive field of the net and clarifies in which layers which objects can be detected. Section III-E briefly explains how to adapt the non-maximum suppression to avoid multiple detections of a single object. Section III-F deals with modifications made for state prediction.

III-A Basic Principle

Figure 2 shows an illustration of the SSD architecture used for the prediction of bounding boxes. Single shot detection is based on a common CNN for feature extraction. In addition, one or several convolutional layers are set up on existing feature maps and predict a fixed number of bounding boxes and a corresponding confidence score. The predicted boxes are offsets with respect to predefined and fixed prior boxes. Prior boxes are distributed over a feature map and located in the center of each cell (see Figure 3(a)).
Training: SSD optimizes both a localization and a confidence loss as a weighted sum, leading to the loss function

\[ L(x, c, l, g) = \frac{1}{N}\left(L_{conf}(x, c) + \alpha L_{loc}(x, l, g)\right) \qquad (1) \]

for the \(N\) matched prior boxes. The localization loss is a regression on the offsets between prior boxes and predictions. Position (center) as well as size (width, height) is included. The confidence loss is calculated as a softmax with cross-entropy loss. Details are given in [19].
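The weighted sum above can be sketched in a few lines (a hedged illustration; `smooth_l1` and the argument names are our own assumptions, not the authors' code — SSD regresses the offsets with a Smooth L1 penalty and adds a cross-entropy confidence term):

```python
def smooth_l1(x):
    """Smooth L1 penalty commonly used for box offset regression."""
    ax = abs(x)
    return 0.5 * x * x if ax < 1.0 else ax - 0.5

def ssd_loss(conf_loss, offset_errors, num_matched, alpha=1.0):
    """Weighted sum (L_conf + alpha * L_loc) / N over N matched prior boxes."""
    if num_matched == 0:
        return 0.0  # no matched priors, no gradient signal
    loc_loss = sum(smooth_l1(e) for e in offset_errors)
    return (conf_loss + alpha * loc_loss) / num_matched
```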

(a) Original Priors
(b) Modified Priors
(c) Stride error
Fig. 3: Comparison of original (a) and modified priors (b). The original priors are placed in the center of each feature cell. In order to also detect smaller objects, the stride has to be smaller. Therefore, we adapt the priors to arbitrary positions in the feature cell.

III-B Inception-v3 instead of VGG

Original SSD uses the well-established VGG-16 as the base network. However, traffic light recognition is a task which requires both high accuracy and real-time capability. As shown in [4], there exist networks with a better accuracy vs. speed trade-off than VGG. We decided to use Inception-v3 [28], one of the base networks with the highest top-1 accuracy at moderate speed. A further benefit are the Inception modules (Figure 2), which concatenate different receptive fields (see Table I). Thereby, context information and local information are combined, which is needed to determine the traffic light state (local information) and the existence of a traffic light (context information, see Section III-D).

III-C The Dilemma of Small Objects

The design of convolutional neural networks tends toward rising depth with more convolutional layers, as later layers are known to generate richer features than early layers [27]. During training of SSD, predefined prior boxes are matched with the ground truth objects. Whereas the size(s) and aspect ratio(s) can be chosen arbitrarily, the stride of each default box is defined by the size of the feature layer. Original SSD places the center of a prior box at \(\left(\frac{i+0.5}{|f|}, \frac{j+0.5}{|f|}\right)\), where \(|f|\) is the size of the chosen feature layer and \(i\) and \(j\) are the feature layer coordinates. Table I lists the sizes as well as the corresponding prior box stride of important convolutional layers of the Inception_v3 net, which we use for our main experiments. In late Inception layers, strides of 8, 16 or 32 pixels with regard to the input image result. Those high strides lead to a high risk of missing traffic light objects. In the following, we analyze the maximum allowed stride with respect to the smallest object to be detected.
Stride with respect to object size: Figure 3(c) illustrates a traffic light ground truth circumscribed by an ideal bounding box O. The black hypothesis H shows a not perfectly aligned detection due to a higher bounding box stride. We denote the positioning error as \(\Delta\). To express the effect of a too high stride on the detection accuracy, we use the intersection over union metric, defined as

\[ \mathrm{IoU} = \frac{|O \cap H|}{|O \cup H|} \qquad (2) \]

It expresses the overlap between a ground truth and a detection bounding box. A common threshold counting an object as detected is \(\mathrm{IoU} \geq 0.5\). Our goal is to determine the allowed stride with respect to the IoU we want to reach. For a box of width \(w\) shifted horizontally by the stride error \(\Delta\), the IoU can be derived according to Figure 3(c) as

\[ \mathrm{IoU}(\Delta) = \frac{w - \Delta}{w + \Delta} \qquad (3) \]
layer          height  width  stride  receptive field (min/max)
Input           512    2048      1    1x1
conv_1          255    1023      2    3x3
conv_2          253    1021      2    7x7
conv_3          253    1021      2    11x11
conv_4          124     508      4    23x23
inception_a1     62     254      8    31x31 / 63x63
inception_a2     62     254      8    31x31 / 95x95
inception_a3     62     254      8    31x31 / 127x127
inception_b1     31     127     16    47x47 / 351x351
inception_b2     31     127     16    47x47 / 543x543
inception_b3     31     127     16    47x47 / 735x735
inception_b4     31     127     16    47x47 / 927x927
inception_c1     15      63     32    79x79 / 1183x1183
inception_c2     15      63     32    79x79 / 1311x1311
TABLE I: Important layer sizes of Inception_v3 with the corresponding stride and theoretical receptive field. After the two Inception_C modules, the original image size is reduced by a factor of 32.

leading to

\[ \Delta \leq w \cdot \frac{1 - \mathrm{IoU}}{1 + \mathrm{IoU}} \qquad (4) \]

In order to reach \(\mathrm{IoU} = 0.5\), a positioning error of \(\Delta \leq w/3\) is necessary. Since the worst-case positioning error is half the prior box stride, a maximum stride of \(2w/3 \approx 3.3\) pixels is needed to guarantee a detection of objects with a width of 5 pixels. As seen in Table I, only layers conv_1 - conv_3 satisfy this condition. From experience, those layers do not provide features strong enough for accurate detection and a small number of false positives.
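The derivation can be checked numerically; the following sketch (function and variable names are ours) evaluates Equations (3) and (4) for two same-size boxes shifted by a stride error:

```python
def iou_for_stride_error(w, delta):
    """IoU of an ideal box of width w and a copy shifted horizontally by delta."""
    return (w - delta) / (w + delta)

def max_stride(w, iou=0.5):
    """Largest prior box stride such that the worst-case positioning error
    (half the stride) still reaches the required IoU."""
    delta_max = w * (1.0 - iou) / (1.0 + iou)
    return 2.0 * delta_max

# A 5 pixel wide object at IoU = 0.5 tolerates a stride of roughly 3.3 pixels,
# which among the layers in Table I only the early conv layers (stride 2) satisfy.
print(round(max_stride(5.0), 2))
```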
Priorbox stride adaption: Therefore, we propose to adapt the prior box centers. We create an arbitrary number of priors per feature cell, which can be described by

\[ \left( \frac{i + o_k}{|f|}, \frac{j + o_l}{|f|} \right) \qquad (5) \]

where \(o_k\) and \(o_l\) are offset vectors in the feature cell domain, i.e. \(o_k, o_l \in [0, 1)\). We propose to choose the offsets according to the results obtained from Equation (4). Figure 3(b) illustrates examples of possible priors (red) in one single cell (blue) with an aspect ratio of 3. By means of this adaption, the stride is independent of the feature layer size.
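A minimal sketch of this placement scheme (assuming a square feature map of size f and normalized coordinates; the function name and default offsets are our own illustration):

```python
def prior_centers(f, offsets=(0.25, 0.75)):
    """Normalized prior box centers for an f x f feature layer, with several
    sub-cell offsets per axis instead of the single original offset 0.5."""
    return [((i + ox) / f, (j + oy) / f)
            for j in range(f) for i in range(f)
            for oy in offsets for ox in offsets]

# Two offsets per axis halve the effective stride of the layer;
# more offsets reduce it further, independent of the feature map size.
```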

Fig. 4: Traffic light detection needs context information. Especially small objects are not distinguishable from false positives, such as rear lights of vehicles.

III-D Receptive Field: How Much Context is Needed?

The receptive field of a CNN can be described as the region in the input image a feature is "looking at". The receptive field is increased either by applying kernels in convolutional layers or by using pooling layers.
The core idea behind SSD is to detect larger objects in late layers, as late layers "look" at larger regions of the input image. Nevertheless, besides the size of the object, some object types require more context information than others.
Traffic lights have a visual appearance which seems unique and easy to detect at first glance. However, Figure 4 illustrates that traffic lights are hard to differentiate from the background without any context information. Especially around rear lights of vehicles, window panes or in trees, many potential false positives appear. Adding context information as in Figure 4 enables a better delimitation. For a better understanding, the receptive field of important layers is given in Table I. Please note that we analyzed the theoretical instead of the effective receptive field, see [20]. The receptive field for each layer is calculated as

\[ r_l = r_{l-1} + (k_l - 1) \cdot s_{l-1} \qquad (6) \]

where \(r_{l-1}\) is the receptive field of the previous layer, \(k_l\) is the kernel size and \(s_{l-1}\) is the cumulative stride, i.e. the ratio between the input image size and the feature layer size.
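Equation (6) can be evaluated layer by layer; this small helper (our own sketch, not the authors' code) runs the recursion while tracking the cumulative stride:

```python
def receptive_fields(layers):
    """layers: sequence of (kernel_size, stride) per layer.
    Returns (receptive_field, cumulative_stride) after each layer."""
    r, s = 1, 1  # a single input pixel "sees" itself
    out = []
    for k, stride in layers:
        r += (k - 1) * s  # Equation (6): grow by (k-1) steps of the input grid
        s *= stride
        out.append((r, s))
    return out

# Hypothetical stack of three 3x3 layers with strides 2, 2 and 1:
print(receptive_fields([(3, 2), (3, 2), (3, 1)]))
```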
Feature layer concatenation: Adding context helps to reduce false positives but can also lead to a loss of bounding box localization accuracy. Furthermore, state determination (the color of the lamp) requires clearly less context. In order to obtain all relevant information, we propose feature layer concatenation for bounding box prediction as shown in Figure 5. PSPNet [32] has shown the effectiveness of feature layer concatenation for semantic segmentation. We therefore combine early and late feature layers, so that early layers can provide accurate location and state information while late layers decide whether an object is a traffic light or not. For this purpose, the late layers are interpolated. Box and confidence prediction is done similarly to the original SSD.

Fig. 5: Feature layer concatenation for fusion of local and context information. We concatenate early and late layers and use them for bounding box and confidence prediction. Additional state prediction can be performed by another convolutional layer.

III-E Non-Maximum Suppression

In the original SSD, a non-maximum suppression (NMS) is used to suppress multiple detections on a single object. We encode the state of the traffic light as a separate class. Performing the NMS class-wise leads to multiple detections (same position, but differing in state and confidence) of one traffic light instance. We therefore adapt the NMS to a class-independent one. The final state of the traffic light is picked from RoI number

\[ n^{*} = \arg\max_{n \in M} c_n \qquad (7) \]

where \(M\) contains all elements assigned to one real object and \(c_n\) is the confidence of RoI \(n\). In other words, we pick the class of the RoI with the highest confidence value. All elements in \(M\) meet the constraint \(\mathrm{IoU}(B_n, B_m) > 0.35\), i.e. the overlap between the predictions is larger than 0.35. We decided for a relatively small overlap threshold, as the dataset contains a high number of small objects and high overlaps are hard to reach for small objects.
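The class-independent suppression can be sketched as follows (a hedged illustration; the box tuple format `(x1, y1, x2, y2, confidence, state)` is our assumption):

```python
def box_iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2, confidence, state)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def class_independent_nms(dets, thresh=0.35):
    """Greedy NMS ignoring the state class: overlapping detections are
    suppressed regardless of state, so the survivor keeps the state of the
    highest-confidence member (Equation (7))."""
    keep = []
    for d in sorted(dets, key=lambda d: d[4], reverse=True):
        if all(box_iou(d, k) <= thresh for k in keep):
            keep.append(d)
    return keep
```

A class-wise NMS would keep both a "green" and a "red" hypothesis on the same lamp; the class-independent variant keeps only the most confident one.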

(a) original (VGG)
(b) Adapted sizes and ratios
(c) Adapted stride, size and ratios
Fig. 6: Matched ground truths with respect to the bounding box width. The left figure shows the poor coverage of the original SSD prior boxes applied to the DriveU Traffic Light Dataset. After adapting the sizes and aspect ratios, a much better coverage is reached. However, small objects cannot be covered due to the high stride (16, see Table I). After adaption of the prior box layer, we reach the coverage in (c), in which objects down to a width of 3 pixels are mostly covered.

III-F Extensions for State Prediction

We tried two different ways to additionally predict the state of traffic lights. A first approach was to replace the binary classification task by a multi-class task, in which we assign each label to one state class, i.e. red, yellow, green or off. In total, SSD then predicts 5 classes, including one additional background class. The network was trained similarly to the binary classification task according to Equation (1). Although this enabled state detection, we observed a significant decrease in accuracy on the pure detection task (traffic light vs. background). One possible explanation is that the foreground confidence is distributed over all states, which leads to a more dominant background confidence. In order to avoid this problem, we adapted the SSD approach as follows: the confidence layer still performs the binary classification (traffic light vs. background), whereas, as shown in Figure 5, an additional layer performing state prediction is added. Optimization is done using a separate state loss, leading to the overall loss

\[ L = \frac{1}{N}\left(L_{conf} + \alpha L_{loc} + \beta L_{state}\right) \qquad (8) \]

with a state weight factor \(\beta\). Unlike the confidence loss, which is calculated as a softmax with cross-entropy loss, we use a sigmoid loss function defined as

\[ L_{state} = -\sum_i \left[ y_i \log \sigma(x_i) + (1 - y_i) \log\left(1 - \sigma(x_i)\right) \right] \qquad (9) \]

over multiple state confidences \(x_i\), where \(\sigma\) denotes the sigmoid function and \(y_i\) the state label. [24] have shown that this can be beneficial for highly correlated classes (such as the gender of humans or the state of traffic lights). This way, the pure detection accuracy is not affected compared to binary training, and an additional state prediction is possible.
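The sigmoid state loss of Equation (9) can be sketched as follows (names assumed; written for clarity, a numerically stable version would clamp the probabilities):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def state_loss(logits, labels):
    """Sigmoid cross-entropy: one independent sigmoid per state
    (red, yellow, green, off) instead of a softmax over all states."""
    loss = 0.0
    for x, y in zip(logits, labels):
        p = sigmoid(x)
        loss -= y * math.log(p) + (1.0 - y) * math.log(1.0 - p)
    return loss
```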

IV Experiments

IV-A Dataset

For training our TL-SSD approach, we use the DriveU Traffic Light Dataset [14]. It is the largest published traffic light dataset, containing more than 230 000 hand-labeled traffic lights.
Training set: The dataset includes over 300 different classes with very specific tags such as relevancy, color, number of lamps, orientation and pictogram (pedestrian, tram, arrows …). For practical usage of a traffic light detection system, mainly front-oriented traffic lights, i.e. traffic lights facing the vehicle's road, have to be detected. Traffic lights for pedestrians or trams as well as turned traffic lights (valid for the oncoming traffic) are negligible. During training, we calculate the confidence loss as a two-class problem (traffic light vs. background) and the state loss as a 4-class problem (red, yellow, green, off).

            Cities  Images  Sequences  Objects
Training      11    28526      1478    159902
Evaluation    11    12453       632     72137
TABLE II: DTLD statistics used for training and evaluation.

Evaluation set: For evaluation we use the proposed split of the dataset. Thus, the evaluation set contains around one third of all annotations. Detailed statistics of both sets can be seen in Table II. The term sequence describes one unique intersection with a varying number of unique traffic lights.
Limitations of the evaluation set: Varying labeling rules are one key problem of the dataset for the purpose of evaluation. The majority of the images are annotated with front-facing traffic lights only. A small part also contains annotations of turned traffic lights (e.g. for pedestrians or oncoming traffic). Detections of those traffic lights are counted as false positives, as they are not consistently annotated. Nevertheless, it is the largest and most carefully annotated dataset and thus predestined for our use case.
Don't-care (dc) objects: For evaluation, we apply several filters. All traffic lights not tagged as front, i.e. traffic lights not valid for the direction of the vehicle, are set as dc objects. Traffic lights valid for pedestrians, cyclists, trams or buses are also tagged as dc. In some experiments, we use a minimum detection width and tag all smaller annotations as dc as well. Detections on dc objects are not counted as false positives, and undetected dc objects are not counted as false negatives.

IV-B Metrics

Recall/True positive rate: We express the percentage of detected traffic lights by the true positive rate, also called recall, defined as \(\mathrm{TPR} = \frac{TP}{TP + FN}\), where a true positive (TP) is counted for an overlap larger than an IoU threshold of 0.3 or 0.5 according to Equation (2). False positives (FP) are counted for predictions not overlapping a ground truth by the defined overlap threshold. Multiple detections on one ground truth object are also counted as false positives. We evaluate all trainings by an ROC curve (miss rate vs. false positives per image (FPPI)). The running parameter of the ROC curve is the confidence threshold \(\theta_c\), which is between 0 and 1. We evaluate for different IoU threshold values. Thus, the ROC curve can be written as a curve of miss rate over FPPI, parameterized by \(\theta_c\).
Log-average miss rate: In order to compare trainings by one single metric, we use the log-average miss rate

\[ \mathrm{LAMR} = \exp\left( \frac{1}{3} \sum_{i=1}^{3} \ln \mathrm{mr}(f_i) \right) \qquad (10) \]

where \(\mathrm{mr}(f_i)\) is the miss rate at FPPI value \(f_i\) and the \(f_i\) are three characteristic points of the ROC curve. It is a metric also used in many popular pedestrian detection publications (see [8], [9]). We picked FPPI values that correspond to suitable operating points for subsequent modules (e.g. tracking).
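Equation (10) is simply the geometric mean of the sampled miss rates; a short sketch (our own helper):

```python
import math

def log_average_miss_rate(miss_rates):
    """Geometric mean of the miss rates sampled at the chosen
    characteristic FPPI operating points (Equation (10))."""
    logs = [math.log(max(mr, 1e-10)) for mr in miss_rates]  # guard mr = 0
    return math.exp(sum(logs) / len(logs))
```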

IV-C Coverage between Prior Boxes and Ground Truths

SSD only trains on ground truths covered by at least one prior box with a minimum intersection over union value. We choose an IoU threshold of 0.3 to also cover very small annotations. Figure 6(a) illustrates the poor coverage without any adaptations, using the original SSD parameters. Figure 6(b) shows the improved coverage when adapting the size and aspect ratio of the prior boxes. We picked a fixed aspect ratio of 0.3 and multiple widths from 4 up to 38 pixels. However, as prior boxes are generated in layer inception_b4, the stride (16, see Table I) is too high to also cover small objects. With our adaptations described in Section III-C, the final coverage of Figure 6(c) is reached. We choose the offset vectors (Equation (5)) according to the allowed step size derived in Equation (4), leading to offsets of 0.16 in this layer, which corresponds to 2-3 pixels in width and 6-9 pixels in height.
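The coverage criterion can be sketched as follows (a hedged illustration; axis-aligned boxes as `(x1, y1, x2, y2)` and the function names are our assumptions):

```python
def iou(a, b):
    """IoU of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def covered(gt, priors, thresh=0.3):
    """A ground truth enters training only if at least one prior box
    overlaps it with IoU >= thresh; reducing the prior box stride adds
    candidate priors and therefore improves coverage of small objects."""
    return any(iou(gt, p) >= thresh for p in priors)
```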

IV-D Deeper Network - Better Results?

For this experiment, we trained networks with only one prior box and prediction layer, creating all desired sizes. In order to guarantee a fair comparison, we adapted the step offset parameters (half the step offset in b-layers because of their half stride compared to c-layers). Figure 7 illustrates the log-average miss rate for different prior box depths. As expected, very early layers have weak features and little context. Late layers have too large a context, leading to a loss of detailed information. The sweet spot is layer inception_b4 with a LAMR of 0.02.

Fig. 7: Influence of the network depth on the LAMR result. Predicting boxes in Inception_b4 yields the best results.

IV-E Using Multiple Output Layers

Original SSD achieved better results by using multiple layers for bounding box prediction. We performed several experiments on using multiple layers for detection, in which later layers have to detect larger traffic lights and earlier layers have to detect smaller objects. Results can be seen in Table III. They do not confirm the findings of the SSD authors: the best results are merely comparable to the best results with one layer only. One possible explanation is that all objects are in a comparable size range. Furthermore, the requirements on the receptive field are approximately equal, as small objects typically need a relatively higher receptive field than large objects.

IV-F Does More Data Help?

One common statement about CNNs is that more data automatically helps to improve generalization and overall results. To investigate the impact of the amount of data on the recall, we take the best model of the previous experiment, generating prior boxes in layer inception_b4. We generate four sub-training sets consisting of 25, 50, 75 and 100 percent of the training set. Figure 8 illustrates the LAMR result for all four subsets. An almost linear relation can be seen, which suggests that even more data would enhance the detection results. The point of saturation is not yet reached.

Fig. 8: Influence of the amount of data on the LAMR.
Prior box layers: Inception_b1, Inception_b2, Inception_b3, Inception_b4, Inception_c1, Inception_c2

LAMR (IoU=0.5)   Runtime [ms]
0.064            101
0.060            106
0.017            111
0.046            105
0.029            133
0.031            145
0.016            121
0.015            117
0.020            122
0.016            165
0.015            119

* concatenation of layers
without stride adaption

TABLE III: All results of TL-SSD. Check marks illustrate on top of which layer(s) predictions are made. Our prior box stride adaptations improved the LAMR by 4 percentage points in layer Inception_b4.

IV-G Results over Ground Truth Width

The detection results with respect to the distance of an object are of particular interest for autonomous driving. In the case of traffic lights, comfortable braking is desired, which requires detection at high distances. Figure 9(b) illustrates the detection results with respect to the ground truth width. The respective detection rate is written in red, whereas the absolute number of ground truths at the respective width is written in white. With rising FPPI, the recall increases, especially for very small objects. Saturation with high recall values occurs from approximately ten pixels in width on.

(a) ROC curves for the pure detection result
(b) Detailed detection results for IoU=0.3 and FPPI=1.0
Fig. 9: The left figure shows ROC curves for the traffic light states green, yellow, red and all. The discriminant threshold is the confidence value for each prediction, which is between 0 and 1.0. Green traffic lights show a lower miss rate than red traffic lights. Results for a required overlap of 0.3 are clearly better than 0.5, as high overlaps are hardly reachable for small objects. The right figure shows detailed results with respect to the ground truth width. Results are plotted for IoU=0.3 and FPPI=1.0. Recall increases with width. Red numbers show the exact recall value, white values show the absolute number of annotations at the specific width.

IV-H Track-wise Evaluation

DTLD additionally contains track identities, which group unique traffic light instances over multiple frames. A track-wise evaluation is of particular interest for a potential system, as tracking can compensate for missing detections; thus the percentage of detected tracks is an interesting result. Figure 10 shows the track detection rate, calculated as the number of correct detections of one single track divided by its number of occurrences (i.e. the number of frames in which the object appears). Results show that for higher FPPI values the track detection rate increases up to 95 percent. In other words, 95 percent of all objects in DTLD are detected in almost all frames in which they occur. There still exist several tracks which are not detected even once. However, a closer look shows that those cases are often tracks appearing in only one single frame or very rare cases (like mirror images of traffic lights).
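The per-track rate described above can be computed with a one-liner (a sketch; the data layout, a mapping from track id to per-frame detection flags, is our assumption):

```python
def track_detection_rates(tracks):
    """tracks: mapping track_id -> list of per-frame detection flags.
    Returns the fraction of frames in which each track was detected."""
    return {tid: sum(hits) / len(hits) for tid, hits in tracks.items()}

# Hypothetical example: a track seen in four frames, detected in three.
print(track_detection_rates({"tl_17": [True, True, False, True]}))
```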

V Conclusion

In this paper, we presented the TL-SSD, an adaption of a single shot detector for traffic light detection and small object detection in general. We replaced the VGG-16 base network with the faster and more accurate Inception_v3. Furthermore, we adapted the prior box generation to allow a smaller stride in late network layers, which is essential for small object detection, and proved this in a theoretical manner. An adaption of the non-maximum suppression helps to avoid multiple detections on a single object. Furthermore, we predict the state of the traffic lights using an additional branch. Extensive experiments on the DriveU Traffic Light Dataset were presented, which analyzed the properties of our method. We evaluated different operating points differing in the number of false positives per image. We showed that more data leads to better results and that the network depth has to be chosen carefully. Recall values of up to 95 percent were reached even for small objects; values increase up to 98-100 percent for larger objects at false positive rates between 0.1 and 10 FPPI.

Fig. 10: Evaluation of the percentage track detection rate. For higher false positive rates, around 95 percent of the objects are detected in 90-100 percent of the frames they appear.


Fig. 11: Prediction results from our trained SSD on the DriveU Traffic Light Dataset. The color of the bounding box indicates the predicted state. Very small objects with a few pixels in width can be detected after all adaptations described in this paper.

References

  • [1] M. Bach, S. Reuter, and K. Dietmayer. Multi-camera traffic light recognition using a classifying labeled multi-Bernoulli filter. In 2017 IEEE Intelligent Vehicles Symposium (IV), pages 1045–1051, June 2017.
  • [2] D. Barnes, W. Maddern, and I. Posner. Exploiting 3D semantic scene priors for online traffic light interpretation. IEEE Intelligent Vehicles Symposium, Proceedings, 2015-August:573–578, 2015.
  • [3] K. Behrendt, L. Novak, and R. Botros. A deep learning approach to traffic lights: Detection, tracking, and classification. In 2017 IEEE International Conference on Robotics and Automation (ICRA), pages 1370–1377, May 2017.
  • [4] A. Canziani, A. Paszke, and E. Culurciello. An Analysis of Deep Neural Network Models for Practical Applications. CoRR, abs/1605.07678, 2016.
  • [5] C. C. Chiang, M. C. Ho, H. S. Liao, A. Pratama, and W. C. Syu. Detecting and recognizing traffic lights by genetic approximate ellipse detection and spatial texture layouts. International Journal of Innovative Computing, Information and Control, 7(12):6919–6934, 2011.
  • [6] R. De Charette and F. Nashashibi. Real time visual traffic lights recognition based on spot light detection and adaptive traffic lights templates. IEEE Intelligent Vehicles Symposium, Proceedings, pages 358–363, 2009.
  • [7] M. Diaz-Cabrera, P. Cerri, and J. Sanchez-Medina. Suspended traffic lights detection and distance estimation using color features. In IEEE Conference on Intelligent Transportation Systems (ITSC), pages 1315–1320, Anchorage, AK, 2012. IEEE.
  • [8] P. Dollar, C. Wojek, B. Schiele, and P. Perona. Pedestrian detection: A benchmark. In 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 304–311, June 2009.
  • [9] P. Dollar, C. Wojek, B. Schiele, and P. Perona. Pedestrian Detection: An Evaluation of the State of the Art. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(4):743–761, April 2012.
  • [10] D. Erhan, C. Szegedy, A. Toshev, and D. Anguelov. Scalable Object Detection Using Deep Neural Networks. In 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2155–2162, 2014.
  • [11] N. Fairfield and C. Urmson. Traffic light mapping and detection. In 2011 IEEE International Conference on Robotics and Automation (ICRA), 2011.
  • [12] A. Fregin, J. Müller, and K. Dietmayer. Feature detectors for traffic light recognition. In 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), pages 339–346, Oct 2017.
  • [13] A. Fregin, J. Müller, and K. Dietmayer. Three ways of using stereo vision for traffic light recognition. In 2017 IEEE Intelligent Vehicles Symposium (IV), pages 430–436, June 2017.
  • [14] A. Fregin, J. Müller, U. Kressel, and K. Dietmayer. The DriveU Traffic Light Dataset : Introduction and Comparison with Existing Datasets. In 2018 IEEE International Conference on Robotics and Automation, ICRA 2018, Brisbane, Australia, May 21 - May 25, 2018, forthcoming.
  • [15] J. Gong, Y. Jiang, G. Xiong, C. Guan, G. Tao, and H. Chen. The recognition and tracking of traffic lights based on color segmentation and CAMSHIFT for intelligent vehicles. In 2010 IEEE Intelligent Vehicles Symposium (IV), pages 431–435, 2010.
  • [16] H. Kim, Y. Shin, S.-g. Kuk, J. Park, and H. Jung. Night-Time Traffic Light Detection Based On SVM with Geometric Moment Features. World Academy of Science, Engineering and Technology, 7(4):454–457, 2013.
  • [17] J. Levinson, J. Askeland, J. Dolson, and S. Thrun. Traffic light mapping, localization, and state detection for autonomous vehicles. In 2011 IEEE International Conference on Robotics and Automation (ICRA), pages 5784–5791, 2011.
  • [18] F. Lindner, U. Kressel, and S. Kaelberer. Robust recognition of traffic signals. In IEEE Intelligent Vehicles Symposium, 2004, pages 49–53, June 2004.
  • [19] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, and S. E. Reed. SSD: Single Shot MultiBox Detector. CoRR, abs/1512.02325, 2015.
  • [20] W. Luo, Y. Li, R. Urtasun, and R. Zemel. Understanding the Effective Receptive Field in Deep Convolutional Neural Networks. In Proceedings of the 30th International Conference on Neural Information Processing Systems, NIPS’16, pages 4905–4913, USA, 2016. Curran Associates Inc.
  • [21] J. Müller, A. Fregin, and K. Dietmayer. Multi-camera system for traffic light detection: About camera setup and mapping of detections. In 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), pages 165–172, Oct 2017.
  • [22] M. Omachi and S. Omachi. Traffic light detection with color and edge information. In 2009 2nd IEEE International Conference on Computer Science and Information Technology (ICCSIT), pages 284–287, Aug 2009.
  • [23] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi. You Only Look Once: Unified, Real-Time Object Detection. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 779–788, June 2016.
  • [24] J. Redmon and A. Farhadi. YOLOv3: An Incremental Improvement. CoRR, abs/1804.02767, 2018.
  • [25] S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. In Advances in Neural Information Processing Systems 28 (NIPS), pages 91–99, 2015.
  • [26] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun. OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks. CoRR, abs/1312.6229, 2013.
  • [27] C. Szegedy, S. Reed, P. Sermanet, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1–9, June 2015.
  • [28] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the Inception Architecture for Computer Vision. CoRR, abs/1512.00567, 2015.
  • [29] J. Uijlings, K. van de Sande, T. Gevers, and A. Smeulders. Selective Search for Object Recognition. International Journal of Computer Vision, 2013.
  • [30] M. Omachi and S. Omachi. Detection of traffic light using structural information. In 2010 IEEE 10th International Conference on Signal Processing (ICSP), volume 2, pages 809–812, 2010.
  • [31] M. Weber, P. Wolf, and J. M. Zöllner. DeepTLR: A single deep convolutional network for detection and classification of traffic lights. In 2016 IEEE Intelligent Vehicles Symposium (IV), pages 342–348, June 2016.
  • [32] H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia. Pyramid Scene Parsing Network. CoRR, abs/1612.01105, 2016.
  • [33] L. Zitnick and P. Dollar. Edge Boxes: Locating Object Proposals from Edges. In European Conference on Computer Vision (ECCV), September 2014.