Traffic surveillance systems are broadly used to monitor traffic conditions, and vehicle detection plays a significant role in many vision-based traffic surveillance applications. Vehicles need to be located in the videos or images of the traffic scene. Further processing, such as vehicle tracking and vehicle counting, can then be built on the obtained vehicle locations. The extracted bounding box containing the vehicle can also be collected for other uses, such as vehicle type and model recognition.
However, a fair number of concerns remain in the development of vehicle detection technology. One of them is occlusion, which hinders accurate vehicle detection. In real application scenarios of traffic surveillance systems, detection performance can also be influenced by varying illumination and weather conditions. Vehicles and other objects cast shadows, which easily give rise to false positives in the detection procedure. Different vehicles may differ in shape, size and color, and varying poses produce different appearances for the same vehicle, which makes vehicle detection even more challenging. Previously, different feature extraction techniques have been employed for vehicle detection, relying on the rigid characteristics of vehicles and unique part-based features. Recently, the Convolutional Neural Network (CNN) has proved to be a promising approach for extracting features from regions of interest in images, and many CNN-based methods have been proposed for vehicle detection and classification [3, 4, 5]. Nonetheless, superior detection accuracy and low processing latency can hardly be achieved at the same time.
In this study, we deploy a focal loss convolutional neural network based object detection method, RetinaNet, for the vehicle detection task on the DETRAC dataset. Our experimental results show that RetinaNet can be adjusted to perform faster and more accurate vehicle detection than previous methods.
II Overview of Focal Loss Dense Object Detector
II-A Evolution of object detection methods
A number of traditional and classic object detection methods have been developed over the years. First, the sliding-window approach was proposed, in which a classifier is applied over a dense image grid. Representative early studies [8, 9] utilized Convolutional Neural Networks for handwritten digit recognition. The use of boosted object detectors for face detection was explored in [10], which made the proposed methods widely accepted in the related area. The studies of integral channel features and HOG led to breakthroughs in pedestrian detection. DPMs made dense detectors applicable to general object detection and continuously achieved remarkable results on PASCAL. However, with the revival of deep learning based methods for computer vision, the traditional sliding-window approach was overtaken by two-stage detectors, which have dominated object detection lately.
Among two-stage object detectors, Selective Search is earlier work in which the first stage generates sparse proposals that may contain objects and the second stage classifies each proposal as foreground or background. R-CNN leverages a Convolutional Neural Network for the second-stage classification task and achieves higher accuracy in object detection. However, R-CNN passes each object proposal in an image through the CNN independently for feature extraction, which leads to large latency when executing detection. To accelerate this, SPPnet and Fast R-CNN pass the entire input image through the CNN only once. As a further development of the two-stage object detector, in Faster R-CNN the first-stage Convolutional Neural Network (CNN) generates Region of Interest (ROI) proposals, while the second-stage CNN both refines the region proposals and classifies the objects. The critical part is that the Region Proposal Network (RPN) shares the full-image convolutional features with the detection network. Based on the Faster R-CNN framework, many improvements have been deployed [21, 22, 23, 24, 25].
Among one-stage object detectors, OverFeat was one of the pioneering works based on deep networks. Later, SSD [27, 28] and YOLO [29, 30] became the typical one-stage methods. Huang et al. [31] discuss and analyze the accuracy and speed trade-offs among different CNN based object detectors. As their work shows, two-stage object detectors normally perform more accurately than one-stage object detectors, while one-stage object detectors run faster. Only recently has the one-stage detector RetinaNet achieved accuracy comparable to two-stage detectors while still maintaining fast speed.
Most one-stage detectors face the problem of class imbalance: the detector evaluates a huge number of candidate locations, only a few of which contain objects of interest. Easy negatives, which include little useful information, make the training procedure rather inefficient; worse, they can produce degenerate training models. Many studies [32, 10, 13, 33, 28] employ hard negative mining to gain more information from hard samples during training, and more complicated sampling or reweighting methods are explored in [34]. The focal loss, introduced in the next part, is proposed to solve the class imbalance issue.
II-B Focal Loss Dense Object Detector
The normal Cross Entropy (CE) loss for binary classification is shown below:

CE(p, y) = -log(p) if y = 1, and CE(p, y) = -log(1 - p) otherwise.

In the above equation, y ∈ {-1, 1} specifies the ground-truth class and p ∈ [0, 1] is the model's estimated probability for the class y = 1. For notational convenience we define p_t = p if y = 1 and p_t = 1 - p otherwise, so that CE(p, y) = CE(p_t) = -log(p_t). As analyzed in [6], this regular Cross Entropy loss function can easily be influenced by the sample imbalance between the foreground and background classes, which unfortunately leads to instability in one-stage object detector training. The focal loss function is proposed to solve this issue.
The focal loss is defined as below:

FL(p_t) = -α_t (1 - p_t)^γ log(p_t).

A weighting factor α is incorporated for class 1 and 1 - α for class -1; by analogy with p_t, α_t denotes the factor corresponding to the ground-truth class. As in the Cross Entropy (CE) loss, p_t represents the estimated probability for the ground-truth class. The focusing parameter γ controls the speed at which easy examples are down-weighted. Previously, with the default configuration, equal probability is given to the binary classifier to output either y = -1 or 1 when initialized. In that case, because of the class imbalance, the loss generated by the proportionally dominant class contributes more to the total loss and leads to failure to converge in training. So, in order to further prevent this instability, a 'prior' variable π is introduced, through which the probability estimated by the model for the rare foreground class can be set low, such as π = 0.01, at the beginning of training. This pre-configuration helps the system avoid diverging in training.
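As a concrete illustration, the focal loss above can be sketched in a few lines of Python. The defaults α = 0.25 and γ = 2.0 are the values commonly reported for the focal loss, not necessarily the setting used in this paper's experiments (see Table I):

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t).

    p is the estimated probability for the class y = 1, and y is the
    ground-truth label in {-1, 1}.
    """
    if y == 1:
        p_t, alpha_t = p, alpha
    else:
        p_t, alpha_t = 1.0 - p, 1.0 - alpha
    # The (1 - p_t)**gamma factor shrinks toward zero for well-classified
    # examples (p_t close to 1), down-weighting easy samples.
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)
```

With alpha = 1 and gamma = 0 the function reduces to the ordinary cross entropy -log(p_t), while for gamma > 0 a well-classified example contributes almost nothing to the total loss.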
III Experiment
The dataset we use for the experiments is the DETRAC vehicle dataset [7]. The dataset includes video taken both in daytime and at night, and it covers different weather conditions, such as sunny, cloudy and rainy situations. Four vehicle categories are defined in the dataset: car, bus, van and others (trucks and vehicles with trailers are put in the others group). The algorithm we use is the RetinaNet proposed in [6]. RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors on the MS COCO dataset. The RetinaNet network architecture uses a Feature Pyramid Network (FPN) backbone on top of a feedforward ResNet architecture
to generate a rich, multi-scale convolutional feature pyramid. The base ResNet models are pre-trained on ImageNet. For the final convolutional layer of the classification subnet, we set the bias initialization to b = -log((1 - π)/π) with π = 0.01 in our experiments. As explained previously, this initialization strategy prevents the large number of background anchors from generating a large, diverging loss value in training.
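The prior-based bias initialization can be sketched as follows. This is a minimal sketch with hypothetical function names; π = 0.01 follows the text:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def classification_bias(prior=0.01):
    # Choose the final-layer bias b so that sigmoid(b) = prior, i.e.
    # b = -log((1 - prior) / prior); every anchor then starts out
    # predicting foreground with low probability `prior`, so the many
    # background anchors no longer dominate the initial loss.
    return -math.log((1.0 - prior) / prior)
```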
The code was implemented with MXNet and run on a server equipped with two 10-core Intel Xeon E5-2630 CPUs and an NVIDIA Tesla K80 GPU. In our experiments, RetinaNet is trained with stochastic gradient descent (SGD). Unless otherwise specified, all models are trained for 110k iterations with an initial learning rate of 0.01, which is divided by 10 at 70k iterations and again at 90k iterations. We use horizontal image flipping as the only form of data augmentation unless otherwise noted. A weight decay of 0.0001 and a momentum of 0.9 are used.
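The step schedule above can be expressed as a small helper (a sketch under the stated training configuration; the function name is ours):

```python
def learning_rate(iteration, base_lr=0.01, drop_points=(70_000, 90_000), factor=0.1):
    # Step schedule from the text: start at 0.01 and divide by 10
    # at 70k iterations and again at 90k iterations.
    lr = base_lr
    for point in drop_points:
        if iteration >= point:
            lr *= factor
    return lr
```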
The focal loss is used as the loss on the output of the classification subnet in RetinaNet. Heuristically, we find a range of α and γ settings robust for RetinaNet; the combination that works best in our experiment is shown in Table I.
To explore the effect of the focal loss further, we analyze the distribution of the loss for a converged training model. For the training configuration, we select RetinaNet with the ResNet-101 architecture and our best γ setting (which obtained 73.79 mAP). We apply this model to a large number of randomly chosen testing images and record the predicted probabilities for a large amount of negative and positive samples. We collect the focal loss for these negative and positive samples and normalize the sum of the loss for each group to one. The normalized loss is then sorted from low to high to obtain the Cumulative Distribution Function (CDF). CDFs for positive and negative samples with different settings of γ are shown in Figure 1. According to the foreground-sample result shown in Figure 1(a), various settings of γ have only a minor effect on the CDF: around 18% of the hardest positive samples account for roughly half of the positive loss, and as γ increases, more of the loss gets concentrated in this top 18% of examples, but the influence is trivial. However, as shown in Figure 1(b), the settings of γ affect negative samples significantly. For γ = 0, the positive and negative CDFs look fairly similar, but as γ becomes larger, considerably more weight is placed on the hard negative samples. With our training setting of γ, the broad majority of the loss comes from a small portion of examples. This helps show that the focal loss can attenuate the impact of easy negatives, directing attention toward the hard negative samples.
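The normalization step used in this analysis can be sketched as follows (a minimal Python sketch; the function name is ours):

```python
def normalized_loss_cdf(losses):
    # Sort the per-sample losses from low to high, normalize so the
    # group's total loss sums to one, and accumulate to obtain a
    # cumulative distribution over the sorted samples.
    total = sum(losses)
    cdf, running = [], 0.0
    for loss in sorted(losses):
        running += loss / total
        cdf.append(running)
    return cdf
```

A steep final segment of the resulting curve indicates that most of the loss is concentrated in the hardest few samples, which is the pattern observed for negative samples at large γ.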
To compare with previous work, we deployed three methods: SSD, Faster R-CNN and RetinaNet. As introduced previously, SSD and RetinaNet are one-stage object detectors, while Faster R-CNN is a two-stage object detector. We chose either 50 or 101 as the ResNet depth. The accuracy and speed results are shown in Table II. The focal loss based RetinaNet achieves higher accuracy than the representative two-stage object detector, Faster R-CNN. In addition, RetinaNet runs much faster than the two-stage object detector in terms of inference speed. Figure 2 depicts RetinaNet detection results on the DETRAC dataset under different illumination and weather conditions. Figure 2(a) and Figure 2(b) show the detection results in different periods of the day, reflecting different lighting conditions: daytime and nighttime. Figure 2(c) and Figure 2(d) show the detection results under different weather conditions: sunny weather and rainy weather. RetinaNet with the focal loss performs well in all these environments.
IV Conclusion
In this study, we categorized the latest research on Convolutional Neural Network based object detectors into two groups: one-stage and two-stage object detectors. As shown by the experimental results, RetinaNet, a one-stage object detector, is able to achieve state-of-the-art performance for vehicle detection compared with two-stage object detectors. The incorporated focal loss function, which resolves the critical class imbalance issue of normal one-stage object detectors, gives rise to the performance boost. More vehicle-based patterns may be explored to improve one-stage object detectors further.
-  P. Dollár, R. Appel, S. Belongie, and P. Perona, “Fast feature pyramids for object detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 8, pp. 1532–1545, 2014.
-  P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan, “Object detection with discriminatively trained part-based models,” IEEE transactions on pattern analysis and machine intelligence, vol. 32, no. 9, pp. 1627–1645, 2010.
-  L. Yang, P. Luo, C. Change Loy, and X. Tang, “A large-scale car dataset for fine-grained categorization and verification,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 3973–3981.
-  S. Rujikietgumjorn and N. Watcharapinchai, “Vehicle detection with sub-class training using r-cnn for the ua-detrac benchmark,” in Advanced Video and Signal Based Surveillance (AVSS), 2017 14th IEEE International Conference on. IEEE, 2017, pp. 1–5.
-  Y. Zhou, H. Nejati, T.-T. Do, N.-M. Cheung, and L. Cheah, “Image-based vehicle analysis using deep neural network: A systematic study,” in Digital Signal Processing (DSP), 2016 IEEE International Conference on. IEEE, 2016, pp. 276–280.
-  T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár, “Focal loss for dense object detection,” arXiv preprint arXiv:1708.02002, 2017.
-  L. Wen, D. Du, Z. Cai, Z. Lei, M. Chang, H. Qi, J. Lim, M.-H. Yang, and S. Lyu, “Detrac: A new benchmark and protocol for multi-object tracking,” arXiv preprint arXiv:1511.04136, 2015.
-  Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel, “Backpropagation applied to handwritten zip code recognition,” Neural computation, vol. 1, no. 4, pp. 541–551, 1989.
-  R. Vaillant, C. Monrocq, and Y. Le Cun, “Original approach for the localisation of objects in images,” IEE Proceedings-Vision, Image and Signal Processing, vol. 141, no. 4, pp. 245–250, 1994.
-  P. Viola and M. Jones, “Rapid object detection using a boosted cascade of simple features,” in Computer Vision and Pattern Recognition, 2001. CVPR 2001. Proceedings of the 2001 IEEE Computer Society Conference on, vol. 1. IEEE, 2001, pp. I–I.
-  P. Dollár, Z. Tu, P. Perona, and S. Belongie, “Integral channel features,” 2009.
-  N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” in Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, vol. 1. IEEE, 2005, pp. 886–893.
-  P. F. Felzenszwalb, R. B. Girshick, and D. McAllester, “Cascade object detection with deformable part models,” in Computer vision and pattern recognition (CVPR), 2010 IEEE conference on. IEEE, 2010, pp. 2241–2248.
-  M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman, “The pascal visual object classes (voc) challenge,” International journal of computer vision, vol. 88, no. 2, pp. 303–338, 2010.
-  A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in neural information processing systems, 2012, pp. 1097–1105.
-  J. R. Uijlings, K. E. Van De Sande, T. Gevers, and A. W. Smeulders, “Selective search for object recognition,” International journal of computer vision, vol. 104, no. 2, pp. 154–171, 2013.
-  R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich feature hierarchies for accurate object detection and semantic segmentation,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2014, pp. 580–587.
-  K. He, X. Zhang, S. Ren, and J. Sun, “Spatial pyramid pooling in deep convolutional networks for visual recognition,” IEEE transactions on pattern analysis and machine intelligence, vol. 37, no. 9, pp. 1904–1916, 2015.
-  R. Girshick, “Fast r-cnn,” in Proceedings of the IEEE international conference on computer vision, 2015, pp. 1440–1448.
-  S. Ren, K. He, R. Girshick, and J. Sun, “Faster r-cnn: Towards real-time object detection with region proposal networks,” IEEE transactions on pattern analysis and machine intelligence, vol. 39, no. 6, pp. 1137–1149, 2017.
-  A. Shrivastava, R. Sukthankar, J. Malik, and A. Gupta, “Beyond skip connections: Top-down modulation for object detection,” arXiv preprint arXiv:1612.06851, 2016.
-  T.-Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie, “Feature pyramid networks for object detection,” arXiv preprint arXiv:1612.03144, 2016.
-  J. Dai, Y. Li, K. He, and J. Sun, “R-fcn: Object detection via region-based fully convolutional networks,” in Advances in neural information processing systems, 2016, pp. 379–387.
-  K. He, G. Gkioxari, P. Dollár, and R. Girshick, “Mask r-cnn,” arXiv preprint arXiv:1703.06870, 2017.
-  J. Dai, H. Qi, Y. Xiong, Y. Li, G. Zhang, H. Hu, and Y. Wei, “Deformable convolutional networks,” arXiv preprint arXiv:1703.06211, 2017.
-  P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun, “Overfeat: Integrated recognition, localization and detection using convolutional networks,” arXiv preprint arXiv:1312.6229, 2013.
-  C.-Y. Fu, W. Liu, A. Ranga, A. Tyagi, and A. C. Berg, “Dssd: Deconvolutional single shot detector,” arXiv preprint arXiv:1701.06659, 2017.
-  W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg, “Ssd: Single shot multibox detector,” in European conference on computer vision. Springer, 2016, pp. 21–37.
-  J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 779–788.
-  J. Redmon and A. Farhadi, “Yolo9000: better, faster, stronger,” arXiv preprint arXiv:1612.08242, 2016.
-  J. Huang, V. Rathod, C. Sun, M. Zhu, A. Korattikara, A. Fathi, I. Fischer, Z. Wojna, Y. Song, S. Guadarrama et al., “Speed/accuracy trade-offs for modern convolutional object detectors,” arXiv preprint arXiv:1611.10012, 2016.
-  K.-K. Sung, “Learning and example selection for object and pattern detection,” 1996.
-  A. Shrivastava, A. Gupta, and R. Girshick, “Training region-based object detectors with online hard example mining,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 761–769.
-  S. Rota Bulo, G. Neuhold, and P. Kontschieder, “Loss max-pooling for semantic image segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 2126–2135.
-  T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, “Microsoft coco: Common objects in context,” in European conference on computer vision. Springer, 2014, pp. 740–755.
-  K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770–778.
-  T. Chen, M. Li, Y. Li, M. Lin, N. Wang, M. Wang, T. Xiao, B. Xu, C. Zhang, and Z. Zhang, “Mxnet: A flexible and efficient machine learning library for heterogeneous distributed systems,” arXiv preprint arXiv:1512.01274, 2015.