Dynamic Approach for Lane Detection using Google Street View and CNN

09/02/2019, by Rama Sai Mamidala, et al.

Lane detection algorithms have been key enablers for fully-assistive and autonomous navigation systems. In this paper, a novel and pragmatic approach for lane detection is proposed using a convolutional neural network (CNN) model based on the SegNet encoder-decoder architecture. The encoder block renders low-resolution feature maps of the input, and the decoder block provides pixel-wise classification from the feature maps. The proposed model has been trained on a 2000-image data-set and tested against the corresponding ground-truth provided in the data-set for evaluation. To enable real-time navigation, we extend the model's predictions by interfacing it with existing Google APIs, and evaluate the model's metrics while tuning the hyper-parameters. The novelty of this approach lies in the integration of the existing SegNet architecture with Google APIs; this interface makes it handy for assistive robotic systems. The observed results show that the proposed method is robust under challenging occlusion conditions, due to the pre-processing involved, and gives superior performance compared to existing methods.




I Introduction

Recently, most high-end cars come with advanced features like semi-autonomous driving systems, vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication, automatic emergency braking, self-parking, night vision and backup cameras. The underlying motivation is this intervention of machines in human driving, making it freer from traffic hassle and accidents. Five of the six levels of autonomy (L1-L5) defined by the NHTSA (US National Highway Traffic Safety Administration) have lane detection as the primary requirement [24], even in occluded scenes. Tesla's Autopilot and the Mercedes E-Class's Drive Pilot are some of the best examples of breakthroughs in semi-autonomous driving systems. Allowing the vehicle to access the location of the system is a foundation for any navigational system.

Many of the proposed techniques for lane detection use traditional computer vision and image processing techniques [7]. They often incorporate highly-specialized, hand-crafted features, but at the cost of poor computational and run-time complexity. The introduction of deep neural networks makes it easy to incorporate a larger number of feature maps for a more efficient model [7]. The convolutional neural network (CNN) is the basic deep-net model, with a wide scope of architectural variations based on the architectural parameters [13]. We interface our deep-net models with existing Google APIs, such as the Google Maps, Street View and geolocation APIs, to access real-time data for navigation; this is one of the major contributions of this paper. The models obtained through deep nets are quite efficient when trained on a high-performance GPU (graphics processing unit). A cluster-based GPU has been used for training our networks using the SLURM management system, as in [15].

The retrieval of images from Google Street View is a powerful alternative to camera-based images in certain applications, such as mapping the coverage of plantation in a region or detecting traffic signs on the lane and taking steps to improve awareness. Moreover, for the early stages of development, huge image data-sets are available for many countries all over the world (Fig. 2), which can be accessed easily and efficiently for further development, as depicted in this paper.

Google APIs are integrated with our model to access real-time data, using a proper authentication key and certain Python libraries (such as geolocate and google_streetview). To demonstrate the efficacy of SegNet, we present a real-time approach to lane segmentation for assistive navigation, which can be scaled to a larger number of classes for autonomous navigation [17] based on the use-case. The data-set utilized for training the proposed network was created by researchers from the University of Wollongong, Australia [1]. We trained our deep-net models by tuning the hyper-parameters and monitoring the learning curves (Fig. 3 and Fig. 4) for proper convergence. We evaluated the models in real time based on the mean-squared error between the lane-segmented and the expected lane images.

II Existing Methods

In the last two decades, there has been huge interest in automatic lane detection systems [3, 12, 16]. Though there are many existing methods for lane detection on marked roads [2] and even on unmarked roads [4, 5], they do not consider real-time obstacles such as occlusion on the roads, bad weather conditions, and varied-intensity images. Before the introduction of deep networks, some of the best-performing models relied on machine-learning classifiers such as the Gaussian mixture model (GMM), the support vector machine (SVM) and the multi-layered perceptron classifier (MLPC) [7, 20]. The supervised classifiers SVM and MLPC both follow a back-propagation approach [7] for the convergence of the learning rate (alpha) and the weights of the hidden layers, respectively. Random forest techniques have been used along with certain boosting techniques, as shown in [23], but they are less preferred than CNNs, since CNNs capture and learn more features than any random-forest algorithm with variant boosting techniques, as shown in [20]. Further, for the initialization of the weights and learning rate, Xavier initialization, Eq. (1), is used, which follows an intuitive approach of finding the distribution of the random weights. Several dynamic approaches [17, 18] have been introduced depending on the type of application, of which lane detection using OpenStreetMap (OSM) [8] is the most widely used, obtaining the test image from geographical data. However, OSM cannot provide the current location of the system, as shown in [21], which makes it incapable of obtaining a test image at the system's current location. This drawback can be overcome by using certain Google APIs, through which the current location of the system can be obtained.

Fig. 1: Illustration of the SegNet architecture with the input image and the final expected output image.

III Proposed Method

III-A SegNet Model

Unlike the models [1, 2, 3, 4] proposed so far, CNNs allow networks to have fewer weights (as these parameters are shared), and convolutions are a very effective tool for image processing. As with image classification, convolutional neural networks (CNNs) work well on semantic-segmentation problems [9], even when the background has features identical to the lane. Hence, the current problem of lane segmentation, which falls under semantic segmentation, can be solved effectively using an encoder-decoder SegNet architecture, as shown in Fig. 1. The model consists of clusters of layers, where each encoder cluster contains a convolutional layer, an activation (rectified linear unit, ReLU, which maps the input through a non-linear function and provides the scope to learn more complex functions) and a pooling layer; the encoding block is followed by a de-convolution block (a cluster of up-sampling, activation and convolutional layers). The main feature of SegNet is the use of max-pooling indices in the decoders to perform up-sampling of low-resolution feature maps. This retains high-frequency details in the segmented images and also reduces the total number of trainable parameters in the decoders. Training from scratch starts with proper initialization of the weights, using the proposed Xavier initialization, since no trained weights are available at the beginning.

where:
w_i - indicating the weights of the i-th kernel,
b - indicating the bias/threshold of the respective neuron, and
y - indicating the linear combination of the weight matrix (w_i) and the input tensor (x_i), which can be represented as follows:

y = Σ_i w_i x_i + b    (1)

Equating the variance along x and y, we obtain the following equation:

Fig. 2: Availability of Google Street View images as of 2018 (source: http://support.google.com).

Var(w_i) = 1/N    (2)

where N indicates the dimension of the input, and the weights are randomly chosen from a Gaussian distribution whose variance is given by the result obtained in Eq. (2).
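The initialization scheme above can be sketched in NumPy; the layer sizes below are illustrative, not taken from the paper:

```python
import numpy as np

def xavier_init(n_in, n_out, seed=0):
    """Draw weights from a zero-mean Gaussian with Var(w) = 1/N,
    where N is the fan-in, per Eq. (2)."""
    rng = np.random.default_rng(seed)
    std = np.sqrt(1.0 / n_in)
    return rng.normal(0.0, std, size=(n_in, n_out))

# With Var(w) = 1/N, the variance of y = sum_i w_i x_i matches Var(x),
# keeping activations from exploding or vanishing across layers.
W = xavier_init(1024, 64)
```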

III-B Training the Model and Tuning the Hyper-parameters

The model has been trained using the dataset created by researchers from the University of Wollongong. The dataset contains 2000 RGB images. We randomly selected around 1500 high-variance images for training and used the rest for performance evaluation under different hyper-parameter settings. To reduce the time-complexity, the images are down-sampled to a lower resolution. Since lane classification is a binary-class problem, the whole data-set is one-hot encoded and saved before training the model. The efficiency and the time-complexity of the model depend on the hyper-parameters, i.e., the number of epochs and the batch-size: the batch-size corresponds to the number of images that enter the network at a time and is a vital factor for the time-complexity, while the number of epochs corresponds to the number of times the whole data-set traverses the network, chosen to avoid under-fitting and to enhance training efficiency. We have trained the network using the open-source deep learning framework Keras [14] on an online Odyssey-cluster-based GPU [15]. The proposed model has first been trained with 300 and 500 images for 30 and 40 epochs respectively, at a constant batch-size (20). Finally, the whole data-set (1500 images) was trained for 80 and 115 epochs at the same constant batch-size, to examine the variation in efficiency and time-complexity; the architectural parameters are stored as JSON and the weights at each layer in hierarchical data format (HDF5).
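The one-hot encoding step for the binary (lane / non-lane) ground truth can be sketched as follows; the toy mask and helper name are our own illustration:

```python
import numpy as np

def one_hot_mask(mask, num_classes=2):
    """Convert an HxW integer label mask into an HxWxC one-hot tensor,
    the target format used when training a pixel-wise classifier."""
    return np.eye(num_classes)[mask]

# A toy 2x3 ground-truth mask: 1 = lane pixel, 0 = background.
mask = np.array([[0, 1, 1],
                 [0, 0, 1]])
target = one_hot_mask(mask)  # shape (2, 3, 2): one channel per class
```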

Fig. 3: Learning curves (validation accuracy in red and training accuracy in blue) corresponding to the model with 500 training images.
Fig. 4: Learning curves (validation accuracy in red and training accuracy in blue) corresponding to the model with 1500 training images.

III-C The Dynamic Approach for Segmentation

After training the model with the available data-set of 1500 distinct images on the GPU, the weights of the trained model are saved and the model is evaluated to obtain the confusion-matrix metrics, as shown in Table II, and the test-accuracy using the loss function (mean-squared error) given in Eq. (3).

Assistive or autonomous navigation of a robotic system is possible only with real-time data, which can be obtained from knowledge of the system's location, apart from using camera-based real-time data [10, 11]. Google APIs provide access to this real-time data given a proper authentication key. For the present model, geo-location-based lane segmentation can be achieved using the OSM API [14] or the Google Maps API, of which OSM is the most widely accepted for this job. However, the current location of the system, which cannot be obtained using OSM, can be obtained using certain Google APIs. Hence, at the current stage, the Google geolocation, Google Maps and Google Street View APIs have been used to obtain the street-view image of the system based on its geographical location, as shown in Fig. 5. In this approach, Google geolocation is used to obtain the current location, followed by the Google Maps API to obtain its geographical information, and then the Google Street View API to obtain the test image, which is fed to the saved SegNet model to obtain a lane-segmented image.
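The last step of this retrieval chain can be sketched as follows. The query parameters follow the public Google Street View Static API, but the key and coordinates are placeholders, and this is our illustrative sketch rather than the authors' code:

```python
from urllib.parse import urlencode

STREETVIEW_ENDPOINT = "https://maps.googleapis.com/maps/api/streetview"

def streetview_url(lat, lng, api_key, size="640x640", heading=0):
    """Build the Street View Static API request for a test image at the
    system's current location (obtained earlier via the geolocation API)."""
    params = {
        "location": f"{lat},{lng}",
        "size": size,        # requested image resolution
        "heading": heading,  # camera direction in degrees
        "key": api_key,      # developer authentication key
    }
    return f"{STREETVIEW_ENDPOINT}?{urlencode(params)}"

# The returned image would then be down-sampled and passed through the
# saved SegNet model; actually fetching it requires a valid key, e.g. via
# urllib.request.urlopen(streetview_url(...)).
url = streetview_url(-34.4054, 150.8784, "YOUR_API_KEY")
```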

Fig. 5: The street-view image (right) obtained based on the location (left) of the system.

III-D Performance Evaluation

Validation accuracy has been evaluated at each and every epoch to monitor the training progress of the model, using the mean-squared error:

MSE = (1 / (N * M^2)) * Σ_k Σ_{i,j} (f_{ij}^{(k)} - y_{ij}^{(k)})^2    (3)

where:
N - indicates the count of the test data-set,
M - indicates the dimension of the input kernel, and
f, y - indicate the pixel values of the expected and observed images, respectively.

To quantify the accuracy, the confusion matrix [22] was obtained, from which the true positives (n_TP), false positives (n_FP), false negatives (n_FN) and true negatives (n_TN) are extracted to evaluate the precision (4), recall (5), accuracy (6) and F-measure (7) using the following equations:

Precision = n_TP / (n_TP + n_FP)    (4)
Recall = n_TP / (n_TP + n_FN)    (5)
Accuracy = (n_TP + n_TN) / (n_TP + n_FP + n_FN + n_TN)    (6)
F-measure = 2 * Precision * Recall / (Precision + Recall)    (7)
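Computing these metrics from pixel-wise counts is straightforward; a minimal sketch (the counts below are illustrative, not the paper's results):

```python
def confusion_metrics(tp, fp, fn, tn):
    """Precision, recall, accuracy and F-measure from pixel-wise
    confusion-matrix counts, per Eqs. (4)-(7)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, accuracy, f_measure

# Illustrative counts for one batch of test pixels.
p, r, a, f = confusion_metrics(tp=80, fp=20, fn=20, tn=80)
```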

Training data-set Epochs Testing Data-set Test-accuracy
300 30 100 83.4 %
500 40 300 85.5 %
1500 80 500 94.7 %
1500 115 500 96.1 %
TABLE I: Performance of the current Segnet model
Epochs (for 1500-image data-set) Precision Recall Test-accuracy F-measure
30 0.8990 0.8732 0.8340 0.9346
40 0.9334 0.8662 0.8551 0.9495
80 0.9754 0.7246 0.9470 0.9493
115 0.9889 0.4042 0.9610 0.9445
TABLE II: Comparison of metrics across different epochs

Similarly, certain distinct test data-sets have been evaluated with different hyper-parameters to obtain the overall accuracy of the model, and the results are tabulated in Table I. We observed a significant improvement over existing methods such as SVM and MLPC, as shown in Fig. 6, and over a few other adaptive algorithms, as in [1, 2].

Fig. 6: The results for a distinct set of images (col. 1) are shown in the above block of figures, with a quite noticeable improvement in the results obtained through CNN (col. 5), compared to SVM (col. 4) and MLPC (col. 3).
Fig. 7: Results representing the dynamic approach, with Google Street View images (row 1) in the premises of our institute, and the observed lane-segmented images (row 2).
Fig. 8: Results (row 3) obtained using the CNN model on certain exceptional images (row 1), with curves, tiles, an indoor image and an image with illumination variation (from left to right), along with the corresponding ground-truths (row 2).

IV Results and Discussion

To start with, we used a smaller test set of 100 images on the model trained with 300 images (Table I), on a system with an i7 processor, 8 GB RAM and a 4 GB Nvidia GeForce GPU. However, the time-complexity did not reduce compared to the previous models, so the model was implemented on an 8-core cluster-based online (cloud) GPU with low time-complexity. The whole record of validation and training accuracies at each epoch, any error data, and the final results can be viewed and accessed online using the credentials. With this support, the model has been trained on data-sets of 500 and 1500 images with their labelled ground-truth data, followed by validation on 200 and 300 images respectively (Table I); the test-accuracy is obtained using the remaining 300 and 500 images respectively. As depicted in Table II, we notice that recall descends with the data-set size, unlike precision. This is because the model performs pixel-wise classification of occluded images, which is of very high order per image; as a result, the recall saturates at 0.4042 for the available data-set, which is not the best achievable value. Finally, both the training and validation accuracies converged to optimal values (as in Fig. 3 and Fig. 4). Fig. 6 and Fig. 8 show a distinct set of images with occlusions; in these, we can observe certain noise (like salt-and-pepper) in the results obtained using SVM and MLPC. This is caused by inefficient training of those models, due to their high time-complexity even for small data-sets. As tabulated in Table I, effective training with larger data-sets and better efficiency (~97%) is made possible with CNNs. The images obtained through the set of Google APIs in the surroundings of our institute (Fig. 7) have been tested, with a correlation coefficient approximately equal to 1.

V Conclusions

Most of the recently published work on lane detection is based on marked roads, and even the reported work on unmarked roads does not consider real-time occlusions and is not sufficiently dynamic for real-time implementation. In this paper, the SegNet architecture has been compared with other variants to reveal the practical trade-offs involved in designing architectures for segmentation. A dynamic lane detection approach that uses real-time street images based on the geo-location of the system, through the Google Street View API, has been proposed. This approach of interfacing our deep learning model with the Google Street View API is more advantageous than OSM, due to its ability to obtain the current location of the system. The proposed model has the scope for extension to future driver-assistance systems when trained with multiple classes for effective lane and obstacle segmentation. Evaluating different models over random feature sets and training data, and converging them to a single stable/adaptive model, can be made possible when trained on GPUs like the Nvidia TITAN V/X. This work can also be extended to build an electronic cane for blind people, using the data obtained from a camera attached to the cane.


  • [1] S. Phung, M. C. Le and A. Bouzerdoum, "Pedestrian lane detection in unstructured scenes for assistive navigation", Computer Vision and Image Understanding, vol. 149, pp. 186-196, 2016.
  • [2] Shehan Fernando, Lanka Udawatta, Ben Horan and Pubudu Pathirana, "Real-time Lane Detection on Suburban Streets using Visual Cue Integration", International Journal of Advanced Robotic Systems, 17 Jan. 2014.
  • [3] J. D. Crisman and C. E. Thorpe, "SCARF: A color vision system that tracks roads and intersections", IEEE Trans. on Robotics and Automation, 9(1):49-58, Feb. 1993.
  • [4] D. K. Savitha and Subrata Rakshit, "Gaussian Mixture Model based road signature classification for robot navigation", Emerging Trends in Robotics and Communication Technologies (INTERACT), IEEE, 2010.
  • [5] Hao Zhang, Dibo Hou and Zekui Zhou, "A Novel Lane Detection Algorithm Based on Support Vector Machine", Progress in Electromagnetics Research Symposium, Hangzhou, China, August 22-26, 2005.
  • [6] Shengyan Zhou, Jianwei Gong and Guangming Xiong, "Road detection using support vector machine based on online learning and evaluation", IEEE Intelligent Vehicles Symposium (IV), June 2010.
  • [7] Cristian Tudor Tudoran and Victor-Emil Neagoe, "A New Neural Network Approach for Visual Autonomous Road Following", Bucharest.
  • [8] Yan Jiang, Feng Gao and Guoyan Xu, "Computer vision based multiple lane detection on straight road and in curve", Beijing, China, Jan. 2011.
  • [9] Vijay Badrinarayanan, Alex Kendall and Roberto Cipolla, "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation", 8 Dec. 2015.
  • [10] Alexandru Gurghian, Tejaswi Koduri, Kyle J. Carey and Vidya N. Murali, "End-To-End Lane Position Estimation using Deep Neural Networks", Computer Vision and Pattern Recognition, IEEE, 2016.
  • [11] Jihun Kim and Minho Lee, "Robust Lane Detection Based on Convolutional Neural Network and Random Sample Consensus", Springer International Conference on Neural Information Processing, 2014.
  • [12] Jonathan Long, Evan Shelhamer and Trevor Darrell, "Fully Convolutional Networks for Semantic Segmentation", cs.berkeley.edu.
  • [13] Image processing techniques from a book available on [Online]: ”gitbooks.io/artificial-inteligence/content/imagesegmentation”.
  • [14] Keras Documentation (https://keras.io/) and OpenStreetMap Documentation.
  • [15] Simple Linux utility for resource management (SLURM), Available at [Online]: https://www.rc.fas.harvard.edu/resources/running-jobs/.
  • [16] Fernando Arce, Erik Zamora, Gerardo Hernández and Humberto Sossa, "Efficient Lane Detection Based on Artificial Neural Networks", 2nd International Conference on Smart Data and Smart Cities, 4-6 October 2017, Puebla, Mexico.
  • [17] Fisher, Inside Google’s Quest To Popularize Self-Driving Cars, Popular Science article, Available at: http://www.popsci.com/cars/article/2013-09/google-self-driving-car.
  • [18] Hao Li, Fawzi Nashashibi, ”Robust real-time lane detection based on lane mark segment features and general a-prior knowledge”, International Conference on Robotics and Biomimetrics, December 7-11, 2011, Phuket, Thailand.
  • [19] J. E. Shoenfelt, C. C. Tappert and A. J. Goetze, "Techniques for Efficient Encoding of Features in Pattern Recognition", Dept. of Electrical Engg., North Carolina State University, IEEE Computer Society, March 2011.
  • [20] Fernando Arce, Erik Zamora, Gerardo Hernández and Humberto Sossa, "Efficient Lane Detection Based on Artificial Neural Networks", 2nd International Conference on Smart Data and Smart Cities, 4-6 October 2017, Puebla, Mexico.
  • [21] Aleks Buczkowski, "Comparison between Google Maps API and OpenStreetMap (OSM) API for geographical location", http://geoawesomeness.com.
  • [22] Peter Flach, Performance Evaluation in Machine Learning: The Good, The Bad, The Ugly and The Way Forward, Intelligent Systems Laboratory, University of Bristol, 2019, The Alan Turing Institute, London, UK.
  • [23] Liang Xiao, Bin Dai, Daxue Liu, Dawei Zhao, Tao Wu,”Monocular Road Detection Using Structured Random Forest”, International journal for advanced robotics, First published January 1,2016.
  • [24] Péter Szikora and Nikolett Madarász, "Self-driving cars — The human side", 14th IEEE International Scientific Conference on Informatics, Óbuda University, Keleti Faculty of Business and Management, Budapest, Hungary, November 2017.