Most recent high-end cars come with advanced features such as semi-autonomous driving, vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication, automatic emergency braking, self-parking, night vision, and backup cameras. The underlying motivation is that this intervention of machines in human driving frees it from traffic hassle and accidents. Five of the six levels of autonomy (L1-L5) defined by the NHTSA (US National Highway Traffic Safety Administration) have lane detection as a primary requirement, even in occluded scenes. Tesla's Autopilot and the Mercedes E-Class's Drive Pilot are among the best examples of breakthroughs in semi-autonomous driving systems. Allowing the vehicle to access its own location is the foundation of any navigation system.
Many of the proposed techniques for lane detection use traditional computer vision and image processing. They often incorporate highly specialized, hand-crafted features, but suffer from poor computational and run-time complexity. Deep neural networks make it easy to incorporate a larger number of feature maps for a more efficient model. The convolutional neural network (CNN) is the basic deep-net model, with a wide scope of architectural variations depending on its parameters. We interface our deep-net models with existing Google APIs, such as the Google Maps, Street View, and Geolocation APIs, to access real-time data for navigation; this is one of the major contributions of this paper. Deep-net models are quite efficient when trained on a high-performance GPU (graphics processing unit). A cluster-based GPU was used to train our networks under the SLURM management system.
Retrieving images from Google Street View is a powerful alternative to camera-based images in certain applications, such as mapping the coverage of plantation in a region or detecting traffic signs on the lane and taking steps to improve awareness. Especially in the earlier stages of development, huge image data-sets covering many countries around the world are available and can be accessed easily and efficiently for further development, as depicted in this paper.
Google APIs are integrated with our model to provide real-time data, using a proper authentication key and certain Python libraries (such as geolocate and google_streetview). To demonstrate the efficacy of SegNet, we present a real-time lane-segmentation approach for assistive navigation, which can be scaled to more classes for autonomous navigation depending on the use case. The data-set used to train the proposed network was created by researchers from the University of Wollongong, Australia. We trained our deep-net models by tuning the hyper-parameters and monitoring the learning curves (Figs. 3 and 4) for proper convergence. We evaluated the models in real time based on the mean-squared error between the lane-segmented and the expected lane images.
II Existing Methods
In the last two decades, there has been huge interest in automatic lane detection systems [3, 12, 16]. Although many methods exist for lane detection on marked roads, and even on unmarked roads [4, 5], they do not consider real-time obstacles such as occlusion on the roads, bad weather conditions, and images of varying intensity. Before the introduction of deep networks, some of the best-performing models relied on machine-learning classifiers such as the Gaussian mixture model (GMM), the support vector machine (SVM), and the multi-layer perceptron classifier (MLPC) [7, 20]. The supervised classifiers SVM and MLPC both follow a back-propagation approach for the convergence of the learning rate (alpha) and the weights of the hidden layers, respectively. Random forest techniques have been combined with certain boosting techniques, but because CNNs capture and learn far more features than any random-forest algorithm with boosting variants, the latter are less preferred. Further, for the initialization of the weights and learning rate, Xavier initialization, Eq. (1), is used, which follows an intuitive approach of finding the distribution of the random weights. Several dynamic approaches [17, 18] have been introduced depending on the type of application; among them, lane detection using OpenStreetMap (OSM) is the most widely used way to obtain a test image from geographical data. However, OSM cannot provide the current location of the system, which makes it incapable of obtaining a test image at the system's current position. This drawback can be overcome with certain Google APIs, through which the current location of the system can be obtained.
III Proposed Method
III-A SegNet Model
Unlike the models proposed so far [1, 2, 3, 4], CNNs allow networks to have fewer weights (as these parameters are shared), and they provide a very effective tool for image processing: convolutions. As with image classification, convolutional neural networks work well on semantic-segmentation problems, even when the background has features similar to the lane. Hence, the current problem of lane segmentation, which falls under semantic segmentation, can be solved effectively using an encoder-decoder SegNet architecture, as shown in Fig. 1. The model consists of clusters of layers, where each cluster contains a convolutional layer, an activation (rectified linear unit, ReLU, which introduces non-linearity and gives the model scope to learn more complex functions), and a pooling layer; the encoding block is followed by a de-convolution block (a cluster of up-sampling, activation, and convolutional layers). The main feature of SegNet is the use of max-pooling indices in the decoders to perform up-sampling of low-resolution feature maps. This retains high-frequency details in the segmented images and also reduces the total number of trainable parameters in the decoders. Training from scratch starts with proper initialization of the weights using the proposed Xavier initialization, since no learned weights are available at the beginning of training.
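The decoder's use of max-pooling indices can be illustrated with a small NumPy sketch (the helper names are ours, for illustration only): the encoder records which position each maximum came from, and the decoder places every value back at exactly that position, leaving all other positions zero for the subsequent convolutions to densify.

```python
import numpy as np

def max_pool_with_indices(x, size=2):
    """2x2 max-pooling that also records the argmax positions,
    as a SegNet encoder does."""
    h, w = x.shape
    ph, pw = h // size, w // size
    pooled = np.zeros((ph, pw))
    indices = np.zeros((ph, pw), dtype=int)  # flat index into x
    for i in range(ph):
        for j in range(pw):
            block = x[i*size:(i+1)*size, j*size:(j+1)*size]
            k = int(np.argmax(block))          # row-major argmax in block
            di, dj = divmod(k, size)
            pooled[i, j] = block[di, dj]
            indices[i, j] = (i*size + di) * w + (j*size + dj)
    return pooled, indices

def unpool_with_indices(pooled, indices, out_shape):
    """SegNet-style decoder up-sampling: each value returns to the
    position its maximum came from; other positions stay zero."""
    out = np.zeros(out_shape)
    flat = out.ravel()                         # view into out
    flat[indices.ravel()] = pooled.ravel()
    return out

x = np.array([[1., 3., 2., 0.],
              [4., 2., 1., 1.],
              [0., 1., 5., 2.],
              [2., 0., 3., 4.]])
p, idx = max_pool_with_indices(x)
y = unpool_with_indices(p, idx, x.shape)
```

Because the up-sampled map is sparse but positionally exact, boundary detail survives the decoder, which is why SegNet retains high-frequency edges better than plain up-sampling.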
where:
w_i - the weights of the i-th kernel,
b - the bias/threshold of the respective neuron, and
y - the linear combination of the weight matrix (w_i) and the input tensor (x), which can be represented as follows:

y = \sum_{i=1}^{n} w_i x_i + b

Equating the variance along 'x' and 'y', we obtain the following equation:

Var(w_i) = 1/n    (1)

where n is the number of input connections to the neuron.
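As a concrete sketch of Xavier initialization (an illustrative helper, not the authors' code): the forward-pass condition gives Var(w) = 1/n_in, its backward-pass counterpart gives 1/n_out, and the commonly used symmetric compromise samples from a uniform distribution whose variance is 2/(n_in + n_out).

```python
import numpy as np

def xavier_uniform(n_in, n_out, seed=0):
    """Xavier/Glorot uniform initializer (illustrative sketch).

    Sampling from U(-limit, +limit) with limit = sqrt(6 / (n_in + n_out))
    gives Var(w) = 2 / (n_in + n_out), which keeps the variance of the
    activations roughly constant from one layer to the next."""
    rng = np.random.default_rng(seed)
    limit = np.sqrt(6.0 / (n_in + n_out))
    return rng.uniform(-limit, limit, size=(n_in, n_out))

W = xavier_uniform(256, 128)  # weight matrix for a 256 -> 128 layer
```

In Keras this corresponds to the built-in `glorot_uniform` initializer, which is the framework's default for convolutional layers.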
III-B Training the Model and Tuning the Hyper-parameters
The model was trained using a data-set created by researchers from the University of Wollongong. The data-set contains 2000 RGB images (resolution: ). We randomly selected around 1500 high-variance images for training and used the rest for performance evaluation under different hyper-parameter settings. To reduce the time-complexity, the images are down-scaled to a lower resolution. Since lane classification is a binary-class problem, the whole data-set is one-hot encoded and saved before training the model. The efficiency and time-complexity of the model depend on the hyper-parameters, i.e., epochs and batch size. The batch size is the number of images that enter the network at a time and is a vital factor in the time-complexity; the number of epochs is the number of times the whole data-set traverses the network, which helps avoid under-fitting and improves training efficiency. We trained the network using the open-source deep learning framework Keras on an online Odyssey-cluster GPU. The proposed model was trained on 100, 300, and 500 images with 30, 30, and 40 epochs, respectively, and a constant batch size of 20. Finally, the whole data-set (1500 images) was trained with 80 and 115 epochs and the same constant batch size to examine the variation of efficiency and time-complexity; the architectural parameters are stored as JSON and the weights at each layer in hierarchical data format (HDF5).
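The one-hot encoding step mentioned above can be sketched as follows (NumPy; the helper name is ours): each binary mask pixel becomes a two-channel vector, channel 0 for background and channel 1 for lane, which matches the target format for a per-pixel softmax output.

```python
import numpy as np

def one_hot_masks(masks):
    """One-hot encode binary lane masks: an array of shape (N, H, W)
    with values in {0, 1} becomes (N, H, W, 2), where channel 0 marks
    background pixels and channel 1 marks lane pixels."""
    masks = np.asarray(masks, dtype=int)
    return np.eye(2)[masks]          # fancy indexing appends the class axis

# One 2x2 mask: top-right and bottom-left pixels belong to the lane.
m = np.array([[[0, 1],
               [1, 0]]])
encoded = one_hot_masks(m)           # shape (1, 2, 2, 2)
```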
III-C The Dynamic Approach for Segmentation
After training the model on the available data-set of 1500 distinct images over the GPU, the weights of the trained model are saved and the model is evaluated to obtain the confusion-matrix metrics, as shown in Table II, and the test accuracy using the loss function (mean-squared error) given in Eq. (3).
Assistive or autonomous navigation of a robotic system is possible only with real-time data, which can come from knowledge of the system's location in addition to camera-based sensing [10, 11]. Google APIs provide access to this real-time data given a proper authentication key. For the present model, geo-location-based lane segmentation could be achieved using the OSM API or the Google Maps API, of which OSM is the most widely accepted for this job. However, the current location of the system, which cannot be obtained from OSM, can be obtained through certain Google APIs. Hence, at the current stage, the Google Geolocation, Google Maps, and Google Street View APIs are used to obtain a street-view image of the system based on its geographical location, as shown in Fig. 5. In this approach, Google Geolocation first yields the current location, the Google Maps API then provides its geographical information, and finally the Google Street View API returns the test image, which is fed to the saved SegNet model to obtain a lane-segmented image.
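A minimal sketch of this pipeline (the endpoints are the public Google Geolocation and Street View Static web APIs; the key and helper names are placeholders, and a valid, billing-enabled API key is assumed):

```python
API_KEY = "YOUR_API_KEY"  # placeholder: substitute a real authentication key

def current_location(key=API_KEY):
    """Step 1: the Google Geolocation API estimates the device's
    latitude/longitude (OSM offers no comparable endpoint)."""
    import requests  # imported here so the URL helper below has no dependency
    r = requests.post("https://www.googleapis.com/geolocation/v1/geolocate",
                      params={"key": key}, json={})
    loc = r.json()["location"]
    return loc["lat"], loc["lng"]

def streetview_url(lat, lng, key=API_KEY, size="640x480", heading=0):
    """Step 3: build the Street View Static API request whose response
    image serves as the test image for the saved SegNet model."""
    return ("https://maps.googleapis.com/maps/api/streetview"
            f"?size={size}&location={lat},{lng}&heading={heading}&key={key}")
```

In use, `current_location()` feeds `streetview_url()`, and the downloaded image is resized to the network's input resolution before segmentation.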
III-D Performance Evaluation
Validation accuracy has been evaluated at each epoch to monitor the training progress of the model, using the mean-squared error:

MSE = \frac{1}{NM} \sum_{n=1}^{N} \sum_{m=1}^{M} (f_{nm} - y_{nm})^2    (3)

where N is the number of images in the test data-set, M is the dimension of the input kernel (pixels per image), and f, y are the pixel values of the expected and observed images, respectively.

To quantify the accuracy, the confusion matrix was obtained, from which the true positives (n_TP), false positives (n_FP), false negatives (n_FN), and true negatives (n_TN) are extracted to evaluate the precision (4), recall (5), accuracy (6), and F-measure (7) using the following equations:

Precision = n_TP / (n_TP + n_FP)    (4)
Recall = n_TP / (n_TP + n_FN)    (5)
Accuracy = (n_TP + n_TN) / (n_TP + n_FP + n_FN + n_TN)    (6)
F-measure = 2 \cdot Precision \cdot Recall / (Precision + Recall)    (7)
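These pixel-wise metrics can be computed with a short NumPy sketch (the helper names are ours; the sketch assumes at least one predicted and one expected positive so the denominators are non-zero):

```python
import numpy as np

def mse(expected, observed):
    """Mean-squared error between expected and observed images, Eq. (3)."""
    e = np.asarray(expected, dtype=float)
    o = np.asarray(observed, dtype=float)
    return np.mean((e - o) ** 2)

def confusion_metrics(expected, observed):
    """Pixel-wise confusion-matrix metrics for binary lane masks,
    following Eqs. (4)-(7): precision, recall, accuracy, F-measure."""
    e = np.asarray(expected, dtype=bool).ravel()
    o = np.asarray(observed, dtype=bool).ravel()
    tp = np.sum(e & o)            # lane pixels correctly detected
    fp = np.sum(~e & o)           # background wrongly marked as lane
    fn = np.sum(e & ~o)           # lane pixels missed
    tn = np.sum(~e & ~o)          # background correctly rejected
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, accuracy, f_measure
```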
Table I: Training data-set | Epochs | Testing data-set | Test-accuracy
Table II: Epochs (for the 1500-image data-set) | Precision | Recall | Test-accuracy | F-measure
Similarly, several distinct test data-sets were evaluated over different hyper-parameters to obtain the overall accuracy of the model; the results are tabulated in Table I. We observed a significant improvement over existing methods such as SVM and MLPC, as shown in Fig. 6, and over a few other adaptive algorithms [1, 2].
IV Results and Discussion
To start with, we used a smaller test set of 100 images on the model trained with 300 images (Table I), on a system with an i7 processor, 8 GB RAM, and a 4 GB Nvidia GeForce GPU. However, the time-complexity did not improve compared to the previous models, so the model was implemented on an 8-core cluster-based online (cloud) GPU with low time-complexity. The full record of validation and training accuracies at each epoch, any error data, and the final results can be viewed online using the credentials. With this support, the model was trained on data-sets of 500 and 1500 images with their labelled ground-truth data, validated on 200 and 300 images respectively (Table I), and tested on the remaining 300 and 500 images respectively. As depicted in Table II, recall decreases with the data-set size, unlike precision. This is because the model performs pixel-wise classification of occluded images, which is of very high order per image; the recall saturates at 0.4042, which is not the best achievable for the available data-set and hardware. Finally, both the training and validation accuracies converged to optimal values (Fig. 3 and Fig. 4). Fig. 6 and Fig. 8 show distinct sets of occluded images, in which certain noise (such as salt-and-pepper) can be observed in the results obtained with SVM and MLPC. This is caused by inefficient training of those models, owing to their high time-complexity even for small data-sets. As tabulated in Table I, effective training with larger data-sets and better efficiency (~97%) is made possible with CNNs. Images obtained through the set of Google APIs around the surroundings of our institute (Fig. 7) were tested, with the correlation coefficient approximately equal to 1.
Most recently published work on lane detection addresses marked roads, and even the reported work on unmarked roads does not consider real-time occlusions and is not dynamic enough for real-time implementation. In this paper, the SegNet architecture has been compared with other variants to reveal the practical trade-offs involved in designing architectures for segmentation. A dynamic lane detection approach that uses real-time street images based on the geo-location of the system, obtained through the Google Street View API, has been proposed. Interfacing our deep learning model with the Google Street View API is more advantageous than OSM because of its ability to obtain the current location of the system. The proposed model can be extended to future driver-assistance systems when trained with multiple classes for effective lane and obstacle segmentation. Evaluating different models on random feature sets and training data, and converging them to a single stable/adaptive model, becomes possible when training on GPUs such as the Nvidia TITAN V/X. The approach can also be extended to build an electronic cane for blind people, using data from a camera attached to the cane.
-  S. Phung, M. C. Le and A. Bouzerdoum, "Pedestrian lane detection in unstructured scenes for assistive navigation", Computer Vision and Image Understanding, vol. 149, pp. 186-196, 2016.
-  Shehan Fernando, Lanka Udawatta, Ben Horan and Pubudu Pathirana, "Real-time Lane Detection on Suburban Streets using Visual Cue Integration", International Journal of Advanced Robotic Systems, 17 Jan. 2014.
-  J. D. Crisman and C. E. Thorpe, "SCARF: A color vision system that tracks roads and intersections", IEEE Trans. on Robotics and Automation, 9(1):49-58, Feb. 1993.
-  D. K. Savitha and Subrata Rakshit, "Gaussian Mixture Model based road signature classification for robot navigation", Emerging Trends in Robotics and Communication Technologies (INTERACT), IEEE International Conference, 31 Jan. 2011.
-  Hao Zhang, Dibo Hou and Zekui Zhou, "A Novel Lane Detection Algorithm Based on Support Vector Machine", Progress In Electromagnetics Research Symposium, Hangzhou, China, August 22-26, 2005.
-  Shengyan Zhou, Jianwei Gong and Guangming Xiong, "Road detection using support vector machine based on online learning and evaluation", IEEE Intelligent Vehicles Symposium (IV), June 2010.
-  Cristian Tudor Tudorann and Victor-Emil Neagoe, "A New Neural Network Approach for Visual Autonomous Road Following", Bucharest.
-  Yan Jiang, Feng Gao and Guoyan Xu, "Computer vision based multiple lane detection on straight road and in curve", Beijing, China, Jan. 2011.
-  Vijay Badrinarayanan, Alex Kendall and Roberto Cipolla, "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation", 8 Dec. 2015.
-  Alexandru Gurghian, Tejaswi Koduri, Kyle J. Carey and Vidya N. Murali, "End-To-End Lane Position Estimation using Deep Neural Networks", Computer Vision and Pattern Recognition, IEEE, 2016.
-  Jihun Kim and Minho Lee, "Robust Lane Detection Based On Convolutional Neural Network and Random Sample Consensus", Springer International Conference on Neural Information Processing, 2014.
-  Jonathan Long, Evan Shelhamer and Trevor Darrell, "Fully Convolutional Networks for Semantic Segmentation", cs.berkeley.edu.
-  Image processing techniques, available online: gitbooks.io/artificial-inteligence/content/imagesegmentation.
-  Keras Documentation (https://keras.io/) and OpenStreetMap Documentation.
-  Simple Linux Utility for Resource Management (SLURM), available online: https://www.rc.fas.harvard.edu/resources/running-jobs/.
-  Fernando Arce, Erik Zamora, Gerardo Hernández and Humberto Sossa, "Efficient Lane Detection based on Artificial Neural Networks", 2nd International Conference on Smart Data and Smart Cities, 4-6 October 2017, Puebla, Mexico.
-  Fisher, "Inside Google's Quest To Popularize Self-Driving Cars", Popular Science, available at: http://www.popsci.com/cars/article/2013-09/google-self-driving-car.
-  Hao Li and Fawzi Nashashibi, "Robust real-time lane detection based on lane mark segment features and general a priori knowledge", International Conference on Robotics and Biomimetics, December 7-11, 2011, Phuket, Thailand.
-  J. E. Shoenfelt, C. C. Tappert and A. J. Goetze, "Techniques for Efficient Encoding of Features in Pattern Recognition", Dept. of Electrical Engg., North Carolina State University, IEEE Computer Society, March 2011.
-  Aleks Buczkowski, "Comparison between Google Maps API and OpenStreetMap (OSM) API for geographical location", http://geoawesomeness.com.
-  Peter Flach, "Performance Evaluation in Machine Learning: The Good, The Bad, The Ugly and The Way Forward", Intelligent Systems Laboratory, University of Bristol / The Alan Turing Institute, London, UK, 2019.
-  Liang Xiao, Bin Dai, Daxue Liu, Dawei Zhao and Tao Wu, "Monocular Road Detection Using Structured Random Forest", International Journal of Advanced Robotic Systems, January 1, 2016.
-  Péter Szikora and Nikolett Madarász, Óbuda University, Keleti Faculty of Business and Management, Budapest, Hungary, "Self-driving cars - The human side", 14th IEEE International Scientific Conference on Informatics, November 2017.