
A Robust Lane Detection and Departure Warning System

In this work, we have developed a robust lane detection and departure warning technique. Our system is based on a single camera sensor. For lane detection, a modified Inverse Perspective Mapping that uses only a few extrinsic camera parameters is combined with illuminant-invariant techniques. Lane markings are represented using a combination of 2nd and 4th order steerable filters, which are robust to shadowing. The effects of shadows and strong sunlight are removed using the Lab colour space and an illuminant-invariant representation. Lanes are assumed to be cubic curves and are fitted using robust RANSAC. The method can reliably detect road lanes and road boundaries. It has been evaluated in Indian road conditions under different challenging situations, and the results obtained were very good. For the lane departure angle, an optical-flow-based method is used.




I Introduction

With the increasing number of lives lost in accidents, India is one of the most accident-prone countries: according to the NCRB report, 135,000 people died in 2013, with property damage of $20 billion [1]. Accidents and unusual traffic congestion often result from the careless and impatient nature of drivers. In most cases drivers do not follow lane and traffic rules, leading to congestion and accidents. To counter these problems, an advanced driver assistance system is needed that can help people drive safely, or drive safely by itself in the case of autonomous cars. It is quite a challenge to build an autonomous car that can sense its environment and drive like an attentive human. In some recent works researchers have developed autonomous cars, although they are not yet deployable in real life.

Fig. 1: Vehicle lane detection scenarios

In developed countries such as the U.S.A. and Germany, with the gradual emergence of autonomous driving research, efforts are underway to build smart driving systems that can drive more safely than humans, without fatigue, and can be programmed to follow traffic rules. The main challenge in this context is understanding complex traffic patterns and making real-time decisions on the basis of visual data from camera and laser sensors. For such automatic cars, modelling the vehicle's current lane with respect to the road environment is essential for accurate driving, maintaining the correct lane and keeping track of vehicles in front. Estimating the departure angle from the current lane, for possible overtaking and for taking turns on curved roads, is also important. Fig. 1 shows a sample vehicle lane detection scenario.

Whether or not fully automatic cars become a future reality, Advanced Driver Assistance Systems (ADAS) are increasingly deployed in modern cars. Road lane, road boundary and departure angle estimation are crucial modules of ADAS. Such modules facilitate and validate human judgment, and can also be used to prepare the automobile for an emergency situation.

In this work, we have developed a robust lane detection system using a modified Inverse Perspective Mapping (IPM) algorithm in which the camera's intrinsic parameters are not required. This IPM formulation gives an accurate road view up to 45 m. IPM removes a wide area of unwanted regions, which helps to locate lane features accurately. A novel lane departure warning system is also developed. A further contribution of our method is that it can detect the road boundary even when there are no lane markings, which is well suited to the context of developing countries. For this purpose, a wide-angle camera sensor mounted on the car roof was used to capture the surrounding road environment.

The rest of the paper is organised as follows. Section II describes the related literature. Section III elaborates the lane detection and departure angle computation methods, and Section IV describes our experimental setup and results. Finally, we conclude in Section V.

II Related Work

The related literature on autonomous driving and advanced driver assistance systems is based on using single or multiple cameras facing the road to detect lane features. Sensors such as LIDAR and RADAR are used for detecting objects and for 3D modelling of the road environment. Some works also model driver behaviour using a camera facing the driver, for drowsiness, sleepiness and fatigue detection.

Many works on lane detection and departure warning target urban and countryside areas [2], [3], [4], [5], [6], [7]. Most of these works use image processing and machine learning based approaches for defining and extracting lane features. Some assume straight and planar roads and work only on highways, because they rely on strong lane markings and low traffic. For lane tracking, Kalman filter and particle filter [8] based approaches have been used. Another work, based on vanishing point detection [9], gives good results on countryside roads.

Other related works include methods for road segmentation, traffic sign detection and recognition, and 3D modelling of the road environment (e.g. [10], [13], [14]). Parallax flow computation was used by Baehring et al. for detecting overtaking and close cutting-in vehicles [11]. For collision detection and avoidance, Cheng et al. used Radar, LIDAR, a camera and an omnidirectional camera [12], [16]; they focused on classifying objects from LIDAR sensor data as static or dynamic, tracking them with an extended Kalman filter, and obtaining a wide view of the surrounding situation. For forward collision detection, Srinivasa et al. used forward-looking camera and radar data [13].

In some works, driver behaviour and inattentiveness have been modelled using fatigue detection, drowsiness detection, eye tracking and visual analysis. Ji et al. [17] presented a tracking method for eye, gaze and face pose, and Hu et al. [18] used an SVM-based method for driver drowsiness detection. In our previous work [15], driver behaviour was modelled using visual analysis of the surrounding environment.

III Our Approach

We summarize our system in Fig. 2, showing how we estimate road lanes, the road boundary and the departure angle.

III-A Lane detection

Localising the vehicle on the road requires estimating related parameters such as its current lane, the shape of the road and its position relative to the centreline. To compute these parameters, the road environment is scanned using a wide-angle camera sensor and lane markers are extracted. For lane detection we propose a novel method using the Lab colour space, 2nd and 4th order steerable filters and an improved Inverse Perspective Mapping. Our lane marker extraction algorithm is described below.

Fig. 2: Overview of Method

A1. Perspective effect

In real-world images, two parallel lines appear to converge to a distant point, so their true geometry cannot be recovered directly from the image. Road lanes are parallel; to detect and localise them, the effect of perspective projection is removed using Inverse Perspective Mapping. In this work we present a modified version of IPM that is reliable up to a distance of 45 m. Unlike other algorithms [2], [14], no intrinsic camera calibration is required for the computation. Let the camera location in the car coordinate system be (Cx, Cy, Cz), where Cz is the height h above the ground plane. The optical axis makes a pitch angle θ₀ and a yaw angle γ₀ with the road, and the half-aperture of the camera is α, as shown in Fig. 3.

Fig. 3: Camera setup: pitch and yaw angles

To increase computational speed by removing uninteresting areas from the image, we define a horizon line; our region of interest lies below this "horizon limit", as shown in Fig. 4. After applying the horizon limit, inverse perspective mapping is applied to the cropped image.

Fig. 4: Area of interest

For the derivation of IPM, the road is assumed to be perfectly planar. Because of this assumption, obstacles present in the scene that do not lie on the road plane are deformed and appear as noisy areas in the IPM image. Denote image coordinates as (u, v) and real-world ground-plane coordinates as (x, y). Let the camera resolution be m × n, the camera height h, the pitch angle θ₀, the yaw angle γ₀ and the half-aperture α. The following image-to-world mapping is then obtained from the derivation:

x(u, v) = h · cot[(θ₀ − α) + (u + r₀) · 2α/(m − 1)] · cos[(γ₀ − α) + v · 2α/(n − 1)]
y(u, v) = h · cot[(θ₀ − α) + (u + r₀) · 2α/(m − 1)] · sin[(γ₀ − α) + v · 2α/(n − 1)]

where r₀ represents the start row of the region of interest.
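The mapping above can be sketched in a few lines of Python. This is a minimal illustration of the flat-road IPM projection in the spirit of [2]; all parameter values (camera height, pitch, half-aperture, resolution, start row) are illustrative placeholders, not the calibrated values used in the paper.

```python
import numpy as np

def ipm_point(u, v, h=1.55, theta0=np.radians(12.0), gamma0=0.0,
              alpha=np.radians(30.0), m=480, n=640, r0=200):
    """Map an image pixel (row u, column v) of the region of interest
    to ground-plane coordinates (x forward, y lateral), assuming a flat
    road, camera height h, pitch theta0, yaw gamma0 and half-aperture
    alpha.  r0 is the first image row below the horizon limit.
    All default values are illustrative, not calibrated."""
    # angle subtended by this row / column inside the camera aperture
    theta = (theta0 - alpha) + (u + r0) * (2.0 * alpha / (m - 1))
    gamma = (gamma0 - alpha) + v * (2.0 * alpha / (n - 1))
    d = h / np.tan(theta)          # ground distance along the viewing ray
    return d * np.cos(gamma), d * np.sin(gamma)   # (x, y)
```

Rows further down in the image (larger u) map to points closer to the vehicle, and the centre column maps to the optical axis direction, which is a quick sanity check on the formulation.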

A2. Feature Extraction

For detecting lines and curves, 2D steerable filters [19] are very effective, because gradient changes due to the colour variation between road and lanes are captured well. In addition, thanks to their separability, computation is faster than with other filters. The responses of the 2nd and 4th order filters are combined to extract the final lane markings using adaptive thresholding, which also depends on the gradient angle so that edges in unwanted directions are suppressed. The filters used in this method are shown in Fig. 5; the filter kernels are given by Eq. (6) to (10).
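The steering idea can be sketched as follows: the second directional derivative at any angle is a fixed linear combination of three basis responses (Ixx, Ixy, Iyy), as in Freeman and Adelson [19]. This NumPy sketch approximates the basis with finite differences of a Gaussian-smoothed image; it illustrates 2nd-order steering only, not the paper's exact 2nd + 4th order kernels.

```python
import numpy as np

def gauss_kernel(sigma):
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def smooth(img, sigma):
    """Separable Gaussian smoothing (rows then columns)."""
    k = gauss_kernel(sigma)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, out)

def steered_response(img, theta, sigma=2.0):
    """Second directional derivative of a smoothed image, steered to
    angle theta from the 3-filter basis (Ixx, Ixy, Iyy) [19]."""
    s = smooth(img.astype(float), sigma)
    Iy, Ix = np.gradient(s)          # axis 0 = rows (y), axis 1 = cols (x)
    Ixy, Ixx = np.gradient(Ix)
    Iyy, _ = np.gradient(Iy)
    c, t = np.cos(theta), np.sin(theta)
    return c * c * Ixx + 2 * c * t * Ixy + t * t * Iyy
```

On a vertical lane-like stripe, the response steered to 0° (across the stripe) dominates the response at 90°, which is exactly the direction selectivity used to suppress edges in unwanted directions.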

Fig. 5: Filter responses at 0, 45 and 90 degrees

A3. Cubic Interpolation and RANSAC

RANSAC is used to identify potential lane point positions among the extracted feature points and to fit polynomial curves. RANSAC is an iterative algorithm that removes outliers and fits a defined model to the data points. A maximum of 8 curves (lane lines) can be identified by the proposed method. On most roads, all lines except the centre line are discontinuous; to obtain continuous edges for polynomial fitting in those plain areas, cubic interpolation is very efficient. The cubic interpolation used in this setup depends on the gradient value and direction. Our road model is given in Eq. (11),

x = a₃y³ + a₂y² + a₁y + d,     (11)

where d is the offset from the vertical coordinate axis and a₁, a₂, a₃ are the lane parameters.
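A minimal sketch of RANSAC fitting of the cubic road model in Eq. (11) is given below, assuming lane candidate points (y, x) in IPM coordinates; the iteration count and inlier tolerance are illustrative, not the paper's tuned values.

```python
import numpy as np

def ransac_cubic(y, x, n_iter=200, tol=0.2, seed=0):
    """Fit x = a3*y^3 + a2*y^2 + a1*y + d to candidate lane points with
    RANSAC: repeatedly fit a cubic through 4 random points, keep the
    model with the most inliers, then refit on all inliers."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(n_iter):
        idx = rng.choice(len(y), size=4, replace=False)   # minimal sample
        coef = np.polyfit(y[idx], x[idx], 3)
        resid = np.abs(np.polyval(coef, y) - x)
        inliers = resid < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # final least-squares refit on the consensus set
    return np.polyfit(y[best_inliers], x[best_inliers], 3), best_inliers
```

Because the minimal sample size is only 4 points, a few hundred iterations are ample to find an all-inlier sample even with a substantial outlier fraction.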


III-B Road Boundary Lane

Lane line extraction alone cannot give an overall idea of the car's position if the road boundaries are not known. Road boundary lanes are often unclear or even unpaved, particularly in Indian conditions; to cope with this, we segment the road area. A 3-class Gaussian mixture model (GMM) is used to segment the road region. Since the majority of pixels in the IPM image belong to the road, and cars and other obstacles appear as noise in it, a GMM can be used efficiently for this task. The method is applied to the illuminant-invariant IPM image, which is accurate up to 45 m, and performs efficiently for this purpose.

The three clusters used for segmentation comprise the road region, the surrounding natural scene and road obstacles. Predefined mean and covariance values are used for the clusters: to compute these initial means and covariances, we collected separate patches from training images for the three categories. This initialization gives better results than random k-means initialization. An iterative expectation-maximization (EM) algorithm is then used to compute the final means, covariances and cluster probabilities of the GMM.
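The EM procedure described above can be sketched as follows. This is a simplified illustration assuming diagonal covariances; the patch-based means `mu0` play the role of the predefined initialization, and the final MAP classification step corresponds to the Bayesian rule used later in the text.

```python
import numpy as np

def gmm_em(X, mu0, n_iter=30):
    """Minimal EM for a K-class Gaussian mixture over pixel vectors X
    (N, d), initialised with per-class patch means mu0 (K, d)."""
    K, d = mu0.shape
    mu = mu0.astype(float).copy()
    var = np.ones((K, d))
    pi = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # E-step: responsibilities r[n, k] ∝ pi_k N(x_n; mu_k, diag(var_k))
        diff = X[:, None, :] - mu[None, :, :]
        log_p = -0.5 * ((diff**2 / var).sum(-1) + np.log(var).sum(-1)) + np.log(pi)
        log_p -= log_p.max(axis=1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate means, variances and priors
        nk = r.sum(0) + 1e-12
        mu = (r.T @ X) / nk[:, None]
        diff = X[:, None, :] - mu[None, :, :]
        var = np.einsum('nk,nkd->kd', r, diff**2) / nk[:, None] + 1e-6
        pi = nk / len(X)
    return mu, var, pi

def classify(X, mu, var, pi):
    """MAP (Bayesian) pixel classification under the fitted mixture."""
    diff = X[:, None, :] - mu[None, :, :]
    log_p = -0.5 * ((diff**2 / var).sum(-1) + np.log(var).sum(-1)) + np.log(pi)
    return log_p.argmax(axis=1)
```

With informative initial means, EM converges in a few tens of iterations, which matches the observation that patch-based initialization beats random k-means starts.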

Finally, a Bayesian classification technique, Eq. (12), is used to classify each pixel in the image:

C(x) = argmaxᵢ p(x | Cᵢ) p(Cᵢ),  with  p(x | Cᵢ) = N(x; μᵢ, Σᵢ),     (12)

where Cᵢ denotes class i, μᵢ and Σᵢ its mean and covariance, and p(Cᵢ) the prior probability of the class.

Using vanishing point estimation, the rest of the road boundary beyond the 45 m covered by the IPM image can be approximately modelled [9].

III-C Lane Departure Angle

To avoid the potential risk of accidents or hazardous driving and to maintain the proper lane, departure warning is very important. Using the current position of the vehicle with respect to the lane, specifically its offset, together with optical flow computation, this angle can be approximately computed [20]. The horizontal optical flow component is a strong cue for detecting unwanted horizontal velocity, which can be a clear indication of lane changing or overtaking, except on curved road sections. Fig. 6 and Fig. 7 illustrate the concept.
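The decision logic can be sketched as follows, assuming a dense flow field has already been computed (e.g. with a method like [20]) over a region of interest near the vehicle; the function name, warning threshold and sign convention are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def departure_angle(flow_u, flow_v, lane_offset, thresh=np.radians(5)):
    """Approximate the lane-departure angle from optical flow near the
    vehicle: the mean horizontal flow relative to the longitudinal flow
    gives the heading of the ego motion with respect to the lane.
    lane_offset is the signed offset from the lane centreline."""
    u = float(np.mean(flow_u))          # mean lateral flow, px/frame
    v = float(np.mean(flow_v))          # mean longitudinal flow, px/frame
    angle = np.arctan2(u, abs(v) + 1e-9)
    # warn when the drift direction matches the side the car is offset to
    warn = abs(angle) > thresh and np.sign(angle) == np.sign(lane_offset)
    return angle, warn
```

Gating the warning on the sign of the lane offset is one simple way to avoid firing on curved road sections, where some horizontal flow is expected.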

Fig. 6: Lane departure angle idea
Fig. 7: Lane departure warning from optical flow
Fig. 8: Image, IPM view and extracted lane features on the KITTI dataset
Fig. 9: IPM image and detected lanes on Indian roads

IV Experimental Results

The proposed method has been evaluated on several challenging datasets to analyze its performance and reliability. Experiments were carried out on images obtained from videos captured by a moving camera in different road environments. The system was developed in C++ on a Linux machine with an Intel quad-core i7 processor. To test the algorithm's accuracy in Indian conditions, we collected a dataset on Bangalore city roads with a wide-angle camera sensor mounted on our test vehicle's roof at a height of 155 cm from the ground plane, pointing towards the forward road plane at a fixed pitch angle from the horizontal, at a speed of around 45 km/h. This dataset contains frames with varying luminance, shadows, curved lane lines and roads without boundary lane lines. The standard KITTI [21] and Caltech [14] datasets were also used to check the algorithm over broader regions. These two datasets contain images with different conditions, such as sunny roads with shadows, urban roads with traffic, and highways.

| Database    | #Frames | #Detected (all) | #Boundary | Correct Rate | False Positive | Correct Boundary |
|-------------|---------|-----------------|-----------|--------------|----------------|------------------|
| KITTI       | 600     | 565             | 591       | 94.26 %      | 6.79 %         | 98.44 %          |
| Caltech     | 1224    | 1189            | 1204      | 97.14 %      | 4.17 %         | 98.36 %          |
| Indian Road | 1200    | 1087            | 1131      | 90.58 %      | 12.37 %        | 94.25 %          |

TABLE I: Correct rate of ego-lane evaluation (up to 45 m) and road boundary detection
| Method           | PRE-20  | PRE-30  | PRE-40  | Runtime | Environment                      |
|------------------|---------|---------|---------|---------|----------------------------------|
| SPRAY [22]       | 97.51 % | 96.92 % | 88.76 % | 0.045 s | NVIDIA GTX 580 (Python + OpenCL) |
| Our Method       | 95.17 % | 95.17 % | 93.76 % | 0.029 s | 4 cores @ 2.3 GHz (C++)          |
| BL [21]          | 95.65 % | 94.47 % | 87.23 % | 0.02 s  | 1 core @ 2.5 GHz (Python)        |
| SPlane + BL [23] | 95.48 % | 92.34 % | 79.79 % | 2 s     | 1 core @ 3.0 GHz (C/C++)         |

TABLE II: Comparison with other methods on the KITTI dataset

The combination of 2nd and 4th order steerable filters to detect edges in the horizontal and vertical directions reduces extra outliers, which helps the robust fitting of lane lines and provides better input features for the main RANSAC outlier removal and lane fitting step. In addition, horizontal lines on the road are associated with pedestrian crossings, so the method is also capable of generating pedestrian crossing warnings by detecting lines in the horizontal direction. The first row of Fig. 8 shows original images from the KITTI dataset, and the second row depicts the corresponding IPM images overlaid with extracted candidate feature points. In Fig. 9, the first row depicts IPM images from the Indian road dataset and the second row shows the corresponding final lane markings obtained after applying the algorithm. The method can detect the road boundary even when there are no boundary lane markings, which is very useful in Indian road conditions. As can be seen from Fig. 8, the method makes no straight-road assumption and can detect lane features even when the road is not straight. We tested the algorithm on Indian roads and obtained acceptable accuracy; it also detects road boundaries well, including the road region where no lane markings exist.

We observed that using illuminant-invariance techniques and separating the luminance and colour components of the image gives better lane line detection accuracy than working on normal RGB images. This setting is useful under various illumination changes due to shadows, rain, fog, etc.

Table I gives an analysis of precision (correct rate), false positives and correct boundary detection for lane and road boundary detection on the three datasets. Pixel-wise evaluation was used to compute these figures. Lane detection and road boundary detection are the building blocks of the method's overall accuracy. The robustness to missing boundary lanes and to shadows can be observed in Fig. 9. A comprehensive comparison with existing state-of-the-art methods on the KITTI dataset is given in Table II; the precision of our lane detection method at the 40 m range outperforms the other existing methods.


Precision is computed as PRE = TP / (TP + FP), where TP is the number of true positive pixels and FP the number of false positives. PRE-20, PRE-30 and PRE-40 denote the precision within 20 m, 30 m and 40 m of the IPM image, respectively. This distance-dependent precision measure is important for assessing lane detection efficiency and depends on the IPM image computation. For the lane departure warning system, optical flow computation results are shown in Fig. 7. The optical flow near vehicles is used to estimate their possible horizontal velocity.
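The pixel-wise metric can be sketched as follows; restricting the evaluated rows of the IPM image emulates the distance-limited PRE-20/30/40 variants (the exact row-to-distance mapping depends on the IPM computation, so the `max_row` parameter here is an illustrative stand-in).

```python
import numpy as np

def precision(pred, gt, max_row=None):
    """Pixel-wise precision PRE = TP / (TP + FP) between a predicted
    boolean lane mask and the ground-truth mask.  Optionally evaluate
    only the first max_row image rows (a distance-limited variant)."""
    if max_row is not None:
        pred, gt = pred[:max_row], gt[:max_row]
    tp = np.logical_and(pred, gt).sum()      # correctly detected pixels
    fp = np.logical_and(pred, ~gt).sum()     # spurious detections
    return tp / max(tp + fp, 1)              # guard against empty masks
```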

V Conclusion

In this paper, a robust lane detection system based on steerable edge features and RANSAC polynomial fitting has been presented. We obtained considerable accuracy for lane detection and departure warning even on shadowed and sunny roads. The algorithm focuses on enhancing safety in normal and autonomous driving by keeping track of the proper lane, and also addresses the problems of Indian road conditions with a new boundary detection method. In future work, we will implement a probabilistic lane tracking system to reduce the per-frame processing cost.


  • [1] National Crime Records Bureau (NCRB) report, 2013.
  • [2] M. Bertozzi and A. Broggi, “Real-time lane and obstacle detection on the GOLD system,” Proceedings of the IEEE Intelligent Vehicles Symposium, pp. 213–218, 1996.
  • [3] H. Wang and Q. Chen, “Real-time lane detection in various conditions and night cases.” Intelligent Transportation Systems, Proceedings of the IEEE, 2006.
  • [4] M. Fischler and R. Bolles, “Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography,” Communications of the ACM, 1981.
  • [5] C. Jung and C. Kelber, “Lane following and lane departure using a linear-parabolic model.” Image and Vision Computing.
  • [6] J. C. McCall and M. M. Trivedi, “Video-based lane estimation and tracking for driver assistance: Survey, system, and evaluation,” IEEE Transactions on Intelligent Transportation Systems, vol. 7, pp. 20–37, 2006.
  • [7] J. G. Wang, C. T. and S. Chen, “Applying fuzzy method to vision-based lane detection and departure warning system,” Expert Systems With Applications, 2010.
  • [8] B. Southall and C. Taylor, “Stochastic road shape estimation,” ICCV, 2001.
  • [9] H. Kong, J. Audibert, and J. Ponce, “Vanishing point detection for road detection,” IEEE Conference on Computer Vision and Pattern Recognition, 2009.

  • [10] M. Haloi, “A novel pLSA based Traffic Signs Classification System,” arXiv:1503.06643, 2015.
  • [11] D. Baehring et al, “Detection of close cutin and overtaking vehicles for driver assistance based on planar parallax,” IEEE Intell. Veh. Symp., pp. 261–266, June 2005.
  • [12] H. Cheng et al, “Interactive road situation analysis for driver assistance and safety warning systems: Framework and algorithms,” IEEE Transactions on intelligent transportation systems, vol. 8, March 2007.
  • [13] N. Srinivasa et al, “A fusion system for real-time forward collision warning in automobiles,” IEEE Intell. Transp. Syst., vol. 1, pp. 457–462, 2003.
  • [14] M. Aly, “Real time detection of lane markers in urban streets,” IEEE Intelligent Vehicles Symposium, 2008.
  • [15] M. Haloi and D. B. Jayagopi, “Characterizing driving behavior using automatic visual analysis,” Proceedings of the 6th IBM Collaborative Academia Research Exchange Conference (I-CARE), pp. 1–4, ACM, October 2014.
  • [16] H. Cheng et al., “Enhancing a driver's situation awareness using a global view map,” IEEE International Conference on Multimedia and Expo, pp. 1019–1022, July 2007.
  • [17] Q. Ji et al, “Real-time eye, gaze, and face pose tracking for monitoring driver vigilance.” Real-Time Imaging,Elsevier.
  • [18] S. Hu et al., “Driver drowsiness detection with eyelid related parameters by support vector machine,” Expert Systems with Applications, Elsevier.

  • [19] W. H. Freeman and E. H. Adelson, “The design and use of steerable filters.” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, pp. 891–906, 1991.
  • [20] D. Sun et al, “A quantitative analysis of current practices in optical flow estimation and the principles behind them,” International Journal of Computer Vision, pp. 115–137, 2014.
  • [21] J. Fritsch, T. Kuehnl and A. Geiger, “A New Performance Measure and Evaluation Benchmark for Road Detection Algorithms,” International Conference on Intelligent Transportation Systems (ITSC), 2013.
  • [22] T. Kuehnl, F. Kummert and J. Fritsch: Spatial Ray Features for Real-Time Ego-Lane Extraction. Proc. IEEE Intelligent Transportation Systems 2012.
  • [23] N. Einecke and J. Eggert, “Block-Matching Stereo with Relaxed Fronto-Parallel Assumption,” IEEE Intelligent Vehicles Symposium (IV), 2014.