Vehicle Local Position Estimation System

03/23/2015 ∙ by Mrinal Haloi, et al. ∙ IIIT Bangalore

In this paper, a robust vehicle local position estimation system using a single camera sensor and GPS is presented. A modified Inverse Perspective Mapping, illuminant-invariant techniques, and an object-detection-based approach are used to localize the vehicle on the road. The vehicle's current lane and its position relative to the road boundary and other cars define its local position. Lane markings are detected using a Laplacian edge feature that is robust to shadowing. The effects of shadow and excess sunlight are removed using the Lab color space and illuminant-invariant techniques. Lanes are assumed to follow a parabolic model and are fitted using robust RANSAC. The method can reliably detect all lanes of the road and estimate the lane departure angle and the local position of the vehicle relative to lanes, the road boundary, and other cars. Different types of obstacles, such as pedestrians and vehicles, are detected using a HOG-feature-based deformable part model.


I Introduction

With the increasing number of accidents and the resulting loss of life, India is one of the most accident-prone countries: according to the NCRB report, 135,000 people died in 2013, with property damage of $20 billion [22]. Accidents and unusual traffic congestion often result from careless and impatient driving; in most cases drivers do not follow lane and traffic rules, leading to congestion and accidents. To counter these problems we need advanced driver assistance systems that can help people drive safely, or drive safely themselves in the case of autonomous cars. It remains a challenge to build an autonomous car that can sense its environment and drive like an aware human. In recent work researchers have developed autonomous cars, though they are not yet deployable in real life.

In developed countries such as the U.S.A. and Germany, with the gradual emergence of autonomous driving research, efforts are underway to build smart driving systems that drive more safely than humans, without fatigue, and can be programmed to follow traffic rules. The main challenge involves understanding complex traffic patterns and making real-time decisions on the basis of visual data from cameras, laser sensor data, etc. For these autonomous cars, and also to help human drivers, modelling the vehicle's local position with respect to the road environment is very relevant for accurate driving, maintaining the correct lane, and keeping track of the vehicles in front.

While driving on a road it is very important to keep track of the vehicles in front, their relative speeds, and their current lanes; this helps avoid accidents and undesirable traffic jams. If we lose attention while driving on a busy road we may cause an accident if the car is not equipped with a system that can warn us about such situations.

Fig. 1: Vehicle positioning scenarios

In this work we address several important problems in driving, specifically to enhance driver safety. We develop a local vehicle position estimation system with respect to the road environment and combine this information with GPS data to obtain a precise global location for a smart driving experience. This formulation answers questions such as: Which lane am I in? What is my position relative to the road boundary? Where are other vehicles with respect to me? Are the other cars moving? While driving, this information can help the driver take smarter, better actions in real time to avoid accidents. Using the vehicle's current lane, the movement of other cars, and its location relative to front and side vehicles on the road, we accurately model its position. A robust lane detection system is also proposed and used for collecting lane information, and a recent state-of-the-art object detection method is used to localize and detect cars on the road. For data collection we use a wide-angle camera sensor to capture the surrounding road environment and a GPS sensor for global position data.

The rest of the paper is organised as follows: Section 2 reviews the related literature, Sections 3 and 4 elaborate the localisation of the vehicle with respect to lanes and to other passing cars, and Section 5 describes the experimental setup and the results obtained.

II Related Work

The related literature on autonomous driving and advanced driver assistance systems is based on image processing and computer vision using one or more cameras mounted on the car's roof facing the road, often together with LIDAR and RADAR sensors for detecting objects, analysing the road surroundings, and 3D modelling of the road environment. In some works, driver behaviour is studied using a camera facing the driver for drowsiness, sleepiness, and fatigue detection. There is prior work on advanced driver assistance, traffic safety, autonomous vehicle navigation, and driver behaviour modelling using multiple cameras, LIDAR, RADAR, etc. These works focus on image processing and learning-based methods for lane detection, road segmentation, and 3D modelling of the road environment (e.g. [2],[4],[5],[6],[7],[21]). Parallax flow computation was used by Baehring et al. for detecting overtaking and close cutting-in vehicles [8]. For detecting and avoiding collisions, radar, LIDAR, monocular and omnidirectional cameras were used in [12],[11],[13]; these works classify objects in LIDAR sensor data as static or dynamic, track them using an extended Kalman filter, and build a wide view of the surrounding situation. For forward collision detection, Srinivasa et al. used forward-looking camera and radar data [9].

In some works driver inattentiveness is modelled using fatigue detection, drowsiness, eye tracking, cell phone usage, etc. Trivedi et al. modelled driver behaviour using head movements to detect driver gaze and distraction, targeting advanced driver safety [20]. However, localising a vehicle with respect to the road and other cars has not been addressed so far; since knowing the car's position automatically can be of great help to the driver, we propose this work.

III Localising Vehicle with respect to Lanes

To localise the vehicle on the road we estimate some related parameters: its current lane, the shape of the road, and its position relative to the centerline. To compute these parameters we scan the road environment using a wide-angle camera sensor and extract lane markers. For lane detection we propose a novel method using the Lab color space, 2nd- and 4th-order steerable filters, and an improved Inverse Perspective Mapping. Our lane marker extraction algorithm is described below.

Fig. 2: Proposed Method

III-A Perspective Effect

In a real-world image, two parallel lines appear to converge to a distant point, so their nature cannot be understood directly in the image. Road lanes are parallel, on both straight and curved roads; to detect and localise them in images we need to remove the effect of perspective projection using Inverse Perspective Mapping (IPM) [1]. In this work we present a modified version of IPM that is robust up to a distance of 45 m. No intrinsic camera calibration is required for the computation, which is an advantage over previous IPM implementations. Let the camera location with respect to the car coordinate system be (Cx,Cy,Cz), where Cz is the height 'h' above the ground plane. The optical axis makes a pitch angle and a yaw angle with the road, with half the camera aperture as shown in the figure.

Fig. 3: Camera Setup, pitch angle, Yaw angle

To increase computational speed we remove uninteresting areas from the image by defining a horizon line; our region of interest lies below this "Horizon Limit", as shown in the figure.

Fig. 4: Area of interest

For the derivation of IPM we assume a perfectly planar road, with image coordinates (Ix,Iy) and real-world coordinates (x,y,0). If the camera resolution is m×n, we get the following mapping equations

(1)
(2)
(3)
(4)
(5)

where hz represents the starting row of the image region of interest.
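Under the flat-road assumption above, the pixel-to-ground mapping can be sketched numerically. Below is a minimal GOLD-style IPM in which per-pixel ray angles sweep linearly across the image, so no intrinsic calibration is needed; the pitch, aperture, and resolution defaults are illustrative stand-ins, not our actual setup:

```python
import numpy as np

def ipm_point(r, c, h=1.55, pitch=np.deg2rad(15.0),
              alpha_v=np.deg2rad(10.0), alpha_u=np.deg2rad(30.0),
              m=440, n=680):
    """Map image pixel (row r, col c) to road-plane coordinates (x, y)
    under a flat-road assumption; rows below the horizon map to a
    positive forward distance y."""
    # per-pixel ray angles: linear sweep over the vertical/horizontal aperture
    theta = pitch - alpha_v + r * (2 * alpha_v) / (m - 1)   # angle below horizon
    gamma = -alpha_u + c * (2 * alpha_u) / (n - 1)          # lateral ray angle
    y = h / np.tan(theta)   # longitudinal distance along the road
    x = y * np.tan(gamma)   # lateral offset from the optical axis
    return x, y
```

Pixels near the bottom of the image map to nearby road points, while pixels near the chosen horizon limit map to the far end of the 45 m working range.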

III-B Feature Extraction

For detecting vertical lines, 2D steerable filters [7] are very effective; because of their separable nature, computation is faster than with other filters. We combine the responses of the 2nd- and 4th-order filters and extract the final lane markings by adaptive thresholding.

(6)
(7)
(8)
(9)
(10)
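As a rough illustration of the separability argument, the 2nd-order Gaussian derivative used to respond to vertical lane markings factorises into two 1-D kernels, so 2-D filtering costs two 1-D passes. The 4th-order filter and the adaptive threshold are omitted for brevity; `sigma` and `radius` are illustrative choices:

```python
import numpy as np

def gaussian_2nd_deriv_kernels(sigma=2.0, radius=6):
    """Separable 1-D factors of the 2nd-order Gaussian derivative G_xx,
    i.e. G_xx(x, y) = f(x) * g(y)."""
    t = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-t**2 / (2 * sigma**2))          # plain Gaussian (y factor)
    f = (t**2 / sigma**4 - 1 / sigma**2) * g    # 2nd derivative (x factor)
    return f, g

def filter_vertical(img, sigma=2.0, radius=6):
    """Respond to vertical bright ridges: convolve rows with f, columns with g."""
    f, g = gaussian_2nd_deriv_kernels(sigma, radius)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, f, mode="same"), 1, img)
    out = np.apply_along_axis(lambda c: np.convolve(c, g, mode="same"), 0, tmp)
    return -out   # negate so bright ridges give a positive response
```

On a synthetic image containing a single bright vertical stripe, the response peaks exactly on the stripe's column.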

III-C Cubic Interpolation and RANSAC

To localise the vehicle with respect to lanes, we use RANSAC [3] to identify potential lane points among the extracted feature points and fit parabolic curves; a maximum of 8 curves can be identified by our method. On most roads all lines except the center line are discontinuous, and cubic interpolation is an efficient way to obtain continuous edges in those plain areas before fitting a polynomial. Our road model is given in equation (11), where the intercept term is the offset from the vertical coordinate axis and a, b, c are parameters.

(11)
Fig. 5: Final feature points and line fitting after cubic interpolation

III-D Road Boundary Lane

Lane line extraction alone cannot give an overall idea of the car's position if we do not know the road boundary. Most of the time the road boundary is not clearly marked, and often not even paved, particularly in Indian conditions; to cope with this we need to segment the road area. For this we use a 3-class Gaussian mixture model (GMM) to segment the road region. Since the majority of the pixels in the IPM image belong to the road, and the cars and other obstacles present on the road appear as noise in it, a GMM can be used efficiently for this task. The method is applied to the illuminant-invariant, 45 m-accurate IPM image, where it performs efficiently.

The three clusters used for segmentation comprise the road region, the surrounding natural scene, and road obstacles. We use predefined mean and covariance values for our clusters: to compute these initial means and covariances we collected separate patches from training images for the three categories and computed the values from them. This initialization gives better results than random k-means initialization. An iterative expectation-maximization (EM) algorithm is then used to compute the final means, covariances, and mixture probabilities of the GMM.

Finally, a Bayesian classification technique, eq. (12), is used to classify each pixel in the image.

(12)

Here the terms denote a class i together with its mean, variance, and prior probability.
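The per-pixel Bayesian assignment of eq. (12) can be sketched with diagonal-covariance Gaussians, assuming the EM step has already produced the class means, variances, and priors (the values used below are illustrative, not our trained parameters):

```python
import numpy as np

def bayes_classify(pixels, means, variances, priors):
    """Assign each pixel (rows of `pixels`, shape (N, 3) in Lab) to the
    class maximising the posterior P(C_i | x) ∝ P(x | C_i) P(C_i), with a
    diagonal-covariance Gaussian likelihood per class."""
    pixels = np.asarray(pixels, dtype=float)
    log_post = []
    for mu, var, pi in zip(means, variances, priors):
        # log N(x; mu, diag(var)) + log prior
        ll = -0.5 * np.sum((pixels - mu) ** 2 / var
                           + np.log(2 * np.pi * var), axis=1)
        log_post.append(ll + np.log(pi))
    return np.argmax(np.stack(log_post, axis=1), axis=1)
```

Pixels near a cluster's mean are labelled with that cluster, with the priors breaking ties between overlapping classes.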

Using vanishing point estimation [5], the rest of the road boundary beyond the 45 m covered by the IPM image can be approximately modelled.

III-E Lane Departure Angle

Lane departure warning is very important to avoid the potential risk of accidents or mis-driving. Using the vehicle's current position with respect to the lane, specifically the lateral offset, together with an optical flow computation, this angle can be approximately computed.

(13)
Fig. 6: Lane Departure angle
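One illustrative way to approximate the departure angle from the lane offset alone (the paper additionally uses optical flow; the formulation below is a hypothetical simplification, not eq. (13)):

```python
import numpy as np

def departure_angle(offset_now, offset_prev, forward_dist):
    """Approximate lane-departure angle (degrees) from the change in the
    lateral offset to the lane centerline between two frames and the
    forward distance travelled in that interval."""
    return np.degrees(np.arctan2(offset_now - offset_prev, forward_dist))
```

For example, drifting 0.5 m sideways over 10 m of forward travel corresponds to a departure angle of roughly 2.9 degrees.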

IV Localising Vehicle with respect to Other Cars

To get relative knowledge of the car's location on the road, we estimate its location with respect to other cars, e.g. whether our car is behind, to the left of, or to the right of the detected cars. This additional information helps as a deciding factor for possible overtaking and adds information for lane localisation.

IV-A Car Detection for Localisation on Road

Each frame of the video obtained by our camera sensor is analysed to detect cars and other obstacles present on the road. For this purpose we use the Histogram of Oriented Gradients (HOG) [8] feature-based deformable part model (DPM), a very effective way to detect humans and cars. HOG feature computation is based on gradient magnitude and orientation: the image is divided into 8×8 cells, which are grouped into 2×2-cell blocks, overlapping by 50%, for descriptor normalization, making the descriptor illuminant invariant. In the DPM [15], each object is modelled as a composition of its parts. Training is based entirely on HOG features and the decomposition of the object into its various parts, and the final model is obtained using a latent SVM. The training phase produces a root filter, corresponding to the whole car, and part filters representing its various parts. The implementation uses a HOG pyramid for better accuracy: at higher resolutions the pyramid captures fine features so objects can be detected accurately. This DPM-based method can detect cars efficiently even under occlusion. In Fig. 7 we show the model trained using [19], depicted via its HOG feature representation and its parts, along with the results obtained by this method.

Fig. 7: Car model and detection result
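The cell-histogram core of the HOG computation above can be sketched as follows (block normalisation and the 50% block overlap are omitted for brevity; hard orientation binning is used instead of the usual soft interpolation):

```python
import numpy as np

def hog_cells(img, cell=8, n_bins=9):
    """Unsigned-orientation gradient histograms on a grid of 8x8 cells,
    the building block of the HOG descriptor."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180            # unsigned orientation
    bins = (ang / (180 / n_bins)).astype(int) % n_bins    # hard binning
    h, w = img.shape
    H = np.zeros((h // cell, w // cell, n_bins))
    for i in range(h // cell):
        for j in range(w // cell):
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            b = bins[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            # magnitude-weighted vote of each pixel into its orientation bin
            H[i, j] = np.bincount(b.ravel(), weights=m.ravel(),
                                  minlength=n_bins)
    return H
```

A purely vertical edge produces horizontal gradients, so all the histogram mass lands in the 0-degree bin.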

Once we have detected the other passing cars, we locate their current lanes from their detected positions in the image. We can then infer the test vehicle's position with respect to those cars.

IV-B Environment Mobility Estimation

After detecting a car, optical flow analysis can estimate its mobility, which provides valuable information about the movement of other cars. This estimate is also useful for detecting possible traffic junctions and traffic jams. Our motivation for using optical flow lies mainly in the very similar backgrounds involved in road environments. We use the optical flow computation described in [16].
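A crude version of this mobility cue, assuming per-frame mean flow magnitudes inside a detected car's bounding box are already available (the threshold is an illustrative choice, not a tuned value):

```python
import numpy as np

def mobility_estimate(flow_mags, rel_tol=0.2):
    """Classify a tracked car's motion from its per-frame mean optical-flow
    magnitude: a roughly constant flow suggests the car keeps pace with
    our vehicle; a rapidly changing flow suggests relative motion."""
    m = np.asarray(flow_mags, dtype=float)
    variation = m.std() / (m.mean() + 1e-9)   # coefficient of variation
    return "same_pace" if variation < rel_tol else "relative_motion"
```

A stable sequence of flow magnitudes is labelled as keeping pace, while a rapidly growing or shrinking one is flagged as relative motion.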

IV-C GPS Data Combination

Most of the time, GPS data are not very accurate in urban areas because of reflections from tall buildings. To cope with this problem we measure all the parameters described above, i.e. the vehicle's current position with respect to the road lanes and the other cars. These local parameters, together with the approximate global location from GPS data, give the car's location. This process can be extended to image-based locality estimation using suitable training images.
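A minimal sketch of applying a camera-derived metric correction (e.g. the lane-relative offset resolved into north/east components) to a coarse GPS fix, assuming a locally flat Earth; this is an illustrative fusion step, not the paper's exact procedure:

```python
import numpy as np

EARTH_R = 6371000.0  # mean Earth radius in metres

def apply_local_correction(lat, lon, d_north_m, d_east_m):
    """Shift a GPS fix (degrees) by a metric correction in the local
    tangent plane, using the small-displacement flat-Earth approximation."""
    lat2 = lat + np.degrees(d_north_m / EARTH_R)
    lon2 = lon + np.degrees(d_east_m / (EARTH_R * np.cos(np.radians(lat))))
    return lat2, lon2
```

At mid latitudes a shift of about 111 km north corresponds to one degree of latitude, which gives a quick sanity check on the conversion.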

V Experiment

To show the performance and reliability of our algorithm in detecting lanes, cars, and road environment mobility, we carried out broad experiments on 440×680 images in different road conditions. Our system was developed in MATLAB on a Linux-based OS on a quad-core Intel i7 machine. For the object detection part we used the voc-release library, a state-of-the-art library for detecting objects such as cars and pedestrians. We collected a dataset on Bangalore city roads with a wide-angle camera sensor mounted on our test vehicle's roof, pointing towards the road at a height of 155 cm from the ground plane, at a speed of around 45 km/h, to test our algorithm's accuracy in Indian conditions. To further evaluate lane detection we also used the Caltech [21] and KITTI [18] datasets. These two datasets contain images in different conditions: sunny roads with shadows, urban roads, highways, etc.

Fig. 8: IPM image and detected possible lane features
Fig. 9: Result

Combining the 2nd- and 4th-order steerable filters to detect edges in the horizontal direction reduces extra outliers, which helps in robust fitting of the lane lines and provides better input to the main RANSAC outlier removal and line fitting. Some results are shown in Fig. 8 and Fig. 9. In some difficult road conditions we were unable to detect the lane lines on the other side of two-way roads; this situation is depicted in the first-row, second-column image of Fig. 9.

We are able to detect cars and other obstacles and to identify their locations relative to our test setup, i.e. whether they are directly in front, to the right, or to the left. This location information was used in the optical flow analysis to get a better sense of their movements. If the optical flow is stable, the cars are moving at approximately the same speed as our test car and no other obstacles are present; if the optical flow changes rapidly, we can infer that the other cars are moving slowly or quickly relative to our setup.

We observed that illuminant-invariance techniques in the Lab color space give better accuracy than plain RGB images for detecting lane lines.

Database     #Frames  #DetectedAll  #Boundary  CorrectRate  FalsePositive  CorrectBoundary
KITTI          600      565           591        94.26 %      6.79 %         98.44 %
Caltech       1224     1189          1204        97.14 %      4.17 %         98.36 %
Indian Road   1200     1087          1131        90.58 %     12.37 %         94.25 %
TABLE I: Correct rate of ego-lane evaluation (up to 45 m) and road boundary detection

Table I gives an analysis of the accuracy obtained in lane detection and road boundary detection on the three datasets. Lane detection and road boundary detection are the building blocks determining the accuracy of our method, and it can be observed that the method performs very well at detecting road lanes. We also observed that DPM car detection gives very high accuracy, which enables us to locate the vehicle's position with respect to the road. In addition, GPS data accuracy determines the method's final accuracy in locating the car on the road.

A quantitative analysis of vehicle local position estimation is not practical, since the number of cars differs from road to road and the local vehicle position changes with all of these factors.

VI Conclusion

In this paper, a robust vehicle positioning system is presented using lane features, car locations, and GPS data. This work demonstrated the possibility of using the vehicle's local position with respect to the road for better vehicle positioning accuracy. The algorithm focuses especially on enhancing safety in normal driving and for autonomous vehicles by keeping track of the vehicle's local and global position. It is especially useful in urban areas with enormous amounts of traffic for avoiding accidents and driving safely. We obtained considerable accuracy in localising the vehicle with respect to lanes, even on shadowed and sunny roads. The system includes robust lane feature extraction using illuminant-invariant techniques. In future work we will develop a safe overtaking system.

References

  • [1] M. Bertozzi and A. Broggi, Real-time lane and obstacle detection on the gold system,Intelligent Vehicles Symposium, Proceedings of the IEEE,1996.
  • [2] Hong Wang and Qiang Chen,Real-time lane detection in various conditions and night cases,Intelligent Transportation Systems, Proceedings of the IEEE,2006
  • [3] M. Fischler and R. Bolles, Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography,Communications of the ACM,1981.
  • [4] C. Jung and C. Kelber,Lane following and lane departure using a linear-parabolic model,Image and Vision Computing,2005.
  • [5] Hui Kong, J.Y. Audibert and Jean Ponce, Vanishing Point Detection for Road Detection, IEEE International Conference on Computer Vision and Pattern Recognition, 2009.
  • [6] J. C. McCall, M. M. Trivedi,Video-based lane estimation and tracking for driver assistance: Survey, system, and evaluation,IEEE Transactions on Intelligent Transportation Systems,2006.
  • [7] Freeman, W. H., and Adelson, E. H.,The Design and Use of Steerable Filters,IEEE Transactions on Pattern Analysis and Machine Intelligence,1991.
  • [8] N. Dalal and B. Triggs,Histograms of oriented gradients for human detection,In Proceedings of the Conference on Computer Vision and Pattern Recognition,2005.
  • [9] N. Srinivasa et al,A fusion system for real-time forward collision warning in automobiles,IEEE Intell. Transp. Syst.,2003.
  • [10] D. Baehring et al,Detection of close cutin and overtaking vehicles for driver assistance based on planar parallax,IEEE Intell. Veh. Symp,2005.
  • [11] H. Cheng et al,Interactive Road Situation Analysis for Driver Assistance and Safety Warning Systems: Framework and Algorithm,IEEE Transactions on intelligent transportation systems,2007.
  • [12] H. Cheng et al,Enhancing a drivers situation awarness using a global view map,Multimedia and Expo, 2007 IEEE International Conference on, 2007.
  • [13] S. Kannan et al,An Intelligent Driver Assistance System (I-DAS) for Vehicle Safety Modelling using Ontology Approach,International Journal Of UbiComp,2010.
  • [14] P. Dhar et al,Unsafe Driving Detection System using Smartphone as Sensor Platform,International Journal of Enhanced Research in Management & Computer Applications, 2014.
  • [15] Felzenszwalb et al,Object detection with discriminatively trained part based models,PAMI,2009.
  • [16] D. Sun et al,A Quantitative Analysis of Current Practices in Optical Flow Estimation and the Principles behind Them,International Journal of Computer Vision,2014.
  • [17] S. Park et al,Driver Activity Analysis for Intelligent Vehicles: Issues and Development Framework,In Proc. of IEEE Intelligent Vehicles,2005
  • [18] Jannik Fritsch, Tobias Kuehnl and Andreas Geiger, A New Performance Measure and Evaluation Benchmark for Road Detection Algorithms, International Conference on Intelligent Transportation Systems (ITSC), 2013.
  • [19] P. Felzenszwalb et al,Discriminatively Trained Deformable Part Models, Release 5,CVPR,2008.
  • [20] A. Tawari et al,Continuous Head Movement Estimator (CoHMET) for Driver Assistance: Issues, Algorithms and On-Road Evaluations,IEEE Transactions on Intelligent Transportation Systems,2014.
  • [21] Mohamed Aly,Real time Detection of Lane Markers in Urban Streets, IEEE Intelligent Vehicles Symposium,2008.
  • [22] http://www.americanbazaaronline.com/2013/08/21/road-to-hell-every-3-7-minutes-death-swoops-in/