With the development of artificial intelligence, autonomous driving systems have become research hotspots in both academia and industry. As one of the essential modules, ego-lane detection allows the car to properly position itself within the road lanes, which is crucial for subsequent control and planning.
Some typical ego-lane detection results in the KITTI Lane dataset are shown in Fig. 1, where the ego-lane is labeled in green. It can be seen that ego-lane detection comprises three main tasks: left boundary detection, right boundary detection, and upper boundary detection. Upper boundary detection mainly amounts to detecting the preceding vehicle, which has been widely studied in recent years with encouraging results. Therefore, this paper focuses on the left and right boundary detection, that is, lane line detection and road curb detection in the KITTI Lane dataset (the road in the KITTI Lane dataset is a two-way road and the vehicle drives in the right lane).
For lane line detection and road curb detection, one of the most challenging scenarios is missing features. Fig. 1 shows several typical examples of missing features in the KITTI Lane dataset, including lane marking wear, lighting changes, and even no visible features. To tackle this challenge, previous methods [1, 2] have been devoted to more effective feature extraction methods that obtain as many features as possible, but these are very time-consuming and cannot deal with extreme scenarios. In addition, model fitting plays an important role when features are partially missing or when other objects are mistakenly interpreted as features. Therefore, this paper focuses on obtaining a compact high-level representation of the lane boundaries through model fitting, thereby solving the missing-feature problem.
In recent decades of research, various mathematical representation models have been used for model fitting, ranging from simple straight line models to complex spline models. Many researchers prefer model fitting using straight lines, which are a good approximation at short range and cover the most common case in highway scenarios. Although the straight line model is efficient and simple, it fails on curved roads, so some researchers propose a circular arc as the lane model [8, 9]. Furthermore, quadratic polynomials [10, 11] and cubic polynomials [1, 12] are also widely used for model fitting in curved situations. In recent years, more and more researchers prefer splines for model fitting, including cubic splines, Catmull-Rom splines, B-splines, and so on. Although mathematical representation models have been widely used for model fitting, their performance is profoundly affected by the quality of lane features. In some extreme scenarios, the fitted parameters exhibit large randomness, that is, a large shape error between the fitted lane and the real lane.
Nowadays, most autonomous driving systems have access to digital maps that contain rich geometric and semantic information about the environment. This prior information has been proven to strongly enhance the performance of algorithms in perception, prediction, and motion planning. In this paper, we exploit OpenStreetMap (OSM), a free online community-driven map, to enhance our ego-lane detection algorithm. OSM data is structured using three basic geometric elements: nodes, ways, and relations. Ways are geometric objects like roads, railways, rivers, etc. A way is a collection of nodes, where the number of nodes is determined by the complexity of the object. Taking the road as an example, a straight road may consist of only two or three points as shown in Fig. 2(a), and a curved road may consist of dozens of points as shown in Fig. 2(b), ensuring the consistency of the OSM road shape with the real lane. Therefore, we use the OSM road shape as the lane model, which is independent of lane features and robust to a variety of missing-feature scenarios.
However, OSM data is contributed by users, so it is coarse and rife with errors. At the same time, the localization system employed on the vehicle may be noisy. These two problems lead to a position error between the OSM data and the real lane. It can be seen from Fig. 2 that the projection result of the OSM data is close to the lane, so the error is relatively small. Therefore, we use a search-based optimization method to minimize the distance between the OSM data and the extracted features, which effectively improves the detection accuracy of the algorithm.
In this paper, we present a novel map-enhanced ego-lane detection framework to address the missing-feature problem. In contrast to other methods, we employ the OSM road shape as the lane model, which is independent of lane features. By minimizing the distance between the OSM road shape and the extracted lane features, the position error is eliminated, thereby improving the accuracy of the detection results. The main contributions of this paper are as follows:
Exploit the OSM road shape as the lane model, which is highly consistent with the real lane shape and independent of lane features.
Propose a search-based optimization method to eliminate the position error between the OSM data and the real lane, thereby improving the detection accuracy.
Propose an efficient ego-lane detection framework that runs in real time at a frequency of 20 Hz on a single CPU.
The remainder of this paper is organized as follows. Section II presents the related work of ego-lane detection. In Section III, the proposed map-enhanced ego-lane detection framework is presented in detail. Experimental results are presented in Section IV. Finally, we conclude the paper in Section V.
II Related Work
This paper focuses on solving the missing-feature problem by exploiting the OSM road shape as the lane model. Therefore, the related work is reviewed in two aspects: lane modeling and map usage.
II-A Lane Modeling
In recent years, lane modeling has played an important role in ego-lane detection; it refers to obtaining a mathematical representation of the road lane markings. Different researchers have proposed different lane models. Some use simple straight lines, while others prefer more complex models, such as polynomials, clothoids, splines, and so on.
The straight line model is the most commonly used geometric model. It is a good approximation at short distances and is the most common model in highway scenes. To increase the robustness of model fitting, additional constraints have been applied, such as parallelism [22, 23], road or lane width, and so on. The straight line model is simple, but its applicability is limited, especially at long distances or on curved roads.
In [8, 9], curved roads are modeled in the bird's-eye view using circular arcs. Generally, the curvature of the road is small and continuous, so the circular arc is a conventional lane model on a ground plane. However, circular arcs cannot handle more general curved roads.
Since they perform well on more general curved roads, polynomials are also widely used for model fitting, including quadratic polynomials [10, 11], cubic polynomials [1, 12], and so on. However, the fitting quality at the junction between a straight lane and a circular curve is limited.
Several researchers [26, 27] model the shape of the road as a clothoid, which is defined by the initial curvature, a constant curvature change rate, and its total length. A clothoid can be approximated by a third-order polynomial and is used to avoid abrupt changes in steering angle when driving from straight to circular roads.
Splines are smooth piecewise polynomial curves, which have been popular in previous studies. A spline-based lane model describes a wider range of lane structures, as it can form arbitrary shapes with different sets of control points. Various spline representations have been proposed for lane modeling. In [2, 13], a cubic spline with two to four control points is used for lane modeling. Wang et al. present lane modeling based on the Catmull–Rom spline (also known as the Overhauser spline), a local interpolating spline developed for computer graphics purposes. B-splines, which can provide a local approximation of the contour with a small number of control points, have also been introduced, and nonuniform B-splines have been used to construct the left and right lanes of the road. Third-degree Bezier splines are likewise used to fit the left and right boundaries of the road surface. The lane model has also been extended to a B-snake model or a parallel-snake model.
Several combination models have also been proposed as lane models. In [34, 35], the image is divided into multiple slices, and the lanes in each slice are fitted with straight lines to form a piecewise linear model. Jung et al. proposed a linear-parabolic lane model consisting of a linear function in the near field and a parabola in the far field: the nearby straight line provides robustness, while the parabola provides flexibility. A similar combination of a near-range straight-line model and a far-range clothoid model has also been proposed.
II-B Map Usage
A map that contains rich geometric and semantic information about the environment is essential for autonomous driving systems. Impressive results have been achieved by introducing maps into perception, prediction, and motion planning. Various map-based methods have also been proposed for ego-lane detection.
In one approach, the curvature of the road is first obtained from the GPS position and the digital map, and then used to determine whether the vehicle is driving on a straight or a curved road. Different road regions use different lane detection modules: straight roads are fitted using linear models and curved roads using circular arcs.
To enhance the performance and robustness of the lane detection system, Möhler et al. proposed extracting the lane width and curvature of upcoming road segments from a digital map to adapt certain configuration parameters. In addition, a clothoid is used for model fitting.
Döbert et al. use a digital map as a guide for lane detection in two ways. One is to widen the map road during feature extraction and project it onto the image to form a search area; the other is to project the geometry of the digital map onto the image during tracking, defining a guide curve along which the measurements are resampled to estimate the new model. Again, the lane model is a clothoid curve.
As described in Section I, all mathematical representation models suffer from large parameter arbitrariness when features are missing. Methods that use maps still rely on mathematical representation models, so the problem remains. In this paper, we use the road shape in OSM data as the lane model and transform the fitting problem into a search-based optimization problem. The advantage is that the prior knowledge provided by the map is used effectively, and the missing-feature problem is addressed.
III Ego-lane Detection
In this section, our map-enhanced ego-lane detection framework will be described in detail. First, we describe the OSM data format and how to obtain the data needed for this paper. Next, we show the preprocessing step, which contains Region of Interest (ROI) selection and lane feature extraction. Finally, we explain how OSM data is used for ego-lane detection.
III-A OpenStreetMap Data
In 2004, the OpenStreetMap project was started with the goal of creating a free-to-use and editable map of the world. Unlike commercial maps from Google, Navteq, and Teleatlas, OSM is created by volunteers in various ways, for example by supplying GPS tracks from portable GPS devices, labeling objects such as buildings in aerial imagery, or providing local information. By the end of 2019, more than 6 million registered users had contributed to the project, and more than 7 billion GPS track points had been submitted. The primary reason we use OSM to assist ego-lane detection is that users can freely access and use it under the Open Database License.
OSM data can be accessed via its website (https://www.openstreetmap.org/) in XML format, and users can download the map of an area of interest by specifying a bounding box. Fig. 3(a) shows the raw OSM data of a sample in the KITTI Lane dataset. OSM data is structured using three basic entities: nodes, ways, and relations. Nodes are geometric elements that contain GPS coordinates and a list of available tags. Ways are linear-shaped or area-shaped geometric objects like roads, railways, rivers, etc. They are defined by reference to a list of ordered nodes. Relations form more complicated structures with members of nodes and ways.
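As a concrete illustration of this structure, the following Python sketch (not the paper's C++ implementation) parses a tiny hand-made OSM XML snippet into a node table and way polylines; the coordinates and tags below are invented for the example.

```python
import xml.etree.ElementTree as ET

# A minimal, invented OSM XML snippet: two nodes and one way referencing them.
OSM_XML = """<osm version="0.6">
  <node id="1" lat="49.0110" lon="8.4230"/>
  <node id="2" lat="49.0115" lon="8.4238"/>
  <way id="10">
    <nd ref="1"/>
    <nd ref="2"/>
    <tag k="highway" v="residential"/>
  </way>
</osm>"""

def parse_osm(xml_text):
    """Parse OSM XML into a node table and a list of ways.

    Each way is returned as an ordered list of (lat, lon) pairs,
    resolved through its node references, together with its tags."""
    root = ET.fromstring(xml_text)
    nodes = {n.get("id"): (float(n.get("lat")), float(n.get("lon")))
             for n in root.iter("node")}
    ways = []
    for way in root.iter("way"):
        coords = [nodes[nd.get("ref")] for nd in way.iter("nd")]
        tags = {t.get("k"): t.get("v") for t in way.iter("tag")}
        ways.append({"coords": coords, "tags": tags})
    return nodes, ways

nodes, ways = parse_osm(OSM_XML)
```

Filtering the ways whose `highway` tag matches the currently traveled road would then yield the polyline used as the lane model.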
The OSM data is in the world coordinate system, but our ego-lane detection algorithm operates in the vehicle coordinate system. Therefore, the OSM data is first transformed into the vehicle coordinate system. Fig. 3(b) shows the result of this coordinate transformation. Note that data beyond the image view is clipped.
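The transform itself is a standard 2D rigid-body change of frame. A minimal sketch, assuming the vehicle's planar pose (position and yaw) in the world frame is available from the localization system:

```python
import math

def world_to_vehicle(points, vehicle_xy, vehicle_yaw):
    """Transform 2D world-frame points into the vehicle frame.

    vehicle_xy is the vehicle position in the world frame and
    vehicle_yaw its heading; the inverse rigid transform (rotate by
    -yaw after translating) is applied to each point."""
    c, s = math.cos(-vehicle_yaw), math.sin(-vehicle_yaw)
    out = []
    for x, y in points:
        dx, dy = x - vehicle_xy[0], y - vehicle_xy[1]
        out.append((c * dx - s * dy, s * dx + c * dy))
    return out
```

With the vehicle x-axis pointing forward, a world point directly ahead of the vehicle maps to a positive x in the vehicle frame.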
OSM data provides rich geometric information. However, for our purposes, the most useful information is the road the ego-car is traveling on, so the other geometric information is also clipped. The OSM data containing only the currently traveled road, after transformation and clipping, is shown in Fig. 3(c); it will be used as the lane model later.
III-B Preprocessing
Before using the OSM road shape for ego-lane detection, lane line features and road curb features need to be extracted. To improve the speed and accuracy of the algorithm, feature extraction is generally performed after ROI selection. Therefore, we consider both ROI selection and lane feature extraction as preprocessing in this section.
III-B1 ROI Selection
Among all the tasks in ego-lane detection, ROI selection is usually the first step in most previous studies. The main reason for focusing on ROI selection is to increase computational efficiency and detection accuracy. In this paper, we take the drivable area as the ROI. It contains all the lane markings and road curbs needed for feature extraction, while trees, buildings, and other objects outside the road can be ignored. Therefore, ROI selection can be recast as road detection.
The camera is a light-sensitive sensor that is easily affected by illumination and shadows. Although deep learning methods have greatly improved image processing performance in recent years, their computational cost makes them unsuitable for the preprocessing step. Unlike the camera, a 3D LiDAR is unaffected by illumination and provides accurate geometric information about the environment. Therefore, we use 3D LiDAR for ROI selection.
To meet the real-time requirement, we project the 3D point cloud onto a 2D range image, which compresses the data while retaining neighborhood information. The number of rows of the range image is defined by the number of laser beams of the 3D LiDAR; the KITTI dataset uses a Velodyne HDL-64E, so the number of rows is 64. The number of columns is determined by the horizontal resolution of the 3D LiDAR; we only use the field of view that coincides with the camera, so the number of columns is 500. In summary, the size of the range image is 64 × 500, and an example of a range image can be seen in Fig. 4(a).
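The projection of one point can be sketched as follows. This is an illustrative Python sketch, not the paper's implementation; the vertical field-of-view bounds (+2.0° to −24.8°) and the 90° horizontal window are assumptions roughly matching the HDL-64E and a forward camera.

```python
import math

def point_to_range_image_cell(x, y, z, n_rows=64, n_cols=500,
                              fov_up=2.0, fov_down=-24.8, h_fov=90.0):
    """Map one LiDAR point (meters) to a (row, col) cell and its range.

    Rows index the vertical (laser-beam) angle from top to bottom,
    columns the horizontal angle from left to right. The angular
    bounds are assumed values, not taken from the paper."""
    r = math.sqrt(x * x + y * y + z * z)
    pitch = math.degrees(math.asin(z / r))   # vertical angle of the point
    yaw = math.degrees(math.atan2(y, x))     # horizontal angle of the point
    row = int((fov_up - pitch) / (fov_up - fov_down) * (n_rows - 1))
    col = int((h_fov / 2 - yaw) / h_fov * (n_cols - 1))
    return row, col, r
```

Each cell of the resulting 64 × 500 image stores the range r of the point that falls into it.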
Based on the assumption that the road is flat and continuous, we perform road detection on the range image using the region growing method. As the vehicle travels in the forward direction, the road is always located in front of the vehicle. Therefore, the seed points are selected as points in front of the vehicle, located at the bottom center of the range image. The similarity between pixels is defined by the horizontal slope feature and the vertical slope feature.
For each pixel, the horizontal slope feature is calculated based on neighborhood points in the same laser beam:

$$s_h = \left| \frac{\bar{z}_r - \bar{z}_l}{\sqrt{(\bar{x}_r - \bar{x}_l)^2 + (\bar{y}_r - \bar{y}_l)^2}} \right|$$

where $(x, y, z)$ is the position of the pixel in the 3D LiDAR coordinate system, and $(\bar{x}_l, \bar{y}_l, \bar{z}_l)$, $(\bar{x}_r, \bar{y}_r, \bar{z}_r)$ are the average values of the neighbors on either side. As shown in Fig. 5(a), the feature value on the ground is close to 0, while the feature value on the road curb is close to infinity, so the horizontal slope feature is used to detect the road curb. The features are then normalized using the logistic function, and the results are shown in Fig. 4(b).
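A minimal Python sketch of this feature (the paper's equation symbols were reconstructed here, and the exact logistic normalization is not specified in the text, so the 2/(1+e^{-s})−1 squashing below is an assumption):

```python
import math

def horizontal_slope(left_pts, right_pts):
    """Horizontal slope feature for one range-image pixel.

    left_pts / right_pts are lists of (x, y, z) neighbors on the same
    laser beam; the slope is the height change over the horizontal
    distance between the neighbor averages."""
    def mean(pts):
        n = len(pts)
        return tuple(sum(c) / n for c in zip(*pts))
    xl, yl, zl = mean(left_pts)
    xr, yr, zr = mean(right_pts)
    horiz = math.hypot(xr - xl, yr - yl)
    return abs(zr - zl) / max(horiz, 1e-9)

def logistic_normalize(s):
    """Squash a non-negative slope into [0, 1); assumed form of the
    paper's logistic normalization."""
    return 2.0 / (1.0 + math.exp(-s)) - 1.0
```

On flat ground the slope is 0 and the normalized value stays near 0; at a curb the height jump makes the slope (and thus the normalized feature) large.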
For each pixel, the vertical slope feature is calculated based on the points on two adjacent laser beams in the same ray direction:

$$s_v = \left| \frac{z_{i+1} - z_i}{\sqrt{(x_{i+1} - x_i)^2 + (y_{i+1} - y_i)^2}} \right|$$

where $(x_i, y_i, z_i)$ is a point on the $i$-th laser beam and $(x_{i+1}, y_{i+1}, z_{i+1})$ is a point on the $(i+1)$-th laser beam. As shown in Fig. 5(b), the feature value on the ground is close to 0, while the feature value on an obstacle is close to infinity, so the vertical slope feature is used to detect obstacles. The features are then normalized using the logistic function, and the results are shown in Fig. 4(c).
After getting the two slope features, the weighted sum is finally calculated:

$$S = w_h \hat{s}_h + w_v \hat{s}_v$$

where $w_h$ and $w_v$ are the coefficients of the horizontal slope feature $\hat{s}_h$ and the vertical slope feature $\hat{s}_v$, respectively. Fig. 4(d) shows the weighted-sum feature map, in which obstacles and road curbs are both detected. We then apply horizontal and vertical region growing on this feature map to obtain the road area; the results are shown in Fig. 4(e). Finally, we project the road points onto the perspective image and use Delaunay triangulation to upsample the sparse point cloud, giving the ROI selection result shown in Fig. 4(f).
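The growing step can be sketched as a breadth-first flood fill over the feature map. This is a simplification of the paper's horizontal/vertical region growing (plain 4-connected BFS), and the threshold of 0.3 is an assumed value for illustration:

```python
from collections import deque

def region_grow(feature_map, seeds, threshold=0.3):
    """Grow the road region over a weighted-sum slope feature map.

    Starting from seed pixels in front of the vehicle (bottom center
    of the range image), 4-connected neighbors are added while their
    feature value stays below the threshold (low slope = flat road)."""
    rows, cols = len(feature_map), len(feature_map[0])
    road = set()
    queue = deque(s for s in seeds if feature_map[s[0]][s[1]] < threshold)
    road.update(queue)
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in road
                    and feature_map[nr][nc] < threshold):
                road.add((nr, nc))
                queue.append((nr, nc))
    return road
```

High-slope cells (curbs, obstacles) act as barriers, so the grown region stops at the road boundary.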
III-B2 Lane Feature Extraction
As described in Section I, lane features in the KITTI Lane dataset are mainly composed of two parts: lane line features and road curb features. In ROI selection, the horizontal slope feature detects road curbs well, so we directly use the ROI selection results as road curb features.
Lane line feature extraction aims to extract low-level features from images to support ego-lane detection, such as color, texture, and edges. Among them, edges are the most common feature used in ego-lane detection for structured roads. An edge is mathematically defined by the gradient of the intensity function, so we compute the gradient as a convolution:

$$G = I * K$$

where $I$ is the image, $K$ is the convolution kernel, and $G$ is the calculated gradient.
In real driving scenarios, the lane line may not be parallel to the ego-car, so we use a convolution kernel with a height of 1, which detects inclined lane lines more stably. Unlike the perspective image, where the lane line width decreases with distance, we perform feature extraction on the bird's-eye-view image, so the lane line width is constant and easy to detect. We found that a lane line generally spans 3 pixels in the bird's-eye view, so we use a kernel with a width of 9. Since there is a sharp contrast between the road surface and painted lane lines, the 3 elements in the middle of the kernel are set to 2 and the others to -1. In this way, when there is no lane line, the intensity values between pixels are similar and the gradient is 0; when there is a lane line, the intensity under the three middle elements is high while that under the sides is low, so the gradient is large.
Therefore, when the gradient is greater than a threshold, the pixel at that position is marked as lane line. An example of the lane line feature extraction result can be seen in Fig. 6(c). Note that lane line feature extraction is performed on a gray-scale image (Fig. 6(a)), and pixels outside the ROI are not considered (Fig. 6(b)).
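A 1-D sketch of this kernel response on a single bird's-eye-view row (pure Python for illustration; the paper's implementation uses OpenCV, and the -1 side weights are inferred from the zero-response property described above):

```python
def lane_line_response(row, ksize=9, core=3):
    """1-D lane-line gradient response along one image row.

    The kernel has `core` center weights of 2 and the remaining
    weights of -1, so its weights sum to zero: a uniform road surface
    yields a response of 0, while a bright ~3-pixel lane line under
    the center yields a large positive value."""
    half = ksize // 2
    kernel = [-1] * ksize
    for i in range(half - core // 2, half + core // 2 + 1):
        kernel[i] = 2
    out = []
    for c in range(half, len(row) - half):
        out.append(sum(kernel[k] * row[c - half + k] for k in range(ksize)))
    return out
```

Thresholding this response (the paper uses a gradient threshold of 200) then marks the lane-line pixels.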
III-C Ego-lane Detection
The main goal of this stage is to extract a compact high-level representation of the lane that can be used for decision making. In most papers, mathematical representation models such as straight lines, parabolas, and splines serve as this representation. To fit lane features to these models, the Least Squares Method (LSM) and Random Sample Consensus (RANSAC) are widely used. Since mathematical representation models exhibit large randomness in the fitted parameters when features are missing, we exploit OSM data to enhance ego-lane detection.
As mentioned in the previous section, OSM data is provided by volunteers, so it is coarse and rife with errors; we call this the OSM data error. At the same time, when projecting the OSM data onto the image, the approximate vehicle pose estimate causes errors in the relative position of the OSM data with respect to the vehicle; we call this the vehicle positioning error.
Since we perform ego-lane detection on the 2D image plane, these errors can be modeled by a rotation parameter $\Delta\theta$ and translation parameters $\Delta x$ and $\Delta y$ (the x-axis points in the vehicle's forward direction, while the y-axis is orthogonal to the x-axis and points to the left of the vehicle). In real urban scenes, the radius of curvature of the road is relatively large, so the longitudinal translation $\Delta x$ can be ignored. Therefore, we only need to consider the parameters $\Delta y$ and $\Delta\theta$, which represent the lateral offset and the heading offset, respectively. Note that we perform lane line detection and road curb detection simultaneously, so the lateral offset consists of two parts: the lane line lateral offset $\Delta y_l$ and the road curb lateral offset $\Delta y_c$.
To estimate these three parameters, we minimize the distance from the detected lane features to the OSM data. Since the OSM road shape consists of a series of points and their connections, the distance from a feature point to the OSM data is equal to the distance from the feature point to its nearest connection:

$$d_i = \frac{\left| (q_{j+1} - q_j) \times (p_i - q_j) \right|}{\left\| q_{j+1} - q_j \right\|}$$

where $p_i$ is the $i$-th feature point, and $q_j$ and $q_{j+1}$ are the two adjacent OSM points closest to the feature point. Therefore, the optimization function is:

$$\min \; \frac{1}{N} \sum_{i=1}^{N} d_i \quad \text{s.t.} \; |\Delta y| \le \Delta y_{\max}, \; |\Delta\theta| \le \Delta\theta_{\max}$$

where $N$ is the number of feature points, $\Delta y_{\max}$ is the maximum value of the lateral offset, and $\Delta\theta_{\max}$ is the maximum value of the heading offset.
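The cost being minimized can be sketched directly from these definitions (an illustrative Python sketch; the clamped point-to-segment distance below treats the OSM polyline as line segments rather than infinite lines, which is an assumption):

```python
import math

def point_to_segment(p, a, b):
    """Distance from feature point p to the OSM segment a-b (all 2-D)."""
    ax, ay = b[0] - a[0], b[1] - a[1]
    px, py = p[0] - a[0], p[1] - a[1]
    seg_len2 = ax * ax + ay * ay
    # Projection parameter, clamped to the segment endpoints.
    t = max(0.0, min(1.0, (px * ax + py * ay) / seg_len2))
    cx, cy = a[0] + t * ax, a[1] + t * ay
    return math.hypot(p[0] - cx, p[1] - cy)

def mean_distance(features, osm_pts):
    """Mean distance from each feature point to its nearest OSM segment."""
    total = 0.0
    for p in features:
        total += min(point_to_segment(p, osm_pts[j], osm_pts[j + 1])
                     for j in range(len(osm_pts) - 1))
    return total / len(features)
```

Shifting and rotating the OSM polyline by candidate offsets and re-evaluating this mean distance gives the objective of the search.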
The above optimization problem is difficult to solve analytically because of the search for the OSM segment closest to each feature point. Therefore, we rely on a search-based algorithm to find an approximate optimal solution. The basic idea is to iterate through all possible values of the three parameters, compute the corresponding distances, and select the parameters that achieve the smallest distance. However, the time complexity of jointly looping through the three parameters is $O(n^3)$, where $n$ is the number of candidate values per parameter, which is very time-consuming and cannot meet the real-time requirement. Therefore, we optimize the three parameters separately, reducing the time complexity to $O(n)$.
The proposed search-based optimization algorithm is presented in Algorithm 1. The inputs are the feature points and the OSM points; the output is the optimized parameter. As we optimize the three parameters separately, the optimized parameter represents the heading offset, the lane line lateral offset, or the road curb lateral offset in each optimization step. In line 3, all possible values are traversed given the maximum value of the parameter. From line 4 to line 13, the distance from the feature points to the OSM data is calculated. The optimal parameter that achieves the smallest distance is selected in lines 14 to 17.
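Each 1-D step of this procedure can be sketched as a simple grid search (illustrative Python, not the paper's C++ code; `cost` stands for the mean feature-to-OSM distance evaluated under the candidate offset):

```python
def grid_search(cost, max_val, step):
    """1-D search over one offset parameter.

    `cost` maps a candidate offset to the mean feature-to-OSM
    distance; all values in [-max_val, max_val] at the given step
    are tried, and the offset with the smallest cost is returned."""
    best_v, best_c = 0.0, float("inf")
    n = int(round(2 * max_val / step))
    for i in range(n + 1):
        v = -max_val + i * step
        c = cost(v)
        if c < best_c:
            best_v, best_c = v, c
    return best_v, best_c
```

With the paper's settings (e.g. a maximum lateral error of 100 pixels at a 5-pixel step), each parameter requires only 41 cost evaluations, which is what keeps the method real-time on a CPU.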
After obtaining the optimization results of the left and right boundaries, we use the vertical slope feature (introduced in the ROI selection subsection) to detect all obstacles between the two boundaries and take the point closest to the origin as the upper boundary. The ego-lane detection result is then the area enclosed by these three boundaries.
Fig. 7 shows the ego-lane detection results for the scenarios corresponding to Fig. 2. In (a), the significant lateral error is completely eliminated, and the OSM road shape coincides well with the lane boundaries. In (b), the significant heading error is eliminated, apart from slight residual differences between the OSM road shape and the lane boundary shape.
TABLE I: Results on the KITTI Lane benchmark.

| Method | MaxF | AP | PRE | REC | FPR | FNR | Runtime |
| --- | --- | --- | --- | --- | --- | --- | --- |
| SCRFFPFHGSP | 57.22 % | 39.34 % | 41.78 % | 90.79 % | 22.28 % | 9.21 % | 5 s / CPU |
| SPlane + BL | 69.63 % | 73.78 % | 80.01 % | 61.63 % | 2.71 % | 38.37 % | 2 s / CPU |
| SPRAY | 83.42 % | 86.84 % | 84.76 % | 82.12 % | 2.60 % | 17.88 % | 0.045 s / GPU |
| Up-Conv-Poly | 89.88 % | 87.52 % | 92.01 % | 87.84 % | 1.34 % | 12.16 % | 0.08 s / GPU |
| RBNet | 90.54 % | 82.03 % | 94.92 % | 86.56 % | 0.82 % | 13.44 % | 0.18 s / GPU |
| MANLDF | 91.37 % | 91.40 % | 93.08 % | 89.71 % | 1.17 % | 10.29 % | 0.05 s / GPU |
| RoadNet3 | 91.47 % | 91.01 % | 91.78 % | 91.17 % | 1.44 % | 8.83 % | 0.3 s / GPU |
| NVLaneNet | 91.86 % | 91.42 % | 90.89 % | 92.85 % | 1.64 % | 7.15 % | 0.08 s / GPU |
| Ours | 93.56 % | 88.58 % | 95.94 % | 91.30 % | 0.68 % | 8.70 % | 0.05 s / CPU |
IV Experimental Evaluation
To evaluate the accuracy and real-time performance of our algorithm, we test it on the public KITTI Lane benchmark. All algorithms are implemented in C++ with PCL (Point Cloud Library) and OpenCV (Open Source Computer Vision Library), running on a laptop with an Intel i5-8265U 1.66 GHz CPU and 8 GB main memory.
IV-1 KITTI Lane Benchmark
The KITTI Lane benchmark is a widely used benchmark for ego-lane detection. It contains 95 training samples and 96 testing samples collected in various urban scenes with marked lanes. The evaluation metrics include maximum F1-measure (MaxF), average precision (AP), precision (PRE), recall (REC), false positive rate (FPR), and false negative rate (FNR), where MaxF is the primary metric for comparison between different methods.
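MaxF is the largest F1 score attainable over the precision/recall operating points. A minimal sketch (illustrative only; the official KITTI evaluation server computes this over the full pixel-wise precision/recall curve):

```python
def max_f_measure(pr_curve):
    """Maximum F1-measure over a precision/recall curve.

    pr_curve is a list of (precision, recall) pairs, e.g. one per
    confidence threshold; MaxF is the largest harmonic mean of the
    two over all operating points."""
    best = 0.0
    for p, r in pr_curve:
        if p + r > 0:
            best = max(best, 2 * p * r / (p + r))
    return best
```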
IV-2 Experiment Settings
For ROI selection, the weighting coefficient of the horizontal slope feature is 0.5, and the weighting coefficient of the vertical slope feature is 0.5.
For lane feature extraction, the gradient threshold is 200.
For ego-lane detection, the maximum lateral error is 100 pixels and the step size is 5 pixels; the maximum heading error is 0.1 radians and the step size is 0.005 radians.
IV-A Performance Evaluation
We tested our method on the KITTI Lane benchmark and compared it with other state-of-the-art methods, including NVLaneNet, RoadNet3, MANLDF, RBNet, Up-Conv-Poly, SPRAY, SPlane + BL, DH-OCR, and SCRFFPFHGSP. All results are evaluated on the KITTI evaluation server222http://www.cvlibs.net/datasets/kitti/eval_road.php, and the performance of the algorithms is shown in Table I.
The results show that the proposed method achieves 93.56% in the MaxF score, which is 1.70% higher than the previous state-of-the-art method. The improvement in MaxF is mainly due to our PRE reaching 95.94%, and this is precisely because we use the OSM road shape as the lane model, which detects the lane boundaries accurately and thus yields higher precision.
IV-B Robustness to Missing Features
To validate the robustness of the proposed algorithm to the missing-feature problem, we perform model comparison experiments on the training dataset. The compared mathematical representation models include the straight line, circular arc, quadratic polynomial, and cubic spline. To make the experiment more convincing, we down-sampled the features with sampling ratios from 0% to 100%. Note that we use the ground truth as lane features, which avoids interference from noise and ensures the fairness of the experiment. MaxF is used as the evaluation metric, and the experimental results are shown in Fig. 8.
It can be seen that the fitting results of all mathematical representation models degrade as the number of features decreases. In contrast, since the OSM road shape is used as the lane model, our method is very robust to missing features: even as the number of features decreases, performance remains essentially unchanged. Moreover, in some extreme scenarios, such as no visible features (Fig. 1(c)), we directly use the OSM road shape as the lane boundary and MaxF can still reach 88.23%, while the mathematical representation models cannot handle this scenario at all.
Since our algorithm is intended for autonomous driving systems, a shorter runtime allows the system to obtain information about the surrounding environment earlier, helping ensure safety. As shown in Fig. 9, the runtime of our algorithm on both the training and testing datasets averages around 50 ms. This is twice as fast as the rotation rate of the 3D LiDAR, so our algorithm can be used safely on autonomous driving systems.
IV-D Qualitative Results
Some detection results of our method in the perspective view and the bird's-eye view of the image are shown in Fig. 10 and Fig. 11, respectively. It can be seen that our method is very robust to missing features caused by lane marking wear, lighting changes, no visible features, etc.
V Conclusion
In this study, we employ the OSM road shape as the lane model to enhance our ego-lane detection algorithm, making it robust to challenging missing-feature scenarios. At the same time, to eliminate the position error between the OSM data and the real lane, a search-based optimization algorithm is proposed to improve the accuracy of the algorithm. We validate the algorithm on the well-known KITTI Lane benchmark, achieving state-of-the-art accuracy and real-time performance. In future work, to obtain more accurate ego-lane detection results, the OSM road shape error should also be eliminated.
-  D. Neven, B. De Brabandere, S. Georgoulis, M. Proesmans, and L. Van Gool, “Towards end-to-end lane detection: an instance segmentation approach,” in 2018 IEEE intelligent vehicles symposium (IV). IEEE, 2018, pp. 286–291.
X. Pan, J. Shi, P. Luo, X. Wang, and X. Tang, “Spatial as deep: Spatial cnn for traffic scene understanding,” inThirty-Second AAAI Conference on Artificial Intelligence, 2018.
-  S. P. Narote, P. N. Bhujbal, A. S. Narote, and D. M. Dhane, “A review of recent advances in lane detection and departure warning system,” Pattern Recognition, vol. 73, pp. 216–234, 2018.
A. Borkar, M. Hayes, and M. T. Smith, “Robust lane detection and tracking with ransac and kalman filter,” in2009 16th IEEE International Conference on Image Processing (ICIP). IEEE, 2009, pp. 3261–3264.
-  H. Kong, J.-Y. Audibert, and J. Ponce, “Vanishing point detection for road detection,” in 2009 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2009, pp. 96–103.
-  A. Borkar, M. Hayes, M. T. Smith, and S. Pankanti, “A layered approach to robust lane detection at night,” in 2009 IEEE Workshop on Computational Intelligence in Vehicles and Vehicular Systems. IEEE, 2009, pp. 51–57.
-  T. Al Smadi, “Real-time lane detection for driver assistance system,” Circuits and Systems, vol. 2014, 2014.
-  M. Nieto, L. Salgado, F. Jaureguizar, and J. Arróspide, “Robust multiple lane road modeling based on perspective analysis,” in 2008 15th IEEE International Conference on Image Processing. IEEE, 2008, pp. 2396–2399.
-  F. Samadzadegan, A. Sarafraz, and M. Tabibi, “Automatic lane detection in image sequences for vision-based navigation purposes,” ISPRS Image Engineering and Vision Metrology, 2006.
-  R. Labayrade, J. Douret, J. Laneurit, and R. Chapuis, “A reliable and robust lane detection system based on the parallel use of three algorithms for driving safety assistance,” IEICE transactions on information and systems, vol. 89, no. 7, pp. 2092–2100, 2006.
-  B. De Brabandere, W. Van Gansbeke, D. Neven, M. Proesmans, and L. Van Gool, “End-to-end lane detection through differentiable least-squares fitting,” arXiv preprint arXiv:1902.00293, 2019.
-  U. Meis, W. Klein, and C. Wiedemann, “A new method for robust far-distance road course estimation in advanced driver assistance systems,” in 13th International IEEE Conference on Intelligent Transportation Systems. IEEE, 2010, pp. 1357–1362.
-  Z. Kim, “Robust lane detection and tracking in challenging scenarios,” IEEE Transactions on Intelligent Transportation Systems, vol. 9, no. 1, pp. 16–26, 2008.
-  Y. Wang, D. Shen, and E. K. Teoh, “Lane detection using catmull-rom spline,” in IEEE International Conference on Intelligent Vehicles, vol. 1, 1998, pp. 51–57.
-  J. Deng, J. Kim, H. Sin, and Y. Han, “Fast lane detection based on the b-spline fitting,” Int. J. Res. Eng. Technol, vol. 2, no. 4, pp. 134–137, 2013.
-  B. Yang, M. Liang, and R. Urtasun, “Hdnet: Exploiting hd maps for 3d object detection,” in Conference on Robot Learning, 2018, pp. 146–155.
-  S. Casas, W. Luo, and R. Urtasun, “Intentnet: Learning to predict intention from raw sensor data,” in Conference on Robot Learning, 2018, pp. 947–956.
-  Y. F. Chen, S.-Y. Liu, M. Liu, J. Miller, and J. P. How, “Motion planning with diffusion maps,” in 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2016, pp. 1423–1430.
-  M. Haklay and P. Weber, “Openstreetmap: User-generated street maps,” IEEE Pervasive Computing, vol. 7, no. 4, pp. 12–18, 2008.
-  M. Hentschel and B. Wagner, “Autonomous robot navigation based on openstreetmap geodata,” in 13th International IEEE Conference on Intelligent Transportation Systems. IEEE, 2010, pp. 1645–1650.
-  Ð. Obradović, Z. Konjović, E. Pap, and I. J. Rudas, “Linear fuzzy space based road lane model and detection,” Knowledge-Based Systems, vol. 38, pp. 37–47, 2013.
-  A. López, J. Serrat, J. Saludes, C. Canero, F. Lumbreras, and T. Graf, “Ridgeness for detecting lane markings,” in Proceedings of the 2nd International Workshop on Intelligent Transportation Systems (WIT’05), 2005.
-  E. Adachi, H. Inayoshi, and T. Kurita, “Estimation of lane state from car-mounted camera using multiple-model particle filter based on voting result for one-dimensional parameter space.” in MVA, 2007, pp. 323–326.
-  R. Wang, Y. Xu, Y. Zhao et al., “A vision-based road edge detection algorithm,” in Intelligent Vehicle Symposium, 2002, vol. 1. IEEE, 2002, pp. 141–147.
-  S. Yenikaya, G. Yenikaya, and E. Düven, “Keeping the vehicle on the road: A survey on on-road lane detection systems,” ACM Computing Surveys (CSUR), vol. 46, no. 1, pp. 1–43, 2013.
-  H. Loose, U. Franke, and C. Stiller, “Kalman particle filter for lane recognition on rural roads,” in 2009 IEEE Intelligent Vehicles Symposium. IEEE, 2009, pp. 60–65.
-  C. Gackstatter, P. Heinemann, S. Thomas, and G. Klinker, “Stable road lane model based on clothoids,” in Advanced Microsystems for Automotive Applications 2010. Springer, 2010, pp. 133–143.
-  Y. Xing, C. Lv, L. Chen, H. Wang, H. Wang, D. Cao, E. Velenis, and F.-Y. Wang, “Advances in vision-based lane detection: algorithms, integration, assessment, and perspectives on acp-based parallel vision,” IEEE/CAA Journal of Automatica Sinica, vol. 5, no. 3, pp. 645–661, 2018.
-  K. Zhao, M. Meuter, C. Nunn, D. Müller, S. Müller-Schneiders, and J. Pauli, “A novel multi-lane detection and tracking system,” in 2012 IEEE Intelligent Vehicles Symposium. IEEE, 2012, pp. 1084–1089.
-  Q.-B. Truong and B.-R. Lee, “New lane detection algorithm for autonomous vehicles using computer vision,” in 2008 International Conference on Control, Automation and Systems. IEEE, 2008, pp. 1208–1213.
-  Q. Wen, Z. Yang, Y. Song, and P. Jia, “Road boundary detection in complex urban environment based on low-resolution vision,” in 11th Joint International Conference on Information Sciences. Atlantis Press, 2008.
-  Y. Wang, E. K. Teoh, and D. Shen, “Lane detection and tracking using b-snake,” Image and Vision Computing, vol. 22, no. 4, pp. 269–280, 2004.
-  X. Li, X. Fang, C. Wang, and W. Zhang, “Lane detection and tracking using a parallel-snake approach,” Journal of Intelligent & Robotic Systems, vol. 77, no. 3-4, pp. 597–609, 2015.
-  X. Shi, B. Kong, and F. Zheng, “A new lane detection method based on feature pattern,” in 2009 2nd International Congress on Image and Signal Processing. IEEE, 2009, pp. 1–5.
-  H.-Y. Cheng, B.-S. Jeng, P.-T. Tseng, and K.-C. Fan, “Lane detection with moving vehicles in the traffic scenes,” IEEE Transactions on Intelligent Transportation Systems, vol. 7, no. 4, pp. 571–582, 2006.
-  C. R. Jung and C. R. Kelber, “Lane following and lane departure using a linear-parabolic model,” Image and Vision Computing, vol. 23, no. 13, pp. 1192–1202, 2005.
-  R. Danescu, S. Nedevschi, and T.-B. To, “A stereovision-based lane detector for marked and non-marked urban roads,” in 2007 IEEE International Conference on Intelligent Computer Communication and Processing. IEEE, 2007, pp. 81–88.
-  Y. Jiang, F. Gao, and G. Xu, “Computer vision-based multiple-lane detection on straight road and in a curve,” in 2010 International Conference on Image Analysis and Signal Processing. IEEE, 2010, pp. 114–117.
-  N. Möhler, D. John, and M. Voigtländer, “Lane detection for a situation adaptive lane keeping support system, the safelane system,” in Advanced Microsystems for Automotive Applications 2006. Springer, 2006, pp. 485–500.
-  A. Döbert, A. Linarth, and E. Kollorz, “Map guided lane detection,” in Proceedings of the Embedded World Exhibition and Conference, 2009.
-  M. A. Brubaker, A. Geiger, and R. Urtasun, “Map-based probabilistic visual self-localization,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 4, pp. 652–665, 2015.
-  D.-T. Lee and B. J. Schachter, “Two algorithms for constructing a delaunay triangulation,” International Journal of Computer & Information Sciences, vol. 9, no. 3, pp. 219–242, 1980.
-  A. B. Hillel, R. Lerner, D. Levi, and G. Raz, “Recent progress in road and lane detection: a survey,” Machine Vision and Applications, vol. 25, no. 3, pp. 727–745, 2014.
-  B. Ma, S. Lakshmanan, and A. Hero, “A robust bayesian multisensor fusion algorithm for joint lane and pavement boundary detection,” in Proceedings 2001 International Conference on Image Processing (Cat. No. 01CH37205), vol. 1. IEEE, 2001, pp. 762–765.
-  A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, “Vision meets robotics: The kitti dataset,” International Journal of Robotics Research, vol. 32, no. 11, pp. 1231–1237, 2013.
-  Z. Chen and Z. Chen, “Rbnet: A deep neural network for unified road and road boundary detection,” in International Conference on Neural Information Processing. Springer, 2017, pp. 677–687.
-  G. Oliveira, W. Burgard, and T. Brox, “Efficient deep models for monocular road segmentation,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2016), 2016.
-  T. Kühnl, F. Kummert, and J. Fritsch, “Spatial ray features for real-time ego-lane extraction,” in 2012 15th International IEEE Conference on Intelligent Transportation Systems. IEEE, 2012, pp. 288–293.
-  N. Einecke and J. Eggert, “Block-matching stereo with relaxed fronto-parallel assumption,” in 2014 IEEE Intelligent Vehicles Symposium Proceedings. IEEE, 2014, pp. 700–705.
-  I. V. Gheorghe, “Semantic segmentation of terrain and road terrain for advanced driver assistance systems,” Ph.D. dissertation, Coventry University, 2015.