Road surface detection and differentiation considering surface damages

06/23/2020 ∙ by Thiago Rateke, et al. ∙ UFSC

A challenge still to be overcome in the field of visual perception for vehicle and robotic navigation on heavily damaged and unpaved roads is the task of reliable path and obstacle detection. The vast majority of research targets roads in good condition in developed countries, coping with few variations in road surface and even fewer cases of surface damage. In this paper we present an approach to road detection that considers variations in surface type, identifying paved and unpaved surfaces, and that also detects damage and other road surface information relevant to driving safety. We also present a new Ground Truth with image segmentation, used in our approach, which allowed us to evaluate our results. Our results show that passive vision can be used for these purposes, even with images captured by low-cost cameras.


1 Introduction

Visual perception for autonomous navigation has been very prominent in recent years, with many papers on both path detection (Rateke et al. (2019)) and obstacle detection (Rateke and von Wangenheim (2018)). Although there are excellent approaches to both tasks, the vast majority are based on images from developed countries in Europe or North America, with well-maintained roads, few examples of damaged roads, and no handling of variations in terrain type.

Variations in surface type, road damage, and other surface features (e.g.: speed-bumps) are all important for autonomous navigation systems, or even for an Advanced Driver-Assistance System (ADAS), because surface conditions directly affect how the vehicle should be driven, the comfort of its users, and the vehicle's conservation. For example, a pothole or a water-puddle can damage the vehicle and cause accidents. Detection of surface types and surface variations may also be useful to Road Infrastructure Departments for road maintenance purposes.

A survey from the Brazilian National Traffic Council (CNT (2018)) presents a road quality evaluation in which 37.0% of the roads were classified as “Regular”, 9.5% as “Bad” and 4.4% as “Poor”. To reach these results, the study considered road damage as well as the lack of pavement or asphalt. It is noteworthy that this study covers only roads under federal or state responsibility; municipal roads were not part of the survey. Another study (Cabral et al. (2018)), from East Timor, showed that 50% of that country's roads are unpaved.

Frisoni et al. (2014), a European Union (EU) study on road surface quality, notes that, compared to human driving behavior (e.g.: inattention, aggressiveness or drunkenness), lack of road maintenance is not the main cause of accidents, but it is still one of them: a pothole can damage a vehicle and cause loss of control. It is also directly related to driver attention, since a driver's reaction to a surface variation can cause them to hit other vehicles or obstacles, including pedestrians.

All these studies present road evaluations performed by human specialists, as a mapping task rather than a prediction, and they used many different sensors to assist the evaluation.

The most commonly used datasets for visual perception research consist of images depicting good quality, well-maintained roads with little variation in terrain type. CityScapes (https://www.cityscapes-dataset.com/) (Cordts et al. (2016)) is a dataset from Germany. Also from Germany, KITTI (http://www.cvlibs.net/datasets/kitti/raw_data.php) is a stereo image dataset (Geiger et al. (2013)). The CamVid dataset (http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/) (Brostow et al. (2009)) stems from Cambridge, England. Another dataset, DIPLODOC (http://tev.fbk.eu/databases/diplodoc-road-stereo-sequence) (Zanin et al. (2013)), is from Italy and not as commonly used as the previous three.

There are newer datasets that provide images of damaged roads, such as the RoadDamageDetector dataset (https://github.com/sekilab/RoadDamageDetector/) (Maeda et al. (2018)), from Japan, but it contains only asphalt images. From Brazil, there is the CaRINA dataset (http://www.lrm.icmc.usp.br/dataset) (Shinzato et al. (2016)), which presents an emerging-country scenario, depicting road damage and variations in surface type.

We also provide a dataset acquired in Brazil, the RTK dataset (http://www.lapix.ufsc.br/pesquisas/projeto-veiculo-autonomo/datasets/?lang=en) (Rateke et al. (2019)), created with low-cost cameras and including many surface type variations and road surface damage, even on unpaved roads.

Several approaches detect potholes on the road using LIDAR (Light Detection and Ranging) (Kang and Choi (2017) and Yu and Salari (2011)). LIDAR sensors, despite employing relatively safe laser sources, can still cause damage to the human eye (Commission (2001) and STANDARD (2005)). We understand that the pollution caused by LIDAR, which we call laser-smog, is not an issue yet, but in a future where smart or autonomous vehicles are a widespread reality, it could become a concern, mainly for pedestrians walking or waiting on sidewalks near vehicles at rush hour.

Our goal in this work is to perform road detection that differentiates surface variations, with concomitant surface damage detection. We also aim to show that passive vision (cameras) can be used to detect road damage. It is our understanding that, by dealing with problems common on roads in emerging countries, but which can also occur in developed countries and are of utmost importance to vehicle behavior, both for vehicle conservation and especially for safety, we are advancing the state of the art in path detection.

The remaining sections of this paper are organized as follows: Section 2 reviews related work dealing with road detection or road damage detection using passive vision. Section 3 briefly describes our dataset and the hardware and software setup we used. Our approach is presented in Section 4, followed by the results in Section 5. Finally, Section 6 concludes the paper with a discussion of our results and some topics for future work.

2 Related work

In a previous work, we performed a Systematic Literature Review (SLR) on road detection research employing passive computer vision (Rateke et al. (2019)). In this SLR, although we found several papers, we could not identify any work dealing with road damage detection or with road features other than painted road markings (Ardiyanto and Adji (2017), Jia et al. (2017), Yuan et al. (2015), Zu et al. (2015) and Shi et al. (2016)). Still within this SLR, only 32.1% of the works performed detection on different surface types (e.g.: Li et al. (2016), Wang et al. (2016), Nguyen et al. (2017) and Valente and Stanciulescu (2017)). Only 4 showed path detection working during the transition between surface types (Guo et al. (2011), Guo et al. (2012), Ososinski and Labrosse (2012) and Cristóforis et al. (2016)), and only in situations where the transitions were not too different, such as asphalt to paved. These approaches did not differentiate the surface type, i.e., regardless of whether it was asphalt, unpaved or paved, everything was considered road. Also, none of the approaches aimed at detecting other road features, even when they targeted path detection on unpaved roads (e.g.: Wang et al. (2015) and Xiao et al. (2015)).

In another SLR, on road obstacle detection (Rateke and von Wangenheim (2018)), we identified a few papers dealing specifically with potholes, using stereo vision techniques in which potholes were defined as negative obstacles (Herghelegiu et al. (2017), Karunasekera et al. (2017)). However, these works deal only with pothole detection, depend on depth information of the scene, and extract no other information or features from the road.

Both SLRs show the recent increase in the use of Convolutional Neural Network (CNN) applications for visual perception in navigation. A recent work (Maeda et al. (2018)), which employs deep neural networks, detects and classifies asphalt damage with bounding-box markings, but does not deal with surface type variations or with damage on other surface types.

The authors of a paper describing path detection as guidance for blind people state that they deal with potholes and water-puddles, but do not present these results (Sövény et al. (2015)). Two other papers (Rankin and Matthies (2008) and Rankin et al. (2010)) deal with detecting larger water-puddles in off-road scenarios for navigation purposes, using stereo vision methods. They do not deal with other road features, nor even with road detection itself.

Other works perform pothole detection but differ from our scope because they use images taken very close to and vertically above the pothole, serving as a mapping task rather than a prediction (e.g.: Eriksson et al. (2008), Koch and Brilakis (2011), Huidrom et al. (2013), Tedeschi and Benedetto (2017) and Banharnsakun (2017)). Our goal is to identify surface features before the vehicle passes over them. These approaches also did not deal with path detection or other path features.

3 Material and methods

In this section we present the dataset used in our experiments, the RTK dataset (Rateke et al. (2019)), and the corresponding Ground Truth (GT) that we created to train and validate our experiments, in Subsection 3.1. In Subsection 3.2 we list the hardware and software configurations relevant to our experiment.

3.1 RTK dataset

The RTK dataset consists of 77,547 images captured by a low-cost, low-resolution camera, which increases the challenge of our application. This dataset was originally used for a surface type and quality classification application (Rateke et al. (2019)). The images were captured during the daytime and contain lighting variations and plenty of shadows on the road. The dataset includes asphalt roads, unpaved roads, different paved roads, and transitions between surface types. It also contains variations in the quality of these roads, with potholes, water-puddles, speed-bumps, patches, etc.

Based on the RTK dataset, we created a segmented GT for our experiments. We took 701 images covering different situations for label annotation (Figure 1 and Figure 2). In our GT we defined the following classes:

  • Background, everything unrelated to the road surface;

  • Asphalt, roads with an asphalt surface;

  • Paved, different pavements (e.g.: cobblestone);

  • Unpaved, for unpaved roads;

  • Markings, for the road markings;

  • Speed-Bump, for the speed-bumps on the road;

  • Cats-Eye, for the cats-eyes found on the road, both on the side and in the center of the path;

  • Storm-Drain, usually at the side edges of the road;

  • Patch, for the various patches found on asphalt roads;

  • Water-Puddle, also used for muddy regions;

  • Pothole, for different types and sizes of potholes, whether on asphalt, paved or unpaved roads;

  • Cracks, used for different road damages, such as ruptures.

Figure 1: Sample images from the RTK dataset and our segmented GT
Figure 2: Sample images from the RTK dataset and our segmented GT

3.2 Our Setup

We ran our experiments on Google Colaboratory (Google Colab), a cloud service based on Jupyter notebooks, an interactive web-based environment for document creation. Google Colab also offers free GPUs; in our experiments we used a Tesla K80 GPU with 12GB of memory and a Tesla P100-PCIE GPU with 16GB of memory. We also made use of the fast.ai library (Howard and others (2018)), an open-source deep learning library based on PyTorch.
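A minimal check of this environment from inside a Colab notebook might look like the sketch below; the printed GPU name varies with whichever device the session is assigned:

```python
import torch
import fastai

# Report the library version and whichever GPU Colab assigned to the session.
print('fastai', fastai.__version__)
print('CUDA available:', torch.cuda.is_available())
if torch.cuda.is_available():
    # e.g. 'Tesla K80' or 'Tesla P100-PCIE-16GB'
    print('GPU:', torch.cuda.get_device_name(0))
```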

4 Our approach

Our approach uses deep learning for road surface semantic segmentation, considering variations in the road surface and working from low-resolution images. That is, we label each pixel of the image with the corresponding class as defined in our GT. To do this, we need to train a Convolutional Neural Network (CNN) and find the configuration that yields reasonable accuracy for all classes.

In our experiments we used U-NET (Ronneberger et al. (2015)) for semantic segmentation, a convolutional network architecture designed for fast semantic segmentation of medical images but successfully applied in many other domains. The architecture has two main parts. The encoder extracts features from the image with a traditional CNN structure (convolution layers and max-pooling layers), progressively reducing the input resolution. The decoder, symmetrical to the encoder, upsamples back to the original size. U-NET accepts input images of any size.

The fast.ai library offers different pre-trained models; we experimented with resnet34 and resnet50, resnet34 being faster to train and requiring less memory. ResNets are residual CNN models with skip-connections that allow sections to be skipped: each residual block has two connections, one skipping over a series of convolutions and functions, the other going through the layers without skipping (He et al. (2016)). This helps preserve important features from the early layers. The ResNet is used as the U-NET encoder, and the fast.ai library automatically builds the symmetrical decoder part of the U-NET.
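To make this concrete, here is a minimal sketch of how such a U-NET with a ResNet-34 encoder can be assembled with the fastai v1 API (the library version contemporary with this work); the folder layout, mask naming convention, image size, and batch size are our assumptions, not values stated in the paper:

```python
from fastai.vision import *

path_img, path_lbl = Path('rtk/images'), Path('rtk/labels')  # hypothetical layout
codes = ['Background', 'Asphalt', 'Paved', 'Unpaved', 'Markings', 'Speed-Bump',
         'Cats-Eye', 'Storm-Drain', 'Patch', 'Water-Puddle', 'Pothole', 'Cracks']

# Pair each image with its mask; masks are assumed to share the image file stem.
get_y_fn = lambda x: path_lbl / f'{x.stem}.png'

data = (SegmentationItemList.from_folder(path_img)
        .split_by_rand_pct(0.2)                    # hold out 20% for validation
        .label_from_func(get_y_fn, classes=codes)
        .transform(get_transforms(), size=(288, 352), tfm_y=True)  # tfm_y: same transforms on masks
        .databunch(bs=8)
        .normalize(imagenet_stats))

# U-NET with an ImageNet pre-trained ResNet-34 encoder; fastai builds the
# symmetrical decoder automatically.
learn = unet_learner(data, models.resnet34)
learn.fit_one_cycle(25)
```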

As data augmentation we used only random horizontal flips and perspective warping, which are defaults in the fast.ai library; we consider these the only transformations necessary and meaningful in a road detection scenario. Data augmentation effectively increases the number of training images, because besides the original images the network also sees their transformed versions (flipped and warped). The fast.ai library also offers an option to apply the same data augmentation to the original images and to their respective mask images.
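A sketch of restricting fastai's default transforms to horizontal flips and perspective warping only; the parameter values are illustrative assumptions:

```python
from fastai.vision import get_transforms

# Keep only horizontal flips and perspective warping; disable rotation,
# zoom, and lighting changes.
tfms = get_transforms(do_flip=True, flip_vert=False,
                      max_rotate=0., max_zoom=1., max_lighting=0., max_warp=0.2)

# tfms would replace get_transforms() in the .transform(...) call of the
# earlier sketch; tfm_y=True there applies the identical transform to each mask.
```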

In our early experiments we realized that the network could reasonably identify asphalt, paved, unpaved, and background pixels, but the other classes had very low accuracy. Due to this disparity, we counted the segmented pixels of each class in our GT and confirmed what was already visually noticeable: there is a great imbalance between the classes. The ratio of each class's pixels to the total pixels in the GT is as follows:

  • Background = 65.86%;

  • Asphalt = 12.90%;

  • Paved = 10.50%;

  • Unpaved = 9.22%;

  • Marking = 0.78%;

  • Speed-Bump = 0.06%;

  • Cats-Eye = 0.02%;

  • Storm-Drain = 0.02%;

  • Patches = 0.22%;

  • Water-Puddle = 0.03%;

  • Pothole = 0.06%;

  • Cracks = 0.33%.

This imbalance cannot be fixed simply by collecting more data: if we collect more images with potholes, for example, we also collect even more background pixels and road surface pixels, maintaining the disparity. We therefore added weights to each class to simulate, during the training step, that all classes had a similar proportion of pixels. With the weights, the accuracy of the smaller classes improved considerably, but at the cost of losing accuracy in the asphalt, paved and unpaved classes.
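One plausible way to realize such weighting, continuing the earlier fastai v1 sketch, is an inverse-frequency weighted cross-entropy loss. The pixel ratios come from the list above, but the normalization scheme is our assumption; the paper does not state its exact formula:

```python
import torch
from fastai.layers import CrossEntropyFlat

# GT pixel ratio of each class (from the list above), in class order.
ratios = torch.tensor([65.86, 12.90, 10.50, 9.22, 0.78, 0.06,
                       0.02, 0.02, 0.22, 0.03, 0.06, 0.33]) / 100.

# Inverse-frequency weights, rescaled to average 1 (one plausible scheme).
weights = 1. / ratios
weights = weights / weights.mean()

# Weighted pixel-wise cross-entropy: axis=1 is the class dimension of the
# (batch, classes, H, W) segmentation output.
learn.loss_func = CrossEntropyFlat(axis=1, weight=weights.cuda())
```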

Seeking more accurate values, after experiments with different configurations (presented in Section 5) we arrived at a solution. First we train a model without weights; this model identifies the surface patterns well but has low accuracy for the smaller classes. We then use this trained model as the basis for a second model, trained with class weights (Figure 3). This configuration produced the most satisfactory and balanced accuracy results.

Figure 3: Best results approach
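For reference, a minimal sketch of this two-stage procedure, continuing the fastai v1 sketches above; the checkpoint name and epoch counts are illustrative:

```python
# Stage 1: train without class weights; the network learns the dominant
# surface classes (background, asphalt, paved, unpaved) well.
learn.loss_func = CrossEntropyFlat(axis=1)
learn.fit_one_cycle(25)
learn.save('stage1-no-weights')          # checkpoint name is arbitrary

# Stage 2: resume from the stage-1 model and train with class weights so
# the small classes (pothole, water-puddle, etc.) are no longer drowned out.
learn.load('stage1-no-weights')
learn.loss_func = CrossEntropyFlat(axis=1, weight=weights.cuda())
learn.fit_one_cycle(25)
```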

5 Results

As described in Section 4, we ran experiments with different configurations: single models, models trained on progressively larger image sizes, and models with and without class weights. To make the comparison fair, we trained every model in each configuration for 25 epochs. The configurations are listed as follows:

  • r34-S: ResNet34. A single model. Without weights;

  • r34-SW: ResNet34. A single model. With weights;

  • r34-I: ResNet34. Three models with progressively larger images: the first with the image size divided by 4, the second divided by 2, the third at original size. Without weights;

  • r34-IW: ResNet34. Three models with progressively larger images, as above. With weights;

  • r34-DW: ResNet34. Two models: the first without weights, the second with weights;

  • r50-S: ResNet50. A single model. Without weights;

  • r50-SW: ResNet50. A single model. With weights;

  • r50-I: ResNet50. Three models with progressively larger images, as above. Without weights;

  • r50-IW: ResNet50. Three models with progressively larger images, as above. With weights;

  • r50-DW: ResNet50. Two models: the first without weights, the second with weights.

The values obtained with each setting are shown in Table 1. Based on these results, we note that the networks with a single model and no class weights (r34-S and r50-S), despite good total accuracy, have the worst results for the smaller classes. The same holds for the networks with three models of increasing image size (r34-I and r50-I).

The same models (one model, and three models with increasing image size) with class weight adjustment (r34-SW, r34-IW, r50-SW and r50-IW), despite some loss in total accuracy, showed a great improvement in the accuracy of the smaller classes. However, they also showed a considerable loss in the larger road surface classes.

Both approaches that use an initial model without class weights followed by a final model with class weights (r34-DW and r50-DW) showed good results for the smaller classes without a large loss in the road surface classes, while also maintaining a good total accuracy.

| Setting | Background | Asphalt | Paved | Unpaved | Markings | Speed-Bump | Cats-Eye | Storm-Drain | Patches | Water-Puddle | Pothole | Cracks | Total |
|---------|------------|---------|-------|---------|----------|------------|----------|-------------|---------|--------------|---------|--------|-------|
| r34-S   | 98.00% | 93.00% | 88.00% | 74.00% | 43.00% | 0.00%  | 0.00%  | 0.00%  | 0.01%  | 0.00%  | 0.00%  | 0.07%  | 94.00% |
| r34-SW  | 88.00% | 71.00% | 76.00% | 61.00% | 71.00% | 11.00% | 94.00% | 98.00% | 69.00% | 71.00% | 66.00% | 45.00% | 89.00% |
| r34-I   | 98.00% | 89.00% | 86.00% | 74.00% | 50.00% | 0.00%  | 0.00%  | 0.00%  | 11.00% | 0.00%  | 0.00%  | 0.47%  | 95.00% |
| r34-IW  | 84.00% | 57.00% | 67.00% | 62.00% | 68.00% | 75.00% | 79.00% | 95.00% | 62.00% | 60.00% | 72.00% | 38.00% | 83.00% |
| r34-DW  | 92.00% | 85.00% | 87.00% | 79.00% | 73.00% | 58.00% | 86.00% | 86.00% | 78.00% | 89.00% | 66.00% | 51.00% | 92.00% |
| r50-S   | 97.00% | 88.00% | 73.00% | 72.00% | 19.00% | 0.00%  | 0.00%  | 0.00%  | 0.00%  | 0.00%  | 0.00%  | 0.00%  | 87.00% |
| r50-SW  | 91.00% | 78.00% | 83.00% | 70.00% | 71.00% | 58.00% | 85.00% | 95.00% | 74.00% | 95.00% | 59.00% | 45.00% | 90.00% |
| r50-I   | 98.00% | 94.00% | 89.00% | 76.00% | 67.00% | 14.00% | 18.00% | 20.00% | 38.00% | 5.03%  | 20.00% | 13.00% | 96.00% |
| r50-IW  | 87.00% | 73.00% | 72.00% | 67.00% | 79.00% | 85.00% | 82.00% | 88.00% | 70.00% | 73.00% | 69.00% | 49.00% | 87.00% |
| r50-DW  | 90.00% | 80.00% | 79.00% | 76.00% | 72.00% | 93.00% | 93.00% | 94.00% | 75.00% | 97.00% | 81.00% | 48.00% | 93.00% |

Table 1: Results from different settings

We then checked whether training for more epochs would yield better results. Between the two best-performing approaches we chose r34-DW, which showed great results in the experiments; moreover, being based on resnet34, it demands less of the GPU, allowing a larger batch size and faster training. We trained each model of this approach for 100 epochs, the first without weights and the second with weights. The results are presented in Table 2.

| Label | Accuracy |
|-------|----------|
| Background | 97% |
| Asphalt | 92% |
| Paved | 94% |
| Unpaved | 90% |
| Marking | 93% |
| Speed-Bump | 69% |
| Cats-Eye | 94% |
| Storm-Drain | 97% |
| Patches | 97% |
| Water-Puddle | 90% |
| Pothole | 97% |
| Cracks | 72% |
| Total | 97% |

Table 2: Final accuracy results for r34-DW trained for 100 epochs.

The confusion matrix presented in Figure 4 lets us analyze the results of Table 2 and find where the major pixel labeling errors occurred. The Cracks class, for example, which reached 72% accuracy, is most confused with the road surface classes; it is also mislabeled as Patches, Storm-Drain and Pothole. The Speed-Bump class, which ended up the least accurate, was mostly mislabeled as Asphalt, and slightly less as Cracks and Marking. The Water-Puddle class, with 90% accuracy, concentrated its errors in the Background class. The Marking class is most confused with Asphalt, and the Unpaved class is also mislabeled as Asphalt.

Figure 4: Confusion matrix for r34-DW
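A pixel-level confusion matrix such as Figure 4 can be computed from the validation predictions along the lines of the sketch below, continuing the earlier fastai v1 sketches; the aggregation scheme is our assumption, not the paper's stated procedure:

```python
import numpy as np
from fastai.vision import DatasetType

n_classes = len(codes)

# Class scores and ground-truth masks over the validation set (fastai v1).
preds, targs = learn.get_preds(ds_type=DatasetType.Valid)
pred_lbl = preds.argmax(dim=1).view(-1).numpy()   # predicted label per pixel
true_lbl = targs.view(-1).numpy()                 # GT label per pixel

# Joint histogram of (true, predicted) pixel labels.
cm = np.bincount(true_lbl * n_classes + pred_lbl,
                 minlength=n_classes ** 2).reshape(n_classes, n_classes)

# Row-normalize so each row shows where a class's pixels ended up.
cm_norm = cm / cm.sum(axis=1, keepdims=True)
```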

We also present some prediction results on the validation images in Figure 5, Figure 6 and Figure 7, comparing the original images (left column), the GT mask images (middle column) and the predictions (right column). In Figure 5, the first, second and fourth rows show an almost exact match with the GT. The third and fifth rows show visible discrepancies from the GT, but are still very close to correct.

In Figure 6, the first and last rows show very accurate results, while the other rows show slight variations from the GT images. Finally, in Figure 7, the first four rows show results very close to the GT, though with some variations. The last row, however, is considerably deformed; still, it indicates that a speed-bump lies just ahead, as well as variations in the surface.

Figure 5: Results on the validation dataset. Left: original. Middle: GT mask. Right: prediction
Figure 6: Results on the validation dataset. Left: original. Middle: GT mask. Right: prediction
Figure 7: Results on the validation dataset. Left: original. Middle: GT mask. Right: prediction

6 Discussion, Conclusions and Future Work

Despite the great advances in the state of the art of visual perception for navigation, especially in recent years with the evolution of CNN architectures, whose applications present increasingly accurate and reliable results, we believe many challenges remain, especially concerning road surface quality and variation.

Although a more prevalent issue in emerging countries, identifying surface conditions and features is important in any scenario, because it influences how the vehicle should behave or be driven, enabling safer decision making. Surface feature detection can be useful for autonomous vehicle navigation, for Advanced Driver-Assistance Systems (ADAS), and even for Road Maintenance Departments.

We present in this paper a new GT for road surface feature semantic segmentation using low-resolution images captured with low-cost cameras. We also present a deep learning application of this GT for surface feature semantic segmentation. We compare different settings and present the validation values for the chosen setting in the results section.

We obtained good results with the configuration that first trains a model without class weights and uses it as pre-training for a second model, in which we balance the dataset with class weights. This aims to maintain good accuracy for the smaller classes, especially pothole and water-puddle, without losing much accuracy in the larger surface classes (asphalt, paved and unpaved).

6.1 Future Work

While performing our experiments we raised some possibilities and questions. One question that may lead to further experimentation is whether to differentiate some label categories further, instead of always employing them in the same generalized way regardless of road surface. For example, we labeled all pavement damage as Cracks, regardless of whether it occurs on asphalt, paved or unpaved roads. The damage, however, may have different features on each surface type, and differentiating them may improve the accuracy of the Cracks class, which had the second lowest accuracy. To improve results, one could define the general category of crack-like damage more finely, as Asphalt Cracks, Paved Cracks or Unpaved Cracks, enabling better differentiation. The same idea applies to the other classes of our dataset.

Another option, also focused on the Cracks category, is not only to separate damage by surface type but also to create new, more specific classes, as we already did for the Pothole and Water-Puddle classes. In Figure 8 we show some situations where the Cracks class could be split into new, more specific classes.

Figure 8: Sample images from the RTK dataset. Original (left). GT (right). The Cracks class in situations that could become new, more specific classes.

In conclusion, based on the results obtained here, we determined that using only standard-resolution monocular video streams we were able to reliably extract useful information about the road surface status. This information could, e.g., be used by an intelligent system to determine threats such as the distance and position of potholes, water-puddles and other damages and obstacles. Finally, we note that other challenges remain, such as identifying surface types and surface variations on a rainy day, in a foggy environment, or even at night.

Conflicts of interest

The authors declare that there are no conflicts of interest.

References

  • I. Ardiyanto and T. B. Adji (2017) Deep residual coalesced convolutional network for efficient semantic road segmentation. IPSJ Transactions on Computer Vision and Applications 9 (1), pp. 6. External Links: ISSN 1882-6695, Document, Link Cited by: §2.
  • A. Banharnsakun (2017) Hybrid ABC-ANN for pavement surface distress detection and classification. International Journal of Machine Learning and Cybernetics 8 (2), pp. 699–710. Note: doi: https://doi.org/10.1007/s13042-015-0471-1 External Links: ISSN 1868-808X Cited by: §2.
  • G. J. Brostow, J. Fauqueur, and R. Cipolla (2009) Semantic object classes in video: a high-definition ground truth database. Pattern Recognition Letters 30 (2), pp. 88 – 97. Note: doi: https://doi.org/10.1016/j.patrec.2008.04.005 External Links: ISSN 0167-8655 Cited by: §1.
  • F. S. Cabral, M. Pinto, F. A. L. N. Mouzinho, H. Fukai, and S. Tamura (2018) An automatic survey system for paved and unpaved road classification and road anomaly detection using smartphone sensor. In 2018 IEEE International Conference on Service Operations and Logistics, and Informatics (SOLI), pp. 65–70. Note: doi: https://doi.org/10.1109/SOLI.2018.8476788 External Links: ISBN 978-1-5386-4522-2 Cited by: §1.
  • CNT (2018) Pesquisa cnt de rodovias 2016. relatório gerencial. Vol. 20, Confederação Nacional do Transporte (CNT). Serviço Social do Transporte (SEST). Serviço Nacional de Aprendizagem do Transporte (SENAT). Note: url: https://pesquisarodovias.cnt.org.br/Home Cited by: §1.
  • I. E. Commission (2001) Group Safety Publication. International Standard. Cited by: §1.
  • M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele (2016) The Cityscapes dataset for semantic urban scene understanding. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3213–3223. Note: doi: https://doi.org/10.1109/CVPR.2016.350 External Links: ISSN 1063-6919 Cited by: §1.
  • P. D. Cristóforis, M. A. Nitsche, T. Krajník, and M. Mejail (2016) Real-time monocular image-based path detection. Journal of Real-Time Image Processing 11 (2), pp. 335–348. External Links: ISSN 1861-8219, Document, Link Cited by: §2.
  • J. Eriksson, L. Girod, B. Hull, R. Newton, S. Madden, and H. Balakrishnan (2008) The pothole patrol: using a mobile sensor network for road surface monitoring. In Proceedings of the 6th International Conference on Mobile Systems, Applications, and Services, MobiSys ’08, New York, NY, USA, pp. 29–39. Note: doi: http://doi.acm.org/10.1145/1378600.1378605 External Links: ISBN 978-1-60558-139-2 Cited by: §2.
  • R. Frisoni, F. Dionori, L. Casullo, C. Vollath, L. Devenish, F. Spano, T. Sawicki, S. Carl, R. Lidia, J. Neri, R. Silaghi, A. Stanghellini, and S. D. Gleave (2014) EU road surfaces: economic and safety impact of the lack of regular road maintenance. European Parliament - Policy Department Structural and Cohesion Policies. External Links: Link Cited by: §1.
  • A. Geiger, P. Lenz, C. Stiller, and R. Urtasun (2013) Vision meets robotics: the kitti dataset. Int. J. Rob. Res. 32 (11), pp. 1231–1237. Note: doi: http://dx.doi.org/10.1177/0278364913491297 External Links: ISSN 0278-3649 Cited by: §1.
  • C. Guo, S. Mita, and D. McAllester (2011) Adaptive non-planar road detection and tracking in challenging environments using segmentation-based markov random field. In 2011 IEEE International Conference on Robotics and Automation, pp. 1172–1179. External Links: Document, ISSN 1050-4729 Cited by: §2.
  • C. Guo, S. Mita, and D. McAllester (2012) Robust road detection and tracking in challenging scenarios based on Markov random fields with unsupervised learning. Intelligent Transportation Systems, IEEE Transactions on 13 (3), pp. 1338–1354. External Links: Document, ISSN 1524-9050 Cited by: §2.
  • K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Vol. , pp. 770–778. External Links: Document, ISSN Cited by: §4.
  • P. Herghelegiu, A. Burlacu, and S. Caraiman (2017) Negative obstacle detection for wearable assistive devices for visually impaired. In 2017 21st International Conference on System Theory, Control and Computing (ICSTCC), Vol. , pp. 564–570. Note: doi: http://doi.acm.org/10.1109/ICSTCC.2017.8107095 External Links: ISSN Cited by: §2.
  • J. Howard et al. (2018) Fastai. GitHub. Note: https://github.com/fastai/fastai Cited by: §3.2.
  • L. Huidrom, L. K. Das, and S.K. Sud (2013) Method for automated assessment of potholes, cracks and patches from road surface video clips. Procedia - Social and Behavioral Sciences 104, pp. 312 – 321. Note: 2nd Conference of Transportation Research Group of India (2nd CTRG), doi: https://doi.org/10.1016/j.sbspro.2013.11.124 External Links: ISSN 1877-0428 Cited by: §2.
  • B. Jia, J. Chen, and K. Zhang (2017) Recursive drivable road detection with shadows based on two-camera systems. Machine Vision and Applications 28 (5), pp. 509–523. External Links: ISSN 1432-1769, Document, Link Cited by: §2.
  • B. Kang and S. Choi (2017) Pothole detection system using 2d lidar and camera. In 2017 Ninth International Conference on Ubiquitous and Future Networks (ICUFN), Vol. , pp. 744–746. Note: doi: https://doi.org/10.1109/ICUFN.2017.7993890 External Links: ISSN 2165-8536 Cited by: §1.
  • H. Karunasekera, H. Zhang, T. Xi, and H. Wang (2017) Stereo vision based negative obstacle detection. In 2017 13th IEEE International Conference on Control Automation (ICCA), Vol. , pp. 834–838. Note: doi: https://doi.org/10.1109/ICCA.2017.8003168 External Links: ISSN Cited by: §2.
  • C. Koch and I. Brilakis (2011) Pothole detection in asphalt pavement images. Advanced Engineering Informatics 25 (3), pp. 507 – 515. Note: Special Section: Engineering informatics in port operations and logistics, doi: https://doi.org/10.1016/j.aei.2011.01.002 External Links: ISSN 1474-0346 Cited by: §2.
  • Y. Li, W. Ding, X. Zhang, and Z. Ju (2016) Road detection algorithm for autonomous navigation systems based on dark channel prior and vanishing point in complex road scenes. Robotics and Autonomous Systems 85 (), pp. 1 – 11. Note: External Links: ISSN 0921-8890, Document, Link Cited by: §2.
  • H. Maeda, Y. Sekimoto, T. Seto, T. Kashiyama, and H. Omata (2018) Road damage detection and classification using deep neural networks with smartphone images. Computer-Aided Civil and Infrastructure Engineering 33 (12), pp. 1127–1141. Note: doi: https://doi.org/10.1111/mice.12387 External Links: https://onlinelibrary.wiley.com/doi/pdf/10.1111/mice.12387 Cited by: §1, §2.
  • L. Nguyen, S. L. Phung, and A. Bouzerdoum (2017) Enhanced pixel-wise voting for image vanishing point detection in road scenes. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Vol. , pp. 1852–1856. External Links: Document, ISSN Cited by: §2.
  • M. Ososinski and F. Labrosse (2012) Real-time autonomous colour-based following of ill-defined roads. In Advances in Autonomous Robotics: Joint Proceedings of the 13th Annual TAROS Conference and the 15th Annual FIRA RoboWorld Congress, Bristol, UK, August 20-23, 2012, G. Herrmann, M. Studley, M. Pearson, A. Conn, C. Melhuish, M. Witkowski, J. Kim, and P. Vadakkepat (Eds.), pp. 366–376. External Links: ISBN 978-3-642-32527-4, Document, Link Cited by: §2.
  • A. Rankin, T. Ivanov, and S. Brennan (2010) Evaluating the performance of unmanned ground vehicle water detection. In Proceedings of the 10th Performance Metrics for Intelligent Systems Workshop, PerMIS ’10, New York, NY, USA, pp. 305–311. External Links: ISBN 978-1-4503-0290-6, Link, Document Cited by: §2.
  • A. L. Rankin and L. H. Matthies (2008) Daytime mud detection for unmanned ground vehicle autonomous navigation. Cited by: §2.
  • T. Rateke, K. A. Justen, V. F. Chiarella, A. C. Sobieranski, E. Comunello, and A. V. Wangenheim (2019) Passive vision region-based road detection: a literature review. ACM Comput. Surv. 52 (2), pp. 31:1–31:34. Note: doi: http://doi.acm.org/10.1145/3311951 External Links: ISSN 0360-0300 Cited by: §1, §2.
  • T. Rateke, K. A. Justen, and A. von Wangenheim (2019) Road surface classification with images captured from low-cost cameras – road traversing knowledge (rtk) dataset. Revista de Informática Teórica e Aplicada (RITA). Cited by: §1, §3.1, §3.
  • T. Rateke and A. von Wangenheim (2018) Systematic literature review for passive vision road obstacle detection. Technical report Brazilian Institute for Digital Convergence - INCoD. External Links: Document Cited by: §1, §2.
  • O. Ronneberger, P. Fischer, and T. Brox (2015) U-net: convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, N. Navab, J. Hornegger, W. M. Wells, and A. F. Frangi (Eds.), Cham, pp. 234–241. External Links: ISBN 978-3-319-24574-4 Cited by: §4.
  • J. Shi, F. Fu, Y. Wang, and J. Wang (2016) A novel path segmentation method for autonomous road following. In 2016 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC), pp. 1–6. Note: doi: http://doi.acm.org/10.1109/ICSPCC.2016.7753701 Cited by: §2.
  • P. Y. Shinzato, T. C. dos Santos, L. A. Rosero, D. A. Ridel, C. M. Massera, F. Alencar, M. P. Batista, A. Y. Hata, F. S. Osório, and D. F. Wolf (2016) CaRINA dataset: an emerging-country urban scenario benchmark for road detection systems. In 2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC), Vol. , pp. 41–46. Note: doi: http://dx.doi.org/10.1109/ITSC.2016.7795529 External Links: ISSN Cited by: §1.
  • B. Sövény, G. Kovács, and Z. T. Kardkovács (2015) Blind guide. Journal on Multimodal User Interfaces 9 (4), pp. 287–297. External Links: ISSN 1783-8738, Document, Link Cited by: §2.
  • A. N. STANDARD (2005) American national standard for safe use of lasers outdoors. Orlando, FL. Cited by: §1.
  • A. Tedeschi and F. Benedetto (2017) A real-time automatic pavement crack and pothole recognition system for mobile android-based devices. Advanced Engineering Informatics 32, pp. 11 – 25. Note: doi: https://doi.org/10.1016/j.aei.2016.12.004 External Links: ISSN 1474-0346 Cited by: §2.
  • M. Valente and B. Stanciulescu (2017) Real-time method for general road segmentation. In 2017 IEEE Intelligent Vehicles Symposium (IV), Vol. , pp. 443–447. Note: doi: https://doi.org/10.1109/IVS.2017.7995758 External Links: ISSN Cited by: §2.
  • H. Wang, M. Ren, and J. Yang (2016) Capitalizing on the boundary ratio prior for road detection. Multimedia Tools and Applications 75 (19), pp. 11999–12019. Note: doi: https://doi.org/10.1007/s11042-016-3280-y External Links: ISSN 1573-7721 Cited by: §2.
  • J. Wang, S. Sun, and X. Zhao (2015) Unstructured road detection and path tracking for tracked mobile robot. In 2015 IEEE International Conference on Cyber Technology in Automation, Control, and Intelligent Systems (CYBER), pp. 535–539. External Links: Document Cited by: §2.
  • L. Xiao, B. Dai, T. Hu, and T. Wu (2015) Fast unstructured road detection and tracking from monocular video. In The 27th Chinese Control and Decision Conference (2015 CCDC), pp. 3974–3980. External Links: Document, ISSN 1948-9439 Cited by: §2.
  • X. Yu and E. Salari (2011) Pavement pothole detection and severity measurement using laser imaging. In 2011 IEEE INTERNATIONAL CONFERENCE ON ELECTRO/INFORMATION TECHNOLOGY, Vol. , pp. 1–5. Note: doi: http://doi.acm.org/10.1109/EIT.2011.5978573 External Links: ISSN 2154-0373 Cited by: §1.
  • Y. Yuan, Z. Jiang, and Q. Wang (2015) Video-based road detection via online structural learning. Neurocomputing 168 (), pp. 336 – 347. Note: doi: http://dx.doi.org/10.1016/j.neucom.2015.05.092 External Links: ISSN 0925-2312 Cited by: §2.
  • M. Zanin, S. Messelodi, C. M. Modena, and F. B. Kessler (2013) Diplodoc road stereo sequence. Note: url: https://tev.fbk.eu/databases/diplodoc-road-stereo-sequence Cited by: §1.
  • Z. Zu, Y. Hou, D. Cui, and J. Xue (2015) Real-time road detection with image texture analysis-based vanishing point estimation. In 2015 IEEE International Conference on Progress in Informatics and Computing (PIC), pp. 454–457. Note: doi: http://dx.doi.org/10.1109/PIC.2015.7489888 Cited by: §2.