Road segmentation aims to classify each pixel of an image as drivable or undrivable, and provides the drivable region to other modules in a self-driving system for safe navigation. Obviously, both high efficiency and reliability are necessary for road segmentation. In recent years, many algorithms have been proposed to segment the road based on vision sensors, e.g., 3D LiDAR, monocular cameras, and stereo cameras. Compared to the works using low-cost monocular cameras [25, 30, 28] and expensive LiDAR sensors [2, 14, 6], the road segmentation methods using affordable stereo cameras [10, 32, 31], namely RGB-D based methods, achieve a better trade-off between cost and reliability. Thus, they have recently drawn increasing attention from the community.
However, the growing research interest in RGB-D based methods contrasts with their rare deployment in practice. One reason is that, although a great breakthrough in performance has been made, previous RGB-D based road segmentation methods still cannot run in real time. Another reason is that previous methods are limited in the reliability of the predicted road, as they cannot fully utilize the particular characteristics of RGB and depth data. Theoretically, RGB and depth data are two basic visual modalities that express the 2D space and the 3D space, respectively. They hold different characteristics: RGB data is sensitive to light-dark contrast in the 2D spatial domain, while depth data is sensitive to distance change in the 3D spatial domain. How to perceive this characteristic difference is a key issue for RGB-D based road segmentation. Previous methods [10, 32, 31] commonly merge RGB and depth data by summing the features of the two modalities to form a unified representation of the road. However, RGB and depth features are indiscriminately fused in this design, introducing conflicts of feature representation in the sensitive regions of RGB and depth data, which leads to wrong segmentation inside these regions. To better explain this issue, we trained two networks that take the RGB and depth image as input, respectively. As shown in Fig. 1 (c) (d), the RGB-based and depth-based models show different false positives, which are caused by their different representations of the road. This conflict cannot be eliminated by adding the two modality features, as shown in Fig. 1 (e).
To address this issue, we abandon the mainstream idea of boosting the discriminative power of features, adopted by many works [13, 40, 39, 24, 38] on other vision tasks. In this paper, we instead introduce evidence theory to fuse RGB and depth data, and gain state-of-the-art performance with low computational cost. To this end, an uncertainty-aware symmetric network (USNet) is proposed, which perceives the uncertainty of each modality and incorporates these uncertainties into the fusion process. Firstly, the cross-modal feature fusion used by previous works [10, 32, 31] is abandoned; we instead design two light-weight subnetworks to learn road representations from the RGB and depth inputs separately, avoiding feature-level conflict. Secondly, for each modality, a multi-scale evidence collection (MEC) module is proposed to collect evidence of a pixel belonging to road or non-road at multiple scales. Thirdly, based on the evidence given by the two subnetworks, an uncertainty-aware fusion (UAF) module is designed to perceive the uncertainty of each modality and to determine the probability and uncertainty that a pixel belongs to road or non-road. In practical applications, the perceived uncertainty can be provided to the self-driving system for further obstacle discovery [35, 36]. As shown in Fig. 1 (f), by considering the uncertainty of each modality, our method eliminates the wrong segmentation completely. Our network achieves performance comparable to the state-of-the-art methods with an inference speed of 43 FPS, about 4× faster than the former state-of-the-art method. The contributions of our approach lie in:
- The conflict of RGB and depth data in feature space is revealed, which guides us to propose a light-weight symmetric paradigm for RGB-D based road segmentation that avoids the feature conflict between modalities.
- Based on evidence theory, a multi-scale evidence collection (MEC) module is proposed to obtain sufficient evidence of road and non-road, so that the well-designed uncertainty-aware fusion (UAF) module can make full use of the characteristics of RGB and depth data.
II Related Work
II-A Road Segmentation
Road segmentation has been studied for decades. It is vital in self-driving and mobile robot applications [3, 41]. After Long et al. proposed the pioneering Fully Convolutional Network (FCN) for semantic segmentation, many methods based on the FCN framework have emerged. Previous methods can be divided into three groups: RGB-based methods [37, 12], RGB-LiDAR based methods [2, 14], and RGB-D based methods [10, 31, 33].
The RGB-based methods [30, 28] improve the accuracy of road segmentation by introducing extra cues, such as boundaries [34, 24] and objects [26, 21]. However, they may fail under illumination variations in the RGB image. Considering this problem, many approaches use LiDAR or depth data as a supplement [2, 14, 33]. As for the RGB-D based methods, Wang et al. propose an effective and efficient data-fusion strategy named DFM to fuse RGB and depth features. Based on the hypothesis that the road area in one image is coplanar, [10, 32, 31] propose to estimate surface normals from the depth image and utilize the surface normals as the input of the network. However, the different characteristics of RGB and depth data have not been well exploited by the above-mentioned methods, thus leaving room for performance improvement.
II-B Evidence-based Learning
Vanilla neural networks make predictions through a deterministic pipeline. However, accurate predictions on benchmark datasets are not sufficient to cope with the real world; it is also important to quantify the reliability of each prediction. To this end, evidence-based methods model the uncertainty of a prediction [27, 15]. The idea of evidence-based learning comes from the Dempster–Shafer Theory of Evidence (DST). Subjective Logic (SL) defines a framework that formalizes DST's notion of belief assignments as a Dirichlet distribution. Based on SL, Sensoy et al. propose a framework to model the uncertainty of classification tasks. After that, Han et al. accomplish trusted classification by using Dempster's combination rule to fuse classifications of multiple views. Inspired by these works, we follow the evidence-based fusion strategy to realize uncertainty-aware RGB-D road segmentation.
III Subjective Logic
The theoretical basis of uncertainty modeling in our method is Subjective Logic (SL). In this section, we give a brief review of it.
For $K$-class classification, a front-end model is utilized to extract evidence, namely, information that supports a sample to be classified into a certain class. Based on the extracted evidence, SL assigns a probability (called belief mass) to each class, and generates an overall uncertainty (called uncertainty mass) of this assignment. The belief assignment is formalized as a Dirichlet distribution in SL, and the concentration parameters of the Dirichlet distribution are related to the evidence. To be specific, the evidence of each class is denoted as $e_k$, where $e_k \ge 0$, $k = 1, \dots, K$. And the concentration parameters of this Dirichlet distribution are pre-defined as $\alpha_k = e_k + 1$. Then the belief mass $b_k$ and the uncertainty $u$ can be computed by:

$b_k = \frac{e_k}{S}, \qquad u = \frac{K}{S}, \qquad (1)$

where $S = \sum_{k=1}^{K} \alpha_k = \sum_{k=1}^{K} (e_k + 1)$ is the Dirichlet strength. Intuitively, the uncertainty $u$ is inversely proportional to the total evidence $\sum_{k=1}^{K} e_k$, which means the more evidence collected, the lower the uncertainty. According to Eq. 1, the belief masses $b_k$ and the uncertainty $u$ are all non-negative and their sum is one: $u + \sum_{k=1}^{K} b_k = 1$. Given the belief assignment, the expected probability for the $k$-th class is the mean of the corresponding Dirichlet distribution:

$\hat{p}_k = \frac{\alpha_k}{S}. \qquad (2)$
In this paper, we use a CNN to collect evidence from input images, and calculate the uncertainty of each pixel to guide the fusion of predicted road from RGB and depth data.
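To make the mapping concrete, the evidence-to-belief computation reviewed above can be sketched in a few lines of pure Python (the function name here is illustrative, not from the authors' code):

```python
def subjective_logic(evidence):
    """Map per-class evidence e_k >= 0 to belief masses b_k,
    an overall uncertainty u, and expected class probabilities."""
    K = len(evidence)
    alpha = [e + 1.0 for e in evidence]     # concentration parameters: alpha_k = e_k + 1
    S = sum(alpha)                          # Dirichlet strength
    belief = [e / S for e in evidence]      # b_k = e_k / S
    uncertainty = K / S                     # u = K / S
    prob = [a / S for a in alpha]           # expected probability: mean of the Dirichlet
    return belief, uncertainty, prob
```

In the binary road/non-road case (K = 2), strong evidence for either class drives u toward 0, while zero total evidence yields the maximally uncertain u = 1.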
IV The Proposed Method
In this section, we propose an uncertainty-aware symmetric network (USNet), which is illustrated in Fig. 2. Note that the depth image is processed by the surface normal estimator (SNE) to generate a three-channel surface normal map that is taken as the input of the depth subnetwork. This transformation is important for our method, as it provides a more effective representation of the road. Different from the previous road segmentation methods [10, 32, 31] that conduct cross-modal feature fusion, USNet learns two feature representations of the road from the RGB and depth images by two independent subnetworks (see Sec. IV-A). Then, sufficient evidence is collected by the well-designed multi-scale evidence collection (MEC) module (see Sec. IV-B). Finally, an uncertainty-aware fusion (UAF) module is introduced to determine the pixel class based on the evidence of the two modalities (see Sec. IV-C).
IV-A Unimodal Subnetwork to Obtain Road Representation
In USNet, each unimodal subnetwork aims to extract a feature representation of the road from an RGB or depth image. The subnetwork is mainly composed of an encoder and a light-weight decoder. To be specific, ResNet-18 is exploited as the encoder for low latency. At the end of the encoder, an atrous spatial pyramid pooling (ASPP) module is employed to perceive multi-scale context for road segmentation. Subsequently, the side-outputs of the encoder are extended by several simple blocks, called feature compression and adaptation (FCA) blocks. In an FCA block, a convolution is applied to reduce the number of channels to 64, followed by a channel-wise attention extractor to enhance the discriminative channels of the road representation. The FCA block helps the subnetwork to restore scene details by incorporating the compressed feature into the decoder. The decoder part of the subnetwork follows a zero-parameter structure to reduce model complexity. It consists of three upsampling-summing stages that gradually upsample the feature to the size of the original image. Each stage straightforwardly sums the upsampled feature from the previous stage with the feature from the corresponding FCA block. Note that the first stage takes the ASPP output as its input, and the last stage of the decoder yields the finest 64-channel feature. In summary, the light-weight design ensures the high inference speed of USNet.
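As a rough illustration of why the decoder needs no parameters, it can be sketched in plain Python on 2D maps; nearest-neighbor upsampling stands in for the actual interpolation, the channel dimension is omitted, and all names are hypothetical:

```python
def upsample2x(feat):
    """Nearest-neighbor 2x upsampling of a 2D feature map."""
    out = []
    for row in feat:
        wide = [v for v in row for _ in range(2)]  # duplicate each column
        out.append(wide)
        out.append(list(wide))                     # duplicate each row
    return out

def decode(aspp_out, fca_feats):
    """Zero-parameter decoder sketch: each stage upsamples the running
    feature and sums it with the corresponding FCA skip feature."""
    x = aspp_out
    for skip in fca_feats:  # FCA features ordered coarse -> fine
        x = upsample2x(x)
        x = [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(x, skip)]
    return x
```

Because every stage is just interpolation and addition, the decoder adds no learnable weights and negligible latency.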
IV-B Multi-scale Evidence Collection (MEC) Module
To sufficiently gain evidence for judging each pixel's class, we append a multi-scale evidence collection (MEC) module after each subnetwork. The detailed structure of MEC is shown in Fig. 2. For the MEC of the RGB subnetwork, given the RGB feature output by the decoder, the MEC outputs two evidence maps $E^{r}_0$ and $E^{r}_1$, where $E^{r}_0$ indicates the pixel-wise evidence of non-road, and $E^{r}_1$ that of road.
Specifically, MEC consists of three parallel paths. In each path, a convolution is first employed to produce a two-channel feature map; the paths use convolutions with different kernel sizes, and two of them additionally use different dilation rates, so that evidence is gathered at multiple scales. Then, an upsampling layer rescales the two-channel feature map to the size of the original image. At the end of each path, a softplus activation layer enforces the non-negativity of all feature values, obtaining evidence maps $E^{r,s}_0$ and $E^{r,s}_1$, where $s$ is the index of the path. Then the final evidence of the RGB subnetwork is defined as the mean of these evidence maps over all paths:

$E^{r}_k = \frac{1}{3} \sum_{s=1}^{3} E^{r,s}_k, \quad k \in \{0, 1\}. \qquad (3)$
For the depth subnetwork, we use another MEC to capture evidence maps from the depth feature.
By using the convolutions with different receptive fields, MEC is able to extract multi-scale evidence, which makes our method more reliable in determining each pixel’s class.
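Per pixel, the evidence collection reduces to a softplus activation followed by averaging across the three paths; a minimal sketch (plain Python, illustrative names, convolutions and upsampling omitted):

```python
import math

def softplus(x):
    """Numerically stable softplus, guaranteeing non-negative evidence."""
    return max(x, 0.0) + math.log1p(math.exp(-abs(x)))

def collect_evidence(path_logits):
    """Average the softplus-activated outputs of the parallel MEC paths.
    `path_logits` holds one (non-road, road) logit pair per path."""
    paths = [[softplus(v) for v in logits] for logits in path_logits]
    n = len(paths)
    return [sum(p[k] for p in paths) / n for k in range(2)]
```

The softplus keeps the evidence in the non-negative range that the Dirichlet formulation of Sec. III requires, while the averaging lets agreement across receptive fields accumulate into stronger evidence.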
IV-C Uncertainty-aware Fusion (UAF) Module
To accurately segment the road while accounting for modal characteristics, an uncertainty-aware fusion (UAF) module is proposed, which combines the evidence of the RGB and depth modalities under the guidance of uncertainty. Specifically, given the output evidence of the MEC modules, i.e., $E^{r}_0, E^{r}_1$ for the RGB modality and $E^{d}_0, E^{d}_1$ for the depth modality, the probability of each pixel belonging to road is predicted in three steps.
The first step generates the belief masses and uncertainty of each modality. As illustrated in Sec. III, the belief assignment is formulated as a Dirichlet distribution. For the evidence maps of the RGB modality, a Dirichlet strength map is defined as $S^{r} = (E^{r}_0 + 1) + (E^{r}_1 + 1)$. Then the belief masses of non-road and road and the uncertainty map are obtained by using Eq. 1: $b^{r}_0 = E^{r}_0 / S^{r}$, $b^{r}_1 = E^{r}_1 / S^{r}$, and $u^{r} = 2 / S^{r}$. Note that '/' denotes pixel-wise division in this subsection. We use $\mathcal{M}^{r} = \{b^{r}_0, b^{r}_1, u^{r}\}$ to denote the belief assignment of the RGB subnetwork. For a pixel, if its evidence of non-road and road, i.e., $E^{r}_0$ and $E^{r}_1$, are both small, it means that we lack evidence from the RGB modality to determine the category of the pixel; in this case, the pixel's uncertainty is very large. In the same way, the belief assignment of the depth subnetwork can be obtained: $\mathcal{M}^{d} = \{b^{d}_0, b^{d}_1, u^{d}\}$.
Referring to Dempster's combination rule, the second step merges the belief masses of the two modalities into a fused belief assignment $\mathcal{M} = \{b_0, b_1, u\}$. In this step, the uncertainty of one modality is employed to re-weight the belief mass of the other modality in fusion:

$b_k = \frac{1}{1 - C}\left(b^{r}_k \odot b^{d}_k + b^{r}_k \odot u^{d} + b^{d}_k \odot u^{r}\right), \qquad u = \frac{1}{1 - C}\, u^{r} \odot u^{d}, \qquad (4)$

where $\odot$ is element-wise multiplication, $b_k$ is the fused belief mass, and $u$ is the fused uncertainty. $C = b^{r}_0 \odot b^{d}_1 + b^{r}_1 \odot b^{d}_0$ measures the amount of conflict between the two assignments, and $\frac{1}{1 - C}$ is the normalization term. Observably, the modality with a lower uncertainty has a greater impact on the fused belief mass $b_k$. In addition, the fused uncertainty $u$ becomes large only if the uncertainties of both modalities are large.
The third step aims to predict the road probability of each pixel based on the fused belief assignment. As noted in the first step, a belief assignment follows a Dirichlet distribution. Thus, we can calculate the concentration parameters of the Dirichlet distribution for the fused belief assignment:

$\hat{S} = \frac{K}{u}, \qquad \hat{\alpha}_k = b_k \odot \hat{S} + 1, \qquad (5)$

where $K = 2$ is the class number in road segmentation, and $\hat{S}$ is the Dirichlet strength map. The road probability map $P$ is formulated as the mean of this Dirichlet distribution for the road class: $P = \hat{\alpha}_1 / \hat{S}$. Finally, UAF outputs the road probability map $P$ and the uncertainty map $u$.
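The three UAF steps can be sketched per pixel as follows (binary road/non-road case; pure Python with illustrative names, operating on scalars rather than maps):

```python
def belief_assignment(evidence):
    """Step 1: evidence -> belief masses and uncertainty (K = 2)."""
    S = sum(e + 1.0 for e in evidence)        # Dirichlet strength
    return [e / S for e in evidence], 2.0 / S

def fuse(evid_rgb, evid_depth):
    """Steps 2-3: Dempster-style combination of the two modalities,
    then the road probability from the fused Dirichlet distribution."""
    b_r, u_r = belief_assignment(evid_rgb)
    b_d, u_d = belief_assignment(evid_depth)
    C = b_r[0] * b_d[1] + b_r[1] * b_d[0]     # conflict between assignments
    norm = 1.0 - C
    b = [(b_r[k] * b_d[k] + b_r[k] * u_d + b_d[k] * u_r) / norm
         for k in range(2)]                   # fused belief masses
    u = u_r * u_d / norm                      # fused uncertainty
    S = 2.0 / u                               # fused Dirichlet strength
    alpha = [b_k * S + 1.0 for b_k in b]      # fused concentration parameters
    p_road = alpha[1] / S                     # mean of the Dirichlet, road class
    return p_road, u
```

A confident modality dominates the result: pairing strong RGB road evidence with near-flat depth evidence yields a fused uncertainty below either input's, which is exactly the behavior visualized in Fig. 3.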
Comments: We visualize the belief masses of road and the uncertainties obtained by our model in Fig. 3. In the first example, UAF perceives the high uncertainty of the depth modality around the distant boundary (see Fig. 3 (c)), and uses the low-uncertainty RGB modality to eliminate the error (see Fig. 3 (d)). In the second example, UAF assigns a high uncertainty to the tree shade in the RGB modality (see Fig. 3 (f)), which likewise helps to eliminate the false prediction by using the depth modality (see Fig. 3 (h)).
TABLE I: Comparison on the KITTI road benchmark (BEV metrics).

| Method | Modality | MaxF (%) | PRE (%) | REC (%) | FPR (%) | FNR (%) | Runtime |
| s-FCN-loc | RGB | 93.26 | 94.16 | 92.39 | 3.16 | 7.61 | 0.40 s |
| MultiNet | RGB | 94.88 | 94.84 | 94.91 | 2.85 | 5.09 | 0.17 s |
| RBNet | RGB | 94.97 | 94.94 | 95.01 | 2.79 | 4.99 | 0.18 s |
| RBANet | RGB | 96.30 | 95.14 | 97.50 | 2.75 | 2.50 | 0.16 s |
| LidCamNet | RGB + LiDAR | 96.03 | 96.23 | 95.83 | 2.07 | 4.17 | 0.15 s |
| CLCFNet | RGB + LiDAR | 96.38 | 96.38 | 96.39 | 1.99 | 3.61 | 0.02 s |
| PLARD | RGB + LiDAR | 96.83 | 96.79 | 96.86 | 1.77 | 3.14 | 0.16 s |
| PLARD (MS) | RGB + LiDAR | 97.03 | 97.19 | 96.88 | 1.54 | 3.12 | 1.50 s |
| NIM-RTFNet | RGB + Depth | 96.02 | 96.43 | 95.62 | 1.95 | 4.38 | 0.05 s |
| SNE-RoadSeg | RGB + Depth | 96.75 | 96.90 | 96.61 | 1.70 | 3.39 | 0.10 s |
| DFM-RTFNet | RGB + Depth | 96.78 | 96.62 | 96.93 | 1.87 | 3.07 | 0.08 s |
| SNE-RoadSeg+ | RGB + Depth | 97.50 | 97.41 | 97.58 | 1.43 | 2.42 | 0.08 s |
| Ours | RGB + Depth | 96.89 | 96.51 | 97.27 | 1.94 | 2.73 | 0.02 s |
IV-D Loss Functions for Uncertainty Learning and Fusion
Given an image $X$, we assign a one-hot label $Y \in \{0, 1\}^{H \times W \times 2}$, where $H$ and $W$ denote height and width. Here, $y_i = (y_{i0}, y_{i1})$ is the label of pixel $i$; $y_{i1}$ is a binary value denoting the value of pixel $i$ in the road mask, and $y_{i0}$ that in the non-road mask, where $y_{i0} + y_{i1} = 1$. $\alpha_i = (\alpha_{i0}, \alpha_{i1})$ denotes the concentration parameters of non-road and road, $S_i = \alpha_{i0} + \alpha_{i1}$ denotes the total concentration, and $i$ denotes a pixel on $X$. The Dirichlet distribution can be formed as $D(p_i \mid \alpha_i)$, where $p_i = (p_{i0}, p_{i1})$ is the class assignment probability of pixel $i$ on a simplex, and $p_{ik}$ is the pixel's probability of class $k$. Referring to [27], the adjusted cross-entropy loss is used to guide USNet to generate more evidence for the correct prediction:

$\mathcal{L}_{ace}(\alpha_i) = \int \Big[ \sum_{k=0}^{1} -y_{ik} \log p_{ik} \Big] \frac{1}{B(\alpha_i)} \prod_{k=0}^{1} p_{ik}^{\alpha_{ik} - 1}\, \mathrm{d}p_i = \sum_{k=0}^{1} y_{ik} \big( \psi(S_i) - \psi(\alpha_{ik}) \big), \qquad (6)$

where $B(\cdot)$ is the beta function and $\psi(\cdot)$ is the digamma function. And the following Kullback-Leibler (KL) divergence is used to limit the evidence for the negative label to 0:

$\mathcal{L}_{kl}(\alpha_i) = KL\big[ D(p_i \mid \tilde{\alpha}_i)\, \|\, D(p_i \mid \langle 1, 1 \rangle) \big] = \log \frac{\Gamma(\tilde{\alpha}_{i0} + \tilde{\alpha}_{i1})}{\Gamma(2)\, \Gamma(\tilde{\alpha}_{i0})\, \Gamma(\tilde{\alpha}_{i1})} + \sum_{k=0}^{1} (\tilde{\alpha}_{ik} - 1) \big[ \psi(\tilde{\alpha}_{ik}) - \psi(\tilde{\alpha}_{i0} + \tilde{\alpha}_{i1}) \big], \qquad (7)$

where $\tilde{\alpha}_i = y_i + (1 - y_i) \odot \alpha_i$ is a filtered Dirichlet parameter, which is used to avoid punishing the positive label, and $\Gamma(\cdot)$ is the gamma function. Then, the adjusted cross-entropy loss and this KL term are unified as follows:

$\mathcal{L}_u(\alpha_i) = \mathcal{L}_{ace}(\alpha_i) + \lambda_t\, \mathcal{L}_{kl}(\alpha_i), \qquad (8)$

where $\lambda_t = \min(1, t/T)$ is the balance factor that gradually increases during training, $t$ is the index of the current training epoch, and $T$ is an annealing period. We employ the unified loss to optimize the two subnetworks and the final prediction:

$\mathcal{L} = \sum_i \Big( \mathcal{L}_u(\hat{\alpha}_i) + \beta \sum_{m \in \{r, d\}} \sum_{s=1}^{3} \mathcal{L}_u(\alpha^{m,s}_i) \Big), \qquad (9)$

where $\beta$ is an empirically set balance factor, $\hat{\alpha}$ denotes the fused Dirichlet parameters, $\alpha^{r,s}$ and $\alpha^{d,s}$ denote the parameters of the Dirichlet distributions of the RGB and depth subnetworks, respectively, and $s$ is the index of the path of the MEC module. Our network is trained end-to-end based on the unified loss.
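For a one-hot label, the adjusted cross-entropy term reduces to ψ(S) − ψ(α_correct). A minimal per-pixel sketch of both loss terms, following the standard evidential-learning formulation of the cited works (pure Python with a series-based digamma; illustrative names, not the authors' code):

```python
import math

def digamma(x):
    """Digamma via upward recurrence plus an asymptotic expansion (x > 0)."""
    r = 0.0
    while x < 6.0:                  # psi(x) = psi(x + 1) - 1/x
        r -= 1.0 / x
        x += 1.0
    inv2 = 1.0 / (x * x)
    return r + math.log(x) - 0.5 / x - inv2 * (1/12. - inv2 * (1/120. - inv2 / 252.))

def adjusted_ce(alpha, y):
    """Adjusted cross-entropy: sum_k y_k (psi(S) - psi(alpha_k))."""
    S = sum(alpha)
    return sum(yk * (digamma(S) - digamma(ak)) for yk, ak in zip(y, alpha))

def kl_to_uniform(alpha, y):
    """KL(D(p | alpha_tilde) || D(p | <1,1>)) with alpha_tilde = y + (1-y)*alpha,
    penalizing evidence assigned to the wrong class (K = 2)."""
    at = [yk + (1 - yk) * ak for yk, ak in zip(y, alpha)]
    S = sum(at)
    term = math.lgamma(S) - math.lgamma(2.0) - sum(math.lgamma(a) for a in at)
    return term + sum((a - 1.0) * (digamma(a) - digamma(S)) for a in at)
```

Note that a zero-evidence pixel (alpha = (1, 1)) incurs no KL penalty, while evidence on the wrong class is pushed back toward the uniform Dirichlet, exactly the "limit the evidence for the negative label" behavior described above.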
V Experiments
In this section, we conduct comprehensive experiments to validate the performance of the proposed network.
V-A Experiment Setup
Datasets: Our experiments are carried out on the KITTI road dataset and the Cityscapes dataset. The KITTI dataset is one of the most popular datasets for road segmentation. It contains 289 training images and 290 testing images. In the ablation study, we split the training set into two subsets: 253 samples for training and 36 samples for validation. The Cityscapes dataset is collected for urban scene semantic segmentation. It contains 2975 training images and 500 validation images annotated with 19 classes; in our work, we only reserve the road label and re-label all other classes as non-road.
Evaluation Metrics: For quantitative evaluation, we adopt the widely used pixel-wise metrics of road segmentation, including maximum F1-measure (MaxF), precision (PRE), recall (REC), false-positive rate (FPR), and false-negative rate (FNR). It is worth noting that, as a common practice, the metrics are computed in the Bird's Eye View (BEV) for the KITTI dataset. We also evaluate the parameters, FLOPs, and inference time of our network.
Implementation Details: Our network is implemented using PyTorch and trained on a single NVIDIA GTX 1080Ti GPU. A ResNet-18 model pre-trained on ImageNet is employed as the backbone of USNet. In our experiments, the input images are resized to fixed resolutions for the KITTI and Cityscapes datasets, respectively. The loss is optimized by the AdamW optimizer, with separate learning rates for the backbone parameters and the remaining parameters. For data augmentation, we use Gaussian blur, Gaussian noise, random horizontal flip, and random color jitter on the input images.
V-B Evaluation Results
In this subsection, the results on the KITTI dataset and the Cityscapes dataset are given.
KITTI Benchmark: We report the performance on the KITTI benchmark in Table I. Our method exhibits a MaxF of 96.89%, which outperforms all RGB-based methods and most RGB-LiDAR and RGB-D based methods. This improvement is mainly owed to the more efficient uncertainty-aware RGB-D fusion strategy. Note that PLARD is trained on multiple datasets, while our USNet is trained only on the KITTI dataset. Although the recently proposed SNE-RoadSeg+ achieves a higher MaxF, it is 4× slower than our method (80 ms vs. 20 ms). Thus, our method achieves a better trade-off between accuracy and efficiency. Moreover, the pixel-wise uncertainty given by USNet, which is not available in existing methods, is of great significance for guiding other self-driving modules, e.g., obstacle avoidance and path planning.
Qualitative results are shown in Fig. 4. In detail, the first column visualizes a street where the boundary between road and sidewalk is unclear; MultiNet and PLARD suffer from false positives, while USNet and SNE-RoadSeg segment the road accurately with the assistance of the depth image. In the second column, the other methods generate false segmentation in the left lane, but our method avoids this mis-classification. The third column shows a scene with over-exposure. Our method outperforms the other three methods in this situation, as the depth data effectively compensates for the weakness of the RGB image. The last scene has the same problem as the first column, and our method consistently generates precise segmentation. These results demonstrate the effectiveness and reliability of our method.
Cityscapes Dataset: We also conduct experiments on the Cityscapes dataset. In the training process, we only reserve the road label and treat all other classes as non-road. The samples containing no road pixels are excluded from our evaluation. As shown in Table II, our method exceeds all other methods, especially outperforming RBANet in MaxF, precision, and recall, which demonstrates the generalization ability of our method.
V-C Ablation Studies
This section presents extensive experiments to thoroughly analyze our network on the KITTI dataset.
Comparison with Feature Fusion Models: To verify the superiority of our fusion paradigm over previous works [10, 32, 33, 31], we test several feature fusion strategies with five variants. Each variant contains two encoders and one decoder, and the feature maps of corresponding layers in the two encoders are fused in different ways. As shown in Table III, 'Add' indicates directly summing the RGB and depth features, and 'Cat+Conv' indicates fusing by concatenation and convolution. 'RFNet', 'ACNet', and 'SA-Gate' indicate using the fusion strategies proposed in the corresponding semantic segmentation networks [29, 18, 5], respectively. Our USNet achieves a higher MaxF than all models using these fusion strategies. The comparison indicates that the different characteristics of RGB and depth are not well perceived by directly fusing the features of the two modalities.
Effectiveness of Proposed Modules: We verify the effectiveness of each component of the proposed network, including the two subnetworks, the multi-scale evidence collection (MEC) module, and the uncertainty-aware fusion (UAF) module. First, to evaluate the necessity of the two subnetworks, we conduct experiments based on only one modality. As shown in Table IV, the RGB-based model and the depth-based model both achieve a lower MaxF than the RGB-D based model. Note that in the RGB-D based model, we fuse the segmentation results of the two subnetworks by simple addition. Furthermore, when the MEC and UAF modules are utilized separately, the MaxF increases further in both cases, proving the effectiveness of MEC and UAF. Finally, when all these components are used together, we obtain the best MaxF. The reason is that MEC provides more evidence to UAF for a more sufficient determination.
V-D Efficiency Analysis
We analyze the computational efficiency of the proposed method in comparison with three open-source methods. All speeds are measured on the same computer equipped with an NVIDIA GTX 1080Ti GPU, and the input images are scaled to the same resolution for a fair comparison. As observed in Table V, the parameters and FLOPs of our network are much fewer than those of the other methods, owing to the simplified backbone and the zero-parameter decoder. Meanwhile, our network runs at 43.6 FPS, 4× faster than MultiNet and 10× faster than PLARD and SNE-RoadSeg. In particular, compared to SNE-RoadSeg, an RGB-D based method, USNet substantially reduces the parameters and FLOPs while still improving MaxF (see Table I). Thus, USNet is more suitable for real-time applications and has the potential to be further accelerated to meet the requirements of embedded platforms.
VI Conclusion
In this work, we propose a novel low-latency RGB-D road segmentation network named USNet, which adopts a light-weight symmetric architecture to separately perceive road representations from RGB and depth data. To collect more valuable evidence from each subnetwork, an MEC module is proposed. Besides, a UAF module is designed to obtain the uncertainty of the two modalities and generate the final segmentation. All these designs enable our model to perform well in terms of both accuracy and computational cost.
References
- (2017) SegNet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) 39 (12), pp. 2481–2495.
- LIDAR-camera fusion for road detection using fully convolutional neural networks. Robotics and Autonomous Systems (RAS) 111, pp. 125–131.
- (2020) RoadTrack: Realtime tracking of road agents in dense and heterogeneous environments. In IEEE International Conference on Robotics and Automation (ICRA).
- (2018) DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) 40 (4), pp. 834–848.
- Bi-directional cross-modality feature propagation with separation-and-aggregation gate for RGB-D semantic segmentation. In European Conference on Computer Vision (ECCV).
- (2019) Progressive LiDAR adaptation for road detection. IEEE/CAA Journal of Automatica Sinica (JAS) 6 (3), pp. 693–702.
- The Cityscapes dataset for semantic urban scene understanding. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
- A generalization of Bayesian inference. Journal of the Royal Statistical Society: Series B (Methodological) 30 (2), pp. 205–232.
- (2009) ImageNet: A large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
- (2020) SNE-RoadSeg: Incorporating surface normal information into semantic segmentation for accurate freespace detection. In European Conference on Computer Vision (ECCV).
- (2013) A new performance measure and evaluation benchmark for road detection algorithms. In International IEEE Conference on Intelligent Transportation Systems (ITSC).
- (2017) Embedding structured contour and location prior in siamesed fully convolutional networks for road detection. In IEEE International Conference on Robotics and Automation (ICRA).
- (2021) Video text tracking with a spatio-temporal complementary model. IEEE Transactions on Image Processing (TIP) 30, pp. 9321–9331.
- (2021) A cascaded LiDAR-camera fusion network for road detection. In IEEE International Conference on Robotics and Automation (ICRA).
- (2021) Trusted multi-view classification. In International Conference on Learning Representations (ICLR).
- (2016) Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
- (2020) Squeeze-and-excitation networks. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) 42 (8), pp. 2011–2023.
- (2019) ACNet: Attention based network to exploit complementary features for RGBD semantic segmentation. In IEEE International Conference on Image Processing (ICIP).
- (2018) Subjective logic: A formalism for reasoning under uncertainty. Springer.
- (2015) Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR).
- TANet: Robust 3D object detection from point clouds with triple attention. In AAAI Conference on Artificial Intelligence (AAAI).
- (2015) Fully convolutional networks for semantic segmentation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
- (2019) Occlusion-shared and feature-separated network for occlusion relationship reasoning. In IEEE/CVF International Conference on Computer Vision (ICCV).
- (2017) Object-level proposals. In IEEE International Conference on Computer Vision (ICCV).
- (2016) Efficient deep models for monocular road segmentation. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
- (2017) Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) 39 (6), pp. 1137–1149.
- Evidential deep learning to quantify classification uncertainty. In Neural Information Processing Systems (NeurIPS).
- (2019) Reverse and boundary attention network for road segmentation. In IEEE/CVF International Conference on Computer Vision Workshop (ICCVW).
- (2020) Real-time fusion network for RGB-D semantic segmentation incorporating unexpected obstacle detection for road-driving images. IEEE Robotics and Automation Letters (RA-L) 5 (4), pp. 5558–5565.
- (2018) MultiNet: Real-time joint semantic reasoning for autonomous driving. In IEEE Intelligent Vehicles Symposium (IV).
- (2021) SNE-RoadSeg+: Rethinking depth-normal translation and deep supervision for freespace detection. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
- Applying surface normal information in drivable area and road anomaly detection for ground mobile robots. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
- (2021) Dynamic fusion module evolves drivable area and road anomaly detection: A benchmark and algorithms. IEEE Transactions on Cybernetics (TCYB), pp. 1–11.
- (2015) Holistically-nested edge detection. In IEEE International Conference on Computer Vision (ICCV).
- (2019) A novel multi-layer framework for tiny obstacle discovery. In International Conference on Robotics and Automation (ICRA).
- (2020) Tiny obstacle discovery by occlusion-aware multilayer regression. IEEE Transactions on Image Processing (TIP) 29, pp. 9373–9386.
- (2017) RBNet: A deep neural network for unified road and road boundary detection. In International Conference on Neural Information Processing (ICONIP).
- (2018) Objectness-aware tracking via double-layer model. In IEEE International Conference on Image Processing (ICIP).
- (2012) Fusion with diffusion for robust visual tracking. In Advances in Neural Information Processing Systems (NIPS).
- (2016) Similarity fusion for visual tracking. International Journal of Computer Vision (IJCV) 118 (3), pp. 337–363.
- (2014) Online multiple person detection and tracking from mobile robot in cluttered indoor environments with depth camera. International Journal of Pattern Recognition and Artificial Intelligence (IJPRAI) 28 (1).
- (2018) Superpixel-based road segmentation for real-time systems using CNN. In International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP).