Learning Common Representation from RGB and Depth Images

12/17/2018 ∙ by Giorgio Giannone, et al. ∙ NAVER LABS Corp.

We propose a new deep learning architecture for the tasks of semantic segmentation and depth prediction from RGB-D images. We revise the state of the art based on RGB and depth feature fusion, where both modalities are assumed to be available at train and test time. We propose a new architecture where the feature fusion is replaced with a common deep representation. Combined with an encoder-decoder network, the architecture can jointly learn models for semantic segmentation and depth estimation based on their common representation. This representation, inspired by multi-view learning, offers several important advantages, such as using one modality available at test time to reconstruct the missing modality. In the RGB-D case, this enables cross-modality scenarios, such as using depth data for semantic segmentation and RGB images for depth estimation. We demonstrate the effectiveness of the proposed network on two publicly available RGB-D datasets. The experimental results show that the proposed method works well on both semantic segmentation and depth estimation tasks.




I Introduction

Visual scene understanding is a critical capability enabling robots to act in their working environment. Modern robots and autonomous vehicles are equipped with many, often complementary, sensing technologies. Multiple sensors aim to satisfy the need for the redundancy and robustness critical for achieving a human level of driving safety.

The most frequent case is RGB-D cameras collecting color and depth information for different computer vision tasks [5, 14, 28]. As the information collected by the depth camera is complementary to RGB images, the depth can potentially help decode structural information of the scene and improve the performance on tasks such as object detection and semantic segmentation [28].

The development of convolutional neural networks (CNNs) boosted the performance of image classification. Recently, their success has been replicated on object detection and semantic segmentation tasks. The key contribution of CNN models lies in their ability to model complex visual scenes. Current CNN-based approaches provide the state-of-the-art performance on semantic segmentation benchmarks [4, 9].

When RGB images are complemented with depth information, the straightforward idea is to incorporate the depth into a semantic segmentation framework. Different methods have been developed, including deep feature pooling, dense features, multi-scale fusion, etc. [7, 8, 11, 18, 24]. Most recent methods, like FuseNet [13, 15], use an encoder-decoder architecture, where the encoder part is composed of two branches that simultaneously extract features from RGB and depth images and fuse depth features into the RGB feature maps. Moreover, training individual RGB and depth views has been replaced with joint learning. It was shown that the semantic predictions of a jointly learned network can be fused more consistently than the predictions of a network trained on individual views [23].

Fig. 1: Scenarios for RGB-D data include semantic segmentation from RGB (1), depth (3) or both (1+3), depth prediction from RGB (2) and depth completion from depth (4).

In this paper, we propose a new deep learning architecture for the tasks of semantic segmentation and depth estimation from RGB-D images (see Figure 1 for different scenarios). Usually, these tasks are addressed separately, with a special design for semantic segmentation [13, 23] or depth prediction [7]. We develop a unifying framework capable of coping with either task.

We adopt the multi-view approach to RGB-D data, where RGB and depth are complementary sources of information about a visual scene. All existing methods, whether they train the view models independently or jointly, adopt a fusion-based representation. The feature fusion benefits from the view complementarity to reduce the uncertainty of segmentation and labelling. The fusion-based approaches, however, require both views to be available at test time.

We revise the fusion-based approach and replace it with the common representation [25]. Adopting the principle of the common representation brings a number of benefits well studied in multi-view learning [2]. First, it makes it possible to obtain the common representation from one view and then reconstruct all other views. It can accomplish the task when one view is unavailable for technical or other reasons, thus increasing the robustness and fault-tolerance of the system. Working with one-view data at test time enables cross-view scenarios rarely addressed in the state of the art. In semantic segmentation, when the RGB view is unavailable, the depth view can be used to obtain the common representation and accomplish the semantic segmentation task. Vice versa, in the case of depth estimation, the common representation makes it possible to use the RGB view to reconstruct the depth of a scene or an object.

Second, the common representation is the central component that allows deploying the same architecture for both RGB-D tasks. A representation common to RGB and depth makes it possible to enforce consistency between the views and improve the segmentation quality and depth estimation accuracy.

Third, the proposed architecture encourages a higher modularity of the deep network. Our proposal combines state-of-the-art components: the encoder-decoder networks for semantic segmentation and a multi-view autoencoder for the common representation. The system can then benefit from any progress in the individual components. The modularity allows upgrading a component without changing the entire system or its training and optimization routines.

The remainder of the paper is organized as follows. In Section II, we review the state of the art in semantic segmentation and depth estimation for RGB-D images. In Section III, we introduce the multi-view deep architecture and describe in detail each component, the two-stage training, and the optimization. Section IV reports the results of evaluating the network on two public RGB-D datasets; it also discusses some open questions. Section V concludes the paper.

Fig. 2: The architecture is composed of two encoder-decoder networks for RGB and depth images and the common representation network. Depending on the setting, the depth network is trained with segmentation labels or ground-truth depth values.

II State of the Art

Depth representation. Depth information is rarely used in a segmentation network as raw data; most methods use the so-called HHA representation of the depth [11]. This representation consists of three channels: disparity, height of the pixels, and the angle between the normals and the gravity vector based on the estimated ground floor. The color code provided by HHA helps visualize depth information; it can reveal patterns that resemble RGB patterns.
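To make the three channels concrete, the sketch below computes a toy HHA-style encoding. It is NOT the full method of [11], which estimates the ground plane and gravity direction from the data; instead it assumes a level camera at a made-up height (`camera_height` and the row-based height proxy are illustrative assumptions only):

```python
import numpy as np

def hha_encode(depth, camera_height=1.5, eps=1e-6):
    """Toy HHA-style encoding of an HxW metric depth map (no missing values)."""
    h, w = depth.shape

    # Channel 1: horizontal disparity (inverse depth).
    disparity = 1.0 / (depth + eps)

    # Channel 2: height above an assumed ground plane, crudely
    # approximated from the normalized image row and the depth.
    rows = np.linspace(1.0, -1.0, h)[:, None]
    height = camera_height + rows * depth

    # Channel 3: angle between the surface normal and gravity, taken
    # from depth gradients; flat horizontal surfaces give small angles.
    dzdy, dzdx = np.gradient(depth)
    angle = np.arctan(np.hypot(dzdx, dzdy))

    def to_byte(c):
        # Rescale each channel to [0, 255], matching the RGB value range.
        lo, hi = c.min(), c.max()
        return (255.0 * (c - lo) / (hi - lo + eps)).astype(np.uint8)

    return np.stack([to_byte(disparity), to_byte(height), to_byte(angle)], axis=-1)
```

The result has the same shape and value range as an RGB image, which is what lets the depth branch reuse an RGB-pretrained encoder.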

Semantic segmentation and depth estimation. These two fundamental tasks for RGB-D images are strongly correlated and mutually beneficial, and most efforts have gone into putting both views in one architecture. In particular, with the success of CNN architectures, many methods have aimed to inject the depth information into the semantic segmentation network [8, 13, 18, 15, 23, 29].

Ladicky et al. [18] were the first to replace single-view depth estimation and semantic segmentation by a joint training model. They considered both the semantic label loss and the depth label loss when learning a classifier. Using properties of perspective geometry, they reduced the learning of a pixel-wise depth classifier to a simpler classifier predicting one of fixed canonical depth values.


Two separate CNN processing streams, one for each modality, were proposed by Eitel et al. [8]; they are consecutively combined in a late fusion network. The method also introduced a multi-stage training methodology for handling depth data with CNNs. It used the HHA representation of depth and a data augmentation scheme for robust learning with depth images.

A unified framework for joint depth and semantic prediction was proposed by Wang et al. [30]. Given an image, they first use a trained CNN to jointly predict a global layout composed of pixel-wise depth values and semantic labels. The joint network was shown to provide more accurate depth prediction than a state-of-the-art CNN trained solely for depth prediction. To further obtain fine-level details, the image is decomposed into local segments for region-level depth and semantic prediction.

By considering RGB and depth channels as multi-view data, [23] enforced the multi-view consistency during training and testing. At test time, the semantic predictions of the network are fused more consistently than predictions of a network trained on individual views. The network architecture uses a recent single-view deep learning approach to RGB and depth fusion and enhances it with multi-scale loss minimization.

FuseNet [13] developed an encoder-decoder type network, where the encoder part is composed of two branches of networks that simultaneously extract features from RGB and depth images and fuse depth features into the RGB feature maps as the network goes deeper.

Although most of the above methods apply late fusion, it is also possible to fuse depth information into the early layers of a fully convolutional network [15]. Coupled with dilated convolution for later contextual reasoning, this approach combines a depth-sensitive fully-connected CRF with the preceding convolution layers to refine the preliminary result.

Depth Completion. The problem of completing the depth channel of an RGB-D image has been addressed in [31]. Indeed, it is often the case that commodity-grade depth cameras fail to sense depth for bright, transparent, and distant surfaces, leaving entire holes in the depth images. The authors train a deep network that takes an RGB image as input and predicts dense surface normals and occlusion boundaries. Those predictions are then combined with the raw depth observations provided by the RGB-D camera to solve for the depths of all pixels, including those missing in the original observation.

II-A Multi-view learning

In the previous section we reviewed different ways to fuse RGB and depth features. Meanwhile, there exist alternative representations for multi-view data [2]. One such alternative, common representation learning (CRL), tries to embed different views of the data in a common subspace [2]. It makes it possible to obtain a common representation from one view and use it to reconstruct the other views.

Two complementary approaches to CRL are based on canonical correlation analysis (CCA) and multi-modal autoencoders (MAEs). CCA-based approaches learn a joint representation by maximizing the correlation of the views when projected into the common subspace. The second approach to embedding two views is based on multi-modal autoencoders [25]. The idea is to train an autoencoder able to perform two kinds of reconstruction: given one view, the model learns both self-reconstruction and cross-reconstruction (reconstruction of the other view).

As CCA and MAE based approaches appear to be complementary, several methods tried to combine them in one framework [30]. For example, a MAE based approach called Correlational Neural Network (CorrNet) [3] tried to explicitly maximize the correlation between the views when projecting them into the common subspace.

III Deep Architecture

We aim to solve two fundamental tasks for RGB-D images: semantic segmentation and depth prediction. We assume that we are given a training set of N RGB-D images (x_i, d_i), i = 1, …, N. All images are assumed to be resized to width W and height H. Depth images are in the HHA representation and have the same value range as RGB images, [0, 255]. RGB images are annotated with pixel-wise labels y_i, where each label belongs to the label set C of K classes. In the case of depth estimation, we additionally assume the ground-truth depth values d*_i.

We propose an architecture composed of two separate branches, one for each modality, which are consecutively fed into a common representation network. The two individual modality networks are of the encoder-decoder type, where the encoder applies dilated convolution to extract an informative feature map, while the decoder applies "atrous" convolution at multiple scales to encode contextual information and refine the segmentation boundaries. This choice is motivated by the recent success of the encoder-decoder architecture of the DeepLab network [4]. It has also been used in FuseNet [13] and SegNet [1] and has shown good segmentation performance.

Both RGB and depth encoders are initialized with the ResNet101 model trained on the COCO dataset. The encoders generate the feature maps, which the decoders use in "atrous" spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales.

Feature maps generated by the two modality branches are fed to the common representation network, implemented in the form of a multi-view autoencoder [30]. Unlike the conventional fusion of RGB and depth feature maps, the multi-view autoencoder makes it possible to extract the shared representation from either one or two views.

III-A Training RGB network

Our architecture enables two different settings. In the semantic segmentation (SS) setting, both RGB and depth views are jointly used to segment an image. In the case of segmentation and depth estimation (SS-D), we face the multi-task setting and expect the two views to achieve good results in both tasks.

We first proceed by training the two individual modality branches (see the purple and blue streams in Figure 2). Let θ_rgb and θ_d be the parameters of the RGB and depth networks, respectively. Let f_i be the feature map extracted from the last layer of the RGB decoder when applied to image x_i, i = 1, …, N. Analogously, let g_i be the feature map extracted from the depth decoder when applied to the depth image d_i.

The network is trained in two stages. We first train the modality branches, then we train the entire network. In the first stage, we place a randomly initialized softmax classification layer on top of f_i and train the RGB network to minimize the semantic loss on the training data. The semantic loss is defined as the cross-entropy loss

L_sem = − (1/N) Σ_i Σ_p Σ_{c ∈ C} y_{ipc} log ŷ_{ipc},

where ŷ_i is a pixel-wise prediction for image x_i, y_i is the ground-truth label map, and ŷ_{ipc} is the probability of semantic label c at pixel p.
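As a concrete illustration, the pixel-wise cross-entropy can be sketched in NumPy; the `(N, K, H, W)` array layout is an assumption for this sketch, not the paper's implementation:

```python
import numpy as np

def semantic_loss(logits, labels, eps=1e-12):
    """Pixel-wise cross-entropy between softmax predictions and labels.

    logits: (N, K, H, W) raw class scores; labels: (N, H, W) integer
    ground-truth classes in [0, K).
    """
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)

    n, k, h, w = logits.shape
    # Pick the probability assigned to the true class at every pixel.
    idx_n, idx_h, idx_w = np.ogrid[:n, :h, :w]
    p_true = p[idx_n, labels, idx_h, idx_w]
    return float(-np.log(p_true + eps).mean())
```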

The network is trained using stochastic gradient descent on mini-batches of RGB images. After convergence, all parameters θ_rgb of the RGB network are kept for the second stage, except the last layer, which will be replaced by the common representation and reconstruction layer.

III-B Training depth network

Training the depth branch depends on the task. In the SS setting, the depth network is trained, similarly to the RGB network, to minimize the semantic loss L_sem(ŷ_i^d, y_i), where ŷ_i^d is a prediction for depth image d_i.

In the SS-D setting, we train the depth branch to minimize a regression loss L_depth on the training depth data. We tested several state-of-the-art proposals for the loss function. One is the scale-invariant loss [7]; it measures the relationships between points in the image irrespective of their absolute values. We also considered the standard L2 loss and the smoothed L1 loss [10]. Less sensitive to outliers than the L2 loss, the smoothed L1 loss is defined as smooth_L1(d̂, d*) = Σ_p ρ(d̂_p − d*_p), where

ρ(t) = 0.5 t²,     if |t| < 1,
ρ(t) = |t| − 0.5,  otherwise.


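A minimal, framework-agnostic sketch of the smoothed L1 loss of [10] (mean-reduced here; the reduction choice is an assumption):

```python
import numpy as np

def smooth_l1(pred, target):
    """Smoothed L1 loss: quadratic for |x| < 1, linear beyond,
    hence less sensitive to outliers than the L2 loss."""
    x = np.abs(pred - target)
    per_element = np.where(x < 1.0, 0.5 * x ** 2, x - 0.5)
    return float(per_element.mean())
```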
III-C Common representation

The common representation network is implemented as a multi-view autoencoder [3, 25]. It includes a hidden layer and an output layer. The input to the hidden layer is the two feature maps from the two modality branches. Similar to conventional autoencoders, the output layer has the same shape as the input, with c channels, whereas the hidden layer has k channels, with k often smaller than c (in Figure 2, c = 256 and k = 128).

Given a two-view input (f, g), the hidden layer computes an encoded representation as the convolution

h(f, g) = s(W_f * f + W_g * g + b),

where W_f, W_g are projection weights, b is a bias vector, and s is an activation function, such as sigmoid or tanh.

The output layer tries to reconstruct (f, g) from this hidden representation h by computing

(f̂, ĝ) = (s′(V_f * h + b_f), s′(V_g * h + b_g)),

where V_f, V_g are reconstruction weights, b_f, b_g are output bias vectors, and s′ is an activation function.

Given data from the RGB and depth branches, the common representation is designed to minimize the self- and cross-reconstruction errors. The first is the error of reconstructing f from f and g from g. The second is the error of reconstructing f from g and g from f.

To achieve this goal, we try to find the parameter values minimizing the reconstruction loss function L_rec, defined as follows:

L_rec = Σ_i [ L(f_i, f̂(h(f_i, g_i))) + L(g_i, ĝ(h(f_i, g_i))) + L(f_i, f̂(h(f_i, 0))) + L(g_i, ĝ(h(f_i, 0))) + L(f_i, f̂(h(0, g_i))) + L(g_i, ĝ(h(0, g_i))) ],

where L(a, â) = ‖a − â‖² is the reconstruction error, and a missing view is replaced with zeros.
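Since a 1x1 convolution acts independently at each spatial position, the encoder, decoder, and self-/cross-reconstruction loss can be sketched per pixel with plain matrix products. The weights below are random placeholders for illustration, not trained values:

```python
import numpy as np

rng = np.random.default_rng(0)
c, k = 256, 128            # feature channels and hidden size (as in Fig. 2)

# Projection and reconstruction weights (random placeholders).
Wf = rng.normal(0.0, 0.05, (k, c)); Wg = rng.normal(0.0, 0.05, (k, c))
b = np.zeros(k)
Vf = rng.normal(0.0, 0.05, (c, k)); Vg = rng.normal(0.0, 0.05, (c, k))
bf = np.zeros(c); bg = np.zeros(c)

def encode(f=None, g=None):
    """Common representation from one or both views; a missing view
    contributes nothing, which is equivalent to feeding zeros."""
    z = b.copy()
    if f is not None:
        z = z + Wf @ f
    if g is not None:
        z = z + Wg @ g
    return np.tanh(z)

def decode(h):
    """Reconstruct both views (f_hat, g_hat) from the hidden code."""
    return Vf @ h + bf, Vg @ h + bg

def reconstruction_loss(f, g):
    """Sum of self- and cross-reconstruction errors: both views are
    reconstructed from (f, g), from f alone, and from g alone."""
    total = 0.0
    for h in (encode(f, g), encode(f), encode(g=g)):
        f_hat, g_hat = decode(h)
        total += np.sum((f - f_hat) ** 2) + np.sum((g - g_hat) ** 2)
    return total
```

At test time, a missing depth view can be reconstructed from RGB alone as `decode(encode(f))[1]`, which is exactly the cross-modality scenario described above.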

III-D End-to-end learning

The common representation allows obtaining the reconstructed feature maps for both RGB and depth images as f̂ and ĝ. The entire set of network parameters is θ = (θ_rgb, θ_d, θ_com). The objective function to minimize is then defined as

L = L_sem + L_branch + λ L_rec,

where the depth branch loss L_branch is the semantic loss in the SS setting and the regression loss L_depth in the SS-D setting; λ is a scaling parameter for the reconstruction loss. In the above formulation, the semantic, depth and reconstruction losses are optimized jointly.

In addition to the common representation and view reconstruction, we also considered the possibility of maximizing the view correlation, as suggested in CorrNet [3]. In such a case, we try to maximize the correlation between the hidden representations of the two views. The correlation term can be included in the objective function L; it ensures that the hidden representations of the two views are highly correlated.
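One way to sketch such a correlation term over a mini-batch of hidden vectors (an illustration of the CorrNet idea, not the authors' exact formulation):

```python
import numpy as np

def correlation(hf, hg, eps=1e-8):
    """Average per-unit sample correlation between the hidden codes of
    the two views over a mini-batch (rows = samples, columns = hidden
    units). Adding the negative of this term to the objective makes
    minimizing the objective maximize the correlation."""
    hf = hf - hf.mean(axis=0)
    hg = hg - hg.mean(axis=0)
    num = (hf * hg).sum(axis=0)
    den = np.sqrt((hf ** 2).sum(axis=0) * (hg ** 2).sum(axis=0)) + eps
    return float((num / den).mean())
```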

III-E Training and optimization

The architecture is implemented in the PyTorch framework. At the first stage, we train the individual branches independently. In the SS setting, we train the RGB and depth branches with segmentation labels; they are denoted RGB-SS and D-SS. Each branch is trained for 20,000 iterations using SGD with momentum 0.9 and batch size 24, minimizing the corresponding semantic losses. We retain the model parameters θ_rgb and θ_d for the second stage.

In the SS-D setting, we train the RGB branch with segmentation labels (RGB-SS) and the depth branch with ground-truth depth (D-D). The D-D branch is trained using the scale-invariant loss or the smoothed L1 loss. We apply a weight decay of 0.0005 and polynomial decay for the learning rate, with base LR = 0.0001 and power = 0.9.
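The polynomial decay schedule amounts to a one-liner; the function below just restates the hyper-parameters given above (`max_iter` set to the 20,000 branch-training iterations):

```python
def poly_lr(iteration, base_lr=0.0001, max_iter=20000, power=0.9):
    """Polynomial learning-rate decay:
    lr = base_lr * (1 - iteration / max_iter) ** power."""
    return base_lr * (1.0 - iteration / max_iter) ** power
```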

In the second stage, we train the entire network. We start with the two branch parameter sets θ_rgb and θ_d trained in the first stage, and refine them together with the common representation network by minimizing the objective function L, which combines the semantic, depth and reconstruction losses. We fine-tune the network end-to-end with the Adam optimizer, but freeze the parameters of the two modality encoders; this speeds up training without performance loss.

For the segmentation task, we add data augmentation by flipping and randomly rotating input images by an angle in [-10, 10] degrees. The RGB-D images to be augmented are selected randomly, but the augmentation is identical for both views.
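A sketch of such view-consistent augmentation: sample the random parameters once and apply them to both views. Rotation is only drawn here; an actual implementation would apply it with, e.g., `scipy.ndimage.rotate` (an assumed library choice):

```python
import numpy as np

def augment_pair(rgb, depth, rng):
    """Draw flip/rotation parameters once and apply them to both views,
    so the augmentation stays identical across modalities."""
    params = {
        "flip": rng.random() < 0.5,
        "angle": rng.uniform(-10.0, 10.0),   # rotation angle in degrees
    }
    if params["flip"]:
        rgb, depth = rgb[:, ::-1], depth[:, ::-1]
    # Rotation is only sampled above; applying it would use, e.g.,
    # scipy.ndimage.rotate(view, params["angle"], reshape=False).
    return rgb, depth, params
```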

IV Evaluation

We evaluated the proposed network on two publicly available RGB-D datasets: the NYU depth dataset, 2nd version [24], and SUN [28]. NYU2 is a popular dataset with 27 indoor categories, not all of which are well represented. Following the publicly available split [24], the 27 categories are reorganized into the 13 most common categories and an 'other' category for all remaining ones. The training/test split is 795/654 images. Images are resized at training time; full-size images are used at test time. The SUN dataset contains 10,335 RGB-D images with 40 categories [28]. Following the publicly available split with the 37 most common categories and an 'other' category [12], it consists of 5,285 images for training and 5,050 images for testing. Images are again resized at training time and used at full size at test time. All depth images are encoded using the HHA representation.

Fig. 3: left) Processing RGB-D images at different layers of the architecture. right) Reconstruction from one view.

IV-A Qualitative analysis

We start with the qualitative analysis and test the proposed architecture on exemplar RGB-D images. Figure 3.a shows how a NYU2 example is processed by the network. In addition to the input images and the ground-truth segmentation, it shows feature maps extracted at different layers of the network. The upper row refers to the RGB branch, the lower row to the depth branch. Column 2 visualizes the feature maps generated by the two modality decoders. Column 3 shows the common representations obtained from each modality map. The close resemblance of the two maps supports the concept of a common representation that can be obtained from either view. The reconstructed feature maps for both views are then shown in column 4, and the final predictions in column 5.

Figure 3.b shows the cross-view reconstruction, where only the RGB image is available at test time. It starts with the feature maps extracted from the RGB network and the common representation. It then shows how the common representation is used to produce the two reconstruction and prediction maps.

IV-B Quantitative results

Modularity. The proposed architecture is designed in a modular way. It does not use any particular techniques invented to improve semantic segmentation in special cases and image regions, such as multi-scale input, CRFs, overlapping windows and cross-view ambiguities [22, 15]. This choice is motivated by two main reasons. First, we wanted to test the effectiveness of the CRL in isolation, by excluding any impact of the additional improvements, and to compare to the state-of-the-art baselines. We paid most attention to the multi-view autoencoder and its capacity to generate the common representation and reconstruct the RGB and depth views. Second, the architecture design is general enough to cope with two different settings (SS and SS-D) and different modalities. We preferred the ability to work in multi-modal and multi-task regimes to an architecture narrowed to one particular task. Moreover, since the common representation is complementary to many of the state-of-the-art improvements, the proposed architecture can integrate most of them to boost the performance.

Evaluation metrics. To evaluate our network on the segmentation task, we prefer the intersection-over-union (IoU) score to the pixel accuracy. The pixel accuracy is known to be sensitive to class imbalance, as many images include large objects such as beds, walls, floors, etc. The accuracy value may be misleading when the network performs better on large objects and worse on small ones. Instead, the IoU score remains informative on both balanced and unbalanced datasets.

Let n_ij denote the number of pixels that are predicted as class j but actually belong to class i, where i, j ∈ {1, …, K}. Then n_ii denotes the number of pixels with a correct prediction of class i. Let t_i = Σ_j n_ij denote the total number of pixels that belong to class i in the ground truth, and let K be the total number of classes in the dataset. Then the IoU is the average value of the intersection over union between the ground truth and the predictions:

IoU = (1/K) Σ_i n_ii / (t_i + Σ_j n_ji − n_ii).
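The mean IoU computation can be sketched from a confusion matrix as follows (averaging only over classes present in either map is a common convention assumed here):

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean IoU from pixel-wise predictions; n[i, j] counts pixels of
    true class i predicted as class j, so n[i, i] are the correct ones."""
    n = np.zeros((num_classes, num_classes), dtype=np.int64)
    for i, j in zip(gt.ravel(), pred.ravel()):
        n[i, j] += 1
    ious = []
    for c in range(num_classes):
        inter = n[c, c]
        union = n[c, :].sum() + n[:, c].sum() - inter
        if union > 0:               # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))
```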

For depth estimation, we use the root mean square error (RMSE) that measures the error between the estimated depth and ground truth.

Hyper-parameters. We set K = 14 classes for the NYU2 set and K = 38 classes for the SUN set. Feature maps generated by the modality branches have c = 256 channels. The number of hidden variables in the common representation is fixed, k = 128. During training with the objective function L, the weight λ of the reconstruction loss is 1.

IV-C Semantic segmentation and depth estimation

We consider three different ways to use the architecture presented in Section III to process the RGB-D data.

  • Independent learning: In this case, each modality branch is trained and tested independently. In the SS setting, we have RGB-SS and D-SS branches; in the SS-D setting, we train and test RGB-SS and D-D branches to provide the baseline performances.

  • Joint learning: The network is trained in two stages as described in Section III. In the SS and SS-D settings, the network is trained to minimize the objective function L with the appropriate depth branch loss (see Section III-D). In either case, we compare to the baselines obtained with independent training. In the SS setting, we test the common representation with one or two modalities available at test time, where the semantic segmentation is evaluated using the RGB image, the depth image, or both. In the SS-D setting, we test semantic segmentation and depth estimation with one or two modalities available at test time.

Table I reports IoU values for the SS and SS-D settings on the NYU2 dataset. In the SS setting, training the two modality branches independently yields IoU values of 53.1 (RGB) and 37.1 (depth); this reflects the RGB view being more informative than depth. Learning a joint model and using the common representation at test time improves the performance when depth or both views are available. As both views address the segmentation task, the performance of the common representation depends on which view is available at test time; it does not, however, depend on which view is being reconstructed.

In the SS-D setting, the baseline for the RGB-SS branch is the same; the baseline for depth reconstruction using the D-D branch gives an RMSE value of 0.51. The common representation improves the RGB IoU value to 54.3, and yields reconstruction errors of 0.39 and 0.53 when using the depth only or both views, respectively. In the cross-view scenario, using depth alone for segmentation drops the IoU value to 35.0, while using RGB for depth estimation yields a 0.72 error.

Setting | Branch | Independent: RGB / Depth | Common representation: RGB / Depth / RGB+D
NYU2 dataset
SS   | RGB-SS | 53.1 / -    | 54.1 / 41.2 / 57.6
SS   | D-SS   | -    / 37.1 | 54.2 / 41.1 / 57.7
SS-D | RGB-SS | 53.1 / -    | 54.3 / 35.0 / 55.2
SS-D | D-D    | -    / 0.51 | 0.72 / 0.39 / 0.53
SUN dataset
SS   | RGB-SS | 39.7 / -    | 39.4 / 31.1 / 42.4
SS   | D-SS   | -    / 31.1 | 39.4 / 31.1 / 42.3
SS-D | RGB-SS | 39.7 / -    | 39.3 / 20.3 / 39.9
SS-D | D-D    | -    / 0.36 | 0.62 / 0.31 / 0.31
TABLE I: Independent and joint learning with one or two views at test time. D-D rows report RMSE (lower is better); all other rows report IoU (higher is better).
Methods | Sem. Segmentation: RGB / Depth / RGB+D | Depth Est.
NYU2 dataset
Our method           | 54.1 / 41.2 / 57.6 | 0.72
Li et al. [20]       | -                  | 0.82
Roy et al. [26]      | -                  | 0.74
Laina et al. [19]    | -                  | 0.57
Eigen et al. [6]     | -    / -    / 52.6 | 0.64
FuseNet-SF3 [13]     | -    / -    / 56.0 | -
MVCNet-MaxPool [23]  | -    / -    / 59.0 | -
SUN dataset
Our method           | 39.49 / 31.1 / 42.4 | 0.62
SegNet [1]           | 22.1 / -    / -    | -
Bayesian-SegNet [17] | 30.7 / -    / -    | -
Hazirbas et al. [13] | 32.4 / 28.8 / 33.6 | -
FuseNet-SF5 [13]     | -    / -    / 37.3 | -
DFCN-DCRF [16]       | -    / -    / 39.3 | -
Context-CRF [27]     | 42.3 / -    / -    | -
RefineNet [22]       | 45.9 / -    / -    | -
CFN [21]             | -    / -    / 48.1 | -
TABLE II: Comparison to the state of the art on different tasks.

Table I also reports evaluation results on the SUN dataset. Using both views does improve the performance; moreover, depth estimation benefits more from the common representation than the segmentation task does.

We compare our results to the state of the art on four typical scenarios for RGB-D images (see Table II). Our architecture is the only one able to cope with all four cases; moreover, it remains competitive with the highly specialized architectures [20, 15] that cope with only one or two scenarios.

IV-D Discussion

Both quantitative and qualitative results validated the effectiveness of learning the common representation from RGB and depth images. However the conducted experiments left some questions open; we discuss them in this section.

In addition to the results reported in Tables I and II, we tested a number of alternatives and drew some conclusions. First, adding the view correlation (see Section III-D) does not seem to improve either the common representation or the performance. Second, the scale-invariant loss for depth estimation, mentioned in Section III-B, does not seem to perform better than the standard L2 and smoothed L1 losses; all SS-D results in Tables I and II refer to the smoothed L1 loss.

The two-stage training of the network enables experimenting with a so-called frozen configuration. The modality branches trained at the first stage are frozen and extract feature maps for all RGB-D images in the dataset. Such a frozen configuration allowed testing different configurations of the common representation network before training the full network end-to-end. Below we mention some ideas for further improving the current architecture.

  1. The common representation is currently limited to one hidden layer. Using deeper multi-view autoencoders has been beneficial in the frozen case.

  2. Learning the common representation is currently implemented on one fixed scale of the RGB and depth feature maps. We consider replacing the one-fixed-scale MAE with multi-scale ones, at each level of the encoder-decoder networks.

  3. The ResNet101 model pre-trained on the COCO dataset fits the segmentation task well, but the depth estimation task to a lesser extent. We consider setting up a more appropriate pre-trained model, training one from scratch, or combining the two models [15].

V Conclusion

We proposed a new deep learning architecture for the tasks of semantic segmentation and depth prediction from RGB-D images. In the proposed architecture, the conventional feature fusion is replaced with a common deep representation of the RGB and depth views. Combined with encoder-decoder networks, the architecture allows jointly learning models for semantic segmentation and depth estimation based on their common representation. This approach offers several important advantages, such as using one modality at test time to build the common representation and reconstruct the missing modality. We reported a number of evaluation results on two standard RGB-D datasets. Both quantitative and qualitative results validated the effectiveness of learning the common representation from RGB and depth images.


  • [1] Vijay Badrinarayanan, Alex Kendall, and Roberto Cipolla. Segnet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. PAMI, 39(12):2481–2495, 2017.
  • [2] Tadas Baltrusaitis, Chaitanya Ahuja, and Louis-Philippe Morency. Multimodal machine learning: A survey and taxonomy. CoRR, abs/1705.09406, 2017.
  • [3] A. P. Sarath Chandar, Mitesh M. Khapra, Hugo Larochelle, and Balaraman Ravindran. Correlational neural networks. Neural computation, 28 2:257–85, 2016.
  • [4] Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L. Yuille. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE Trans. PAMI., 40(4):834–848, 2018.
  • [5] Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. The cityscapes dataset for semantic urban scene understanding. Proc. CVPR, pages 3213–3223, 2016.
  • [6] David Eigen and Rob Fergus. Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture. In Proc. IEEE ICCV, pages 2650–2658, 2015.
  • [7] David Eigen, Christian Puhrsch, and Rob Fergus. Depth map prediction from a single image using a multi-scale deep network. In Proc. NIPS, pages 2366–2374, Cambridge, MA, USA, 2014. MIT Press.
  • [8] Andreas Eitel, Jost Tobias Springenberg, Luciano Spinello, Martin A. Riedmiller, and Wolfram Burgard. Multimodal deep learning for robust rgb-d object recognition. IEEE/RSJ Intern. Conf. Intelligent Robots and Systems (IROS), pages 681–687, 2015.
  • [9] Alberto Garcia-Garcia, Sergio Orts, Sergiu Oprea, Victor Villena-Martinez, and José García Rodríguez. A review on deep learning techniques applied to semantic segmentation. CoRR, abs/1704.06857, 2017.
  • [10] Ross B. Girshick. Fast R-CNN. CoRR, abs/1504.08083, 2015.
  • [11] Saurabh Gupta, Ross B. Girshick, Pablo Andrés Arbeláez, and Jitendra Malik. Learning rich features from rgb-d images for object detection and segmentation. In Proc. ECCV, 2014.
  • [12] Ankur Handa, Viorica Patraucean, Vijay Badrinarayanan, Simon Stent, and Roberto Cipolla. Understanding real world indoor scenes with synthetic data. In Proc. CVPR, pages 4077–4085, 2016.
  • [13] Caner Hazirbas, Lingni Ma, Csaba Domokos, and Daniel Cremers. Fusenet: Incorporating depth into semantic segmentation via fusion-based cnn architecture. In ACCV, 2016.
  • [14] Joel Janai, Fatma Güney, Aseem Behl, and Andreas Geiger. Computer vision for autonomous vehicles: Problems, datasets and state-of-the-art. CoRR, abs/1704.05519, 2017.
  • [15] Jindong Jiang, Zhijun Zhang, Yongqian Huang, and Lunan Zheng. Incorporating depth into both CNN and CRF for indoor semantic segmentation. CoRR, abs/1705.07383, 2017.
  • [16] Jindong Jiang, Zhijun Zhang, Yongqian Huang, and Lunan Zheng. Incorporating depth into both cnn and crf for indoor semantic segmentation. In IEEE Intern. Conf. Software Engineering and Service Science (ICSESS), pages 525–530. IEEE, 2017.
  • [17] Alex Kendall, Vijay Badrinarayanan, and Roberto Cipolla. Bayesian segnet: Model uncertainty in deep convolutional encoder-decoder architectures for scene understanding. arXiv preprint arXiv:1511.02680, 2015.
  • [18] Lubor Ladicky, Jianbo Shi, and Marc Pollefeys. Pulling things out of perspective. Proc. IEEE CVPR, pages 89–96, 2014.
  • [19] Iro Laina, Christian Rupprecht, Vasileios Belagiannis, Federico Tombari, and Nassir Navab. Deeper depth prediction with fully convolutional residual networks. In 4th Intern. Conf. on 3D Vision (3DV), pages 239–248. IEEE, 2016.
  • [20] Bo Li, Chunhua Shen, Yuchao Dai, Anton Van Den Hengel, and Mingyi He. Depth and surface normal estimation from monocular images using regression on deep features and hierarchical crfs. In Proc. CVPR, pages 1119–1127, 2015.
  • [21] Di Lin, Guangyong Chen, Daniel Cohen-Or, Pheng-Ann Heng, and Hui Huang. Cascaded feature network for semantic segmentation of rgb-d images. In Proc. ICCV, pages 1320–1328. IEEE, 2017.
  • [22] Guosheng Lin, Anton Milan, Chunhua Shen, and Ian D Reid. Refinenet: Multi-path refinement networks for high-resolution semantic segmentation. In Proc. CVPR, volume 1, page 5, 2017.
  • [23] Lingni Ma, Jörg Stückler, Christian Kerl, and Daniel Cremers. Multi-view deep learning for consistent semantic mapping with rgb-d cameras. IEEE/RSJ Inter. Conf. Intelligent Robots and Systems (IROS), pages 598–605, 2017.
  • [24] Nathan Silberman, Derek Hoiem, Pushmeet Kohli, and Rob Fergus. Indoor segmentation and support inference from rgbd images. In Proc. ECCV, 2012.
  • [25] Jiquan Ngiam, Aditya Khosla, Mingyu Kim, Juhan Nam, Honglak Lee, and Andrew Y. Ng. Multimodal deep learning. In Proc. ICML, 2011.
  • [26] Anirban Roy and Sinisa Todorovic. Monocular depth estimation using neural regression forest. In Proc. CVPR, pages 5506–5514, 2016.
  • [27] Falong Shen, Rui Gan, Shuicheng Yan, and Gang Zeng. Semantic segmentation via structured patch prediction, context crf and guidance crf. In Proc. IEEE CVPR, volume 8, 2017.
  • [28] Shuran Song, Samuel P. Lichtenberg, and Jianxiong Xiao. SUN RGB-D: A RGB-D scene understanding benchmark suite. In Proc. CVPR, 2015.
  • [29] A. Valada, J. Vertens, A. Dhall, and W. Burgard. Adapnet: Adaptive semantic segmentation in adverse environmental conditions. In 2017 IEEE Intern. Conf. Robotics and Automation (ICRA), pages 4644–4651, May 2017.
  • [30] Peng Wang, Xiaohui Shen, Zhe L. Lin, Scott Cohen, Brian L. Price, and Alan L. Yuille. Towards unified depth and semantic prediction from a single image. Proc. IEEE CVPR, pages 2800–2809, 2015.
  • [31] Yinda Zhang and Thomas A. Funkhouser. Deep depth completion of a single RGB-D image. CoRR, abs/1803.09326, 2018.