AcED: Accurate and Edge-consistent Monocular Depth Estimation

06/16/2020
by   Kunal Swami, et al.

Single image depth estimation is a challenging problem. The current state-of-the-art method formulates the problem as one of ordinal regression. However, the formulation is not fully differentiable and depth maps are not generated in an end-to-end fashion. The method uses a naïve threshold strategy to determine per-pixel depth labels, which results in significant discretization errors. For the first time, we formulate a fully differentiable ordinal regression and train the network in an end-to-end fashion. This enables us to include boundary and smoothness constraints in the optimization function, leading to smooth and edge-consistent depth maps. A novel per-pixel confidence map computation for depth refinement is also proposed. Extensive evaluation of the proposed model on challenging benchmarks reveals its superiority over recent state-of-the-art methods, both quantitatively and qualitatively. Additionally, we demonstrate the practical utility of the proposed method for a single camera bokeh solution using an in-house dataset of challenging real-life images.


1 Introduction

Single image depth estimation (abbreviated as SIDE hereafter) has applications in augmented reality, robotics and artistic image enhancement, such as bokeh rendering [28]. However, the problem is highly under-constrained: a given 2D image can be mapped to many distinct 3D scenes in the real world. Recently, with the advent of deep learning, the problem of SIDE has witnessed significant progress [6, 5, 21, 15, 29, 18, 1, 19, 7, 31, 10, 8, 30, 20, 22, 16, 17]. Many SIDE methods train an encoder-decoder style network using a pixel-wise regression loss [2]. However, it is challenging for the network to regress the true depth of a scene: with focal length adjustments, two different cameras placed at different distances from a target scene can capture identical 2D images [32, 4].

Inspired by [1], Fu et al. [7] recently proposed an ordinal regression based approach called DORN, which outperformed other SIDE methods by a significant margin. However, it needs to be noted that DORN is trained using an ordinal classification loss, while at inference the authors apply a naïve threshold strategy on the classification output to determine per-pixel depth labels. The depth maps generated with this strategy do not obey smoothness and boundary constraints, and have severe discretization artifacts (see Fig. 1c). Consequently, the depth maps are not suitable for practical applications. In this work, we solve this important problem, while also advancing the state-of-the-art on challenging benchmarks.

Figure 1: Example comparison with the current state-of-the-art [7]. From left to right: (a) input, (b) ground-truth, (c) DORN [7], (d) AcED.

2 Related Work

Eigen et al. [6, 5] were the first to attempt deep learning based SIDE. They also proposed a scale-invariant loss term for training a robust depth estimation network. [21] integrated conditional random fields (CRFs) into a CNN to learn the unary and pairwise potentials of the CRF. [15] proposed a residual learning based depth estimation model with faster upconvolutions. [18] proposed a two-streamed network to estimate depth values and depth gradients separately. [1] presented a pioneering approach by formulating depth estimation as a classification problem, which outperformed all the previous methods. [19] proposed a deep attention based classification network; it re-weights channels in skip connections to handle varying depth ranges. Recently, [7] proposed an ordinal classification based approach which outperformed all the existing methods. However, depth estimation in [7] is not performed in an end-to-end fashion, leading to sub-optimal results and depth artifacts. [31] proposed to estimate a coarse scale relative depth map which serves as a global scene prior for estimating true depth. [10] advocated the use of large rectangular convolution kernels based on observations of depth variation along the vertical and horizontal directions. [8] and [30] used the attention mechanism of [19] to fuse multiscale feature maps. [20] proposed a piecewise planar depth estimation network that performs a plane segmentation task. [16] utilized a Fourier transform based approach to combine multiple depth estimates.

Existing methods primarily focus on improving pixel-wise accuracy, which does not usually correlate with qualitative aspects such as depth consistency, edge accuracy and smooth depth variations [14]. As a result, many current state-of-the-art methods generate depth maps which are not suited for practical applications. To summarize, the major limitations of existing methods are: (a) many methods adopt a pixel-wise regression approach, which is a difficult learning task; (b) classification based SIDE approaches do not utilize the output probability distribution during training; (c) many methods achieve good quantitative scores, yet the resulting depth maps lack practical utility. In this work, we address these important limitations with several novel formulations in network design and loss function. The proposed approach targets quantitative as well as qualitative aspects of depth estimation. The proposed model, AcED, generates Accurate and Edge-consistent Depth and achieves state-of-the-art results on challenging benchmarks. AcED also has practical utility, as it enables a challenging single camera bokeh application.

Following are the major contributions of this work: (a) a novel two stage SIDE approach comprising ordinal classification and pixel-wise regression; (b) a novel fully differentiable variant of ordinal regression for end-to-end training; (c) a novel confidence map computation technique derived from the proposed fully differentiable ordinal regression; (d) extensive experiments and ablation studies to demonstrate the advantages of our algorithmic choices; (e) a demonstration of the utility of the proposed model in a challenging real-life application.

3 Proposed Approach

Figure 2: Design of the proposed model AcED (best viewed in color).

3.1 Architecture Overview

Fig. 2 shows the detailed architecture of AcED; conceptually, it can be divided into three subnetworks:

3.1.1 Dense Feature Extraction

SIDE is an ill-posed problem and requires a high degree of scene understanding. Existing methods adopt a CNN pre-trained on a scene recognition task for dense feature extraction. Popular options include VGGNet [26], ResNet [9], DenseNet [12] and SENet [11]. In this work, we adopt SENet-154 as the backbone encoder network because of its superior performance on the image classification task.

3.1.2 Depth Estimation

This is the coarse scale depth estimation subnetwork, which is trained using the proposed ordinal regression loss. It estimates a coarse scale depth map along with a confidence map (see Section 3.3). This subnetwork (comprising the green blocks and the fully differentiable ordinal regression block in Fig. 2) upsamples the high-level feature maps using low-level information via skip connections.

3.1.3 Depth Refinement

This is the depth refinement subnetwork (see Section 3.4). It takes the coarse scale depth map, confidence map and multiscale low-level feature maps as input to correct low confidence areas and generate a depth map with improved structural information.
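Putting the three subnetworks together, the overall data flow can be summarized by the following minimal sketch in PyTorch. The module names, interfaces and the concatenation-based refinement input are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class AcEDSketch(nn.Module):
    """Structural sketch of the three subnetworks described above."""

    def __init__(self, encoder, depth_decoder, fusion, refiner):
        super().__init__()
        self.encoder = encoder              # 3.1.1 dense feature extraction (e.g. a SENet-154 trunk)
        self.depth_decoder = depth_decoder  # 3.1.2 coarse depth + confidence via ordinal regression
        self.fusion = fusion                # multiscale low-level feature fusion (Fig. 4)
        self.refiner = refiner              # 3.1.3 depth refinement

    def forward(self, image):
        feats = self.encoder(image)                          # multiscale feature maps
        coarse_depth, confidence = self.depth_decoder(feats)
        low_level = self.fusion(feats)                       # fused low-level features
        refined = self.refiner(torch.cat([coarse_depth, confidence, low_level], dim=1))
        return coarse_depth, confidence, refined
```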

3.2 Depth Discretization

In order to formulate depth estimation as a classification problem, the depth map is discretized into multiple classes, where each class corresponds to a unique depth value. Similar to [7], we adopt spacing increasing discretization. If the depth range of a given training dataset is $[\alpha, \beta]$ and $K$ is the desired number of discretization levels, a spacing-increasing discretization can be achieved by uniformly discretizing the depth range in logarithmic space. Mathematically, the depth discretization threshold $t_i$ is computed as follows:

$t_i = \exp\big(\log(\alpha) + \tfrac{i}{K}\log(\beta/\alpha)\big) \qquad (1)$

In Eq. 1, $i \in [0, K)$. We use the same value of $K$ in all the experiments; it is significantly less than the number of discretization levels used by DORN [7]. However, as will be shown in Section 4.4, the proposed model still outperforms DORN [7] and other recent state-of-the-art methods.
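As a concrete illustration, the spacing-increasing thresholds of Eq. 1 can be computed as in the short sketch below; the depth range and the value of K here are placeholders, not the values used in the paper.

```python
import numpy as np

def sid_thresholds(alpha, beta, K):
    """Return the K+1 spacing-increasing thresholds t_0..t_K of Eq. 1 (t_0 = alpha, t_K = beta)."""
    i = np.arange(K + 1)
    return np.exp(np.log(alpha) + i * np.log(beta / alpha) / K)

# Example: thresholds for an indoor depth range of roughly 0.7-10 m with 64 levels (placeholder values).
t = sid_thresholds(0.7, 10.0, K=64)
```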

3.3 Fully Differentiable Ordinal Regression

First, we explain the ordinal classification technique. As described in Section 3.2, the ground-truth depth maps are discretized into $K$ levels. $K$ binary classifiers are employed to train the depth estimation subnetwork, where the $k^{th}$ classifier learns to predict whether the depth value of a given pixel is greater than the depth value belonging to label $k$. To train the classifiers, a $K$-sized ground-truth rank vector is created for every pixel. As an example, if the actual depth value of a given pixel belongs to $(t_{l-1}, t_{l}]$, the ground-truth rank vector of the pixel is encoded such that the first $l$ values are set to 1 and the remaining $K-l$ values are set to 0. Fig. 3a shows the graphical representation of a sample rank vector. The depth estimation subnetwork outputs $2K$ feature maps, where every two consecutive feature maps correspond to the output of one binary classifier. The pixel-wise ordinal classification loss on this $2K$-channel output is computed as follows:
$\mathcal{L}_{ord}(w,h) = -\sum_{k=0}^{l(w,h)-1} \log P^{k}_{(w,h)} \;-\; \sum_{k=l(w,h)}^{K-1} \log\big(1 - P^{k}_{(w,h)}\big) \qquad (2)$
Figure 3: Enabling fully differentiable ordinal regression. (a) True distribution of a rank vector; (b) estimated distribution.

In Eq. 2, $l(w,h)$ is the ground-truth depth label. This loss is computed over all the pixels, indexed using the width and height tuple $(w,h)$. Here, $P^{k}_{(w,h)}$ is computed by a softmax operation over the $2k$ and $2k{+}1$ channels, where $k \in [0, K)$.
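The rank-vector encoding and the loss of Eq. 2 translate into a few lines of PyTorch; the sketch below is a hedged reference implementation in which the tensor names (`logits`, `gt_label`) and shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def ordinal_classification_loss(logits, gt_label):
    """logits: (B, 2K, H, W) classifier output; gt_label: (B, H, W) integer depth labels l(w,h)."""
    B, C, H, W = logits.shape
    K = C // 2
    # Pairwise softmax over channels (2k, 2k+1): P[:, k] = Pr(depth label > k).
    pairs = logits.view(B, K, 2, H, W)
    P = F.softmax(pairs, dim=2)[:, :, 1]                       # (B, K, H, W)
    # Ground-truth rank vector: 1 for all k < l(w,h), 0 afterwards.
    k = torch.arange(K, device=logits.device).view(1, K, 1, 1)
    rank_gt = (gt_label.unsqueeze(1) > k).float()
    loss = -(rank_gt * torch.log(P + 1e-8)
             + (1.0 - rank_gt) * torch.log(1.0 - P + 1e-8))
    return loss.mean(), P                                      # P is reused in the sketches below
```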

To generate the depth map from the classification output at inference time, Fu et al. [7] employ a naïve threshold technique and convert the estimated probability distribution of every pixel to a binary rank vector. The depth value of a pixel is then set to $t_{\hat{l}}$, where $\hat{l}$ denotes the count of 1s in the binarized rank vector and $t_{\hat{l}}$ is the corresponding depth discretization threshold (see Section 3.2). The depth map inferred in this manner does not follow boundary and smoothness constraints. Moreover, as a result of this hard inference technique, the network cannot be trained in an end-to-end manner, leading to suboptimal results.
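For contrast with the differentiable formulation introduced next, this hard decoding amounts to the following two lines, assuming `P` from the sketch above and a tensor `thresholds` holding the K+1 values of Eq. 1.

```python
# Non-differentiable DORN-style decoding: count probabilities above 0.5 and look up the threshold.
hard_label = (P > 0.5).sum(dim=1)        # (B, H, W) integer labels
hard_depth = thresholds[hard_label]      # per-pixel depth from the SID thresholds
```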

In this work, we first analyze the true and estimated probability distributions of the rank vector of a pixel (see Fig. 3). Mathematically, the area under the true distribution curve in Fig. 3a corresponds to the true depth label of the pixel. Similarly, for the estimated distribution in Fig. 3b, the area under the distribution curve corresponds to the expected depth label of the pixel. Hence, the expected label of a pixel can be computed from its estimated rank vector as follows:

$\hat{l}(w,h) = \sum_{k=0}^{K-1} P^{k}_{(w,h)} \qquad (3)$

This computation is fully differentiable and allows us to train the network in a completely end-to-end fashion. It also enables continuous and smooth depth transitions. The expected depth labels (treated as $i$ in Eq. 1) obtained for all pixels using Eq. 3 are converted to approximate true depths (coarse depth in Fig. 2) using Eq. 1; the depth range $[\alpha, \beta]$ is taken to be the same as that of the training dataset.
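In code, the soft decoding of Eq. 3 followed by the continuous version of Eq. 1 looks roughly as follows; `P` is the probability tensor from the loss sketch above, and `alpha`, `beta` stand for the dataset depth range (illustrative names).

```python
import math
import torch

expected_label = P.sum(dim=1)          # Eq. 3: continuous label in [0, K], fully differentiable
K = P.shape[1]
# Continuous form of Eq. 1: convert the expected label to an approximate metric depth.
coarse_depth = torch.exp(math.log(alpha) + expected_label * math.log(beta / alpha) / K)
```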

Additionally, we propose to measure the confidence associated with the coarse depth map estimation. The confidence measure for the estimated depth of a given pixel $(w,h)$ can be defined in terms of the deviation of its estimated rank vector from the expected depth label. Ideally, the estimated rank vector should have probabilities close to 1 before the expected depth label and probabilities close to 0 after the expected depth label. Hence, the confidence value of a pixel can be computed as follows:

$C(w,h) = 1 - \frac{1}{K}\Bigg(\sum_{k=0}^{\hat{l}(w,h)-1}\big(1 - P^{k}_{(w,h)}\big) + \sum_{k=\hat{l}(w,h)}^{K-1} P^{k}_{(w,h)}\Bigg) \qquad (4)$

Here, $\hat{l}(w,h)$ is the expected depth label for the pixel $(w,h)$.
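A plausible vectorized reading of this confidence measure is sketched below (an assumption consistent with the description above, not necessarily the authors' exact form): confidence is high when the estimated rank vector is close to the ideal step function located at the expected label.

```python
# P: (B, K, H, W) rank-vector probabilities; expected_label: (B, H, W) from Eq. 3; K as above.
k = torch.arange(K, device=P.device).view(1, K, 1, 1)
ideal = (k < expected_label.unsqueeze(1)).float()     # 1 before the expected label, 0 after
confidence = 1.0 - (P - ideal).abs().mean(dim=1)      # (B, H, W); higher means more confident
```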

Figure 4: Multiscale feature fusion module. Input feature maps from the first four encoder blocks are upsampled to a common scale and combined using a convolution layer after residual refinement.

3.4 Structure Refinement Module

We add a structure refinement module to refine the coarse scale depth map. This is a residual block with two convolution layers which takes the coarse scale depth map, confidence map and the output of the multiscale feature fusion module as input to generate a refined depth map. Fig. 4 shows the design of the multiscale feature fusion module, which takes low-level feature maps from the encoder as input and upsamples them to a desired common scale. The upsampled low-level feature maps are then processed by separate residual blocks with two convolution layers each, and finally all the feature maps are concatenated and merged using a convolution layer.
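A compact sketch of the fusion and refinement modules is given below; the channel counts, kernel sizes and number of low-level inputs are assumptions chosen for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionAndRefine(nn.Module):
    """Illustrative multiscale feature fusion (Fig. 4) plus structure refinement (Sec. 3.4)."""

    def __init__(self, low_level_channels=(64, 128, 256, 512), mid=32):
        super().__init__()
        self.proj = nn.ModuleList([nn.Conv2d(c, mid, 1) for c in low_level_channels])
        self.res = nn.ModuleList([
            nn.Sequential(nn.Conv2d(mid, mid, 3, padding=1), nn.ReLU(inplace=True),
                          nn.Conv2d(mid, mid, 3, padding=1))
            for _ in low_level_channels])
        self.merge = nn.Conv2d(mid * len(low_level_channels), mid, 1)
        # Refinement block over [coarse depth, confidence, fused low-level features].
        self.refine = nn.Sequential(nn.Conv2d(mid + 2, mid, 3, padding=1),
                                    nn.ReLU(inplace=True),
                                    nn.Conv2d(mid, 1, 3, padding=1))

    def forward(self, feats, coarse_depth, confidence):
        # feats: list of low-level encoder maps; coarse_depth, confidence: (B, 1, H, W).
        size = coarse_depth.shape[-2:]
        up = [F.interpolate(p(f), size=size, mode='bilinear', align_corners=False)
              for p, f in zip(self.proj, feats)]
        refined_feats = [u + r(u) for u, r in zip(up, self.res)]   # residual refinement
        fused = self.merge(torch.cat(refined_feats, dim=1))
        x = torch.cat([coarse_depth, confidence, fused], dim=1)
        return coarse_depth + self.refine(x)                       # residual depth correction
```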

3.5 Pixel-wise Depth Regression Losses

The depth refinement subnetwork is trained using pixel-wise regression losses. We use the natural logarithm of the absolute difference between the estimated and ground-truth depths, and of the absolute difference between their gradients, as our loss function. The weights of these two loss terms are determined empirically using the validation dataset. In Eq. 5a and 5b, $\hat{d}$ and $d$ refer to the estimated and ground-truth depth maps respectively.

$\mathcal{L}_{depth} = \frac{1}{N}\sum_{(w,h)} \log\big(|\hat{d}(w,h) - d(w,h)| + \epsilon\big) \qquad (5a)$

$\mathcal{L}_{grad} = \frac{1}{N}\sum_{(w,h)} \log\big(|\nabla_{x}\hat{d}(w,h) - \nabla_{x}d(w,h)| + |\nabla_{y}\hat{d}(w,h) - \nabla_{y}d(w,h)| + \epsilon\big) \qquad (5b)$
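A hedged sketch of these two regression terms follows; the small additive constant and the relative weights are assumptions (the paper tunes the weights on the validation set).

```python
import torch

def depth_and_gradient_loss(pred, gt, w_depth=1.0, w_grad=1.0, eps=0.5):
    """pred, gt: (B, 1, H, W) depth maps. Log of absolute depth and gradient errors (Eq. 5a, 5b)."""
    l_depth = torch.log(torch.abs(pred - gt) + eps).mean()
    dx_p, dy_p = pred[..., :, 1:] - pred[..., :, :-1], pred[..., 1:, :] - pred[..., :-1, :]
    dx_g, dy_g = gt[..., :, 1:] - gt[..., :, :-1], gt[..., 1:, :] - gt[..., :-1, :]
    l_grad = (torch.log(torch.abs(dx_p - dx_g) + eps).mean()
              + torch.log(torch.abs(dy_p - dy_g) + eps).mean())
    return w_depth * l_depth + w_grad * l_grad
```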

4 Experiments and Analysis

4.1 Datasets

4.1.1 NYU Depth V2

NYU Depth V2 [25] is a dataset of indoor scenes captured with a Microsoft Kinect. Like existing methods, we train our model on the predefined training scenes and evaluate on the official test images. To reduce training time, we sample a subset of training images from these scenes that is several times smaller than the set used by DORN [7]. For training, input images are downscaled from the original 640x480 resolution and randomly cropped; at test time, images are resized in the same manner and the estimated depth map is upsampled back to 640x480 for comparison against the ground-truth. We adopt the evaluation procedure of recent methods [3, 10, 16, 17], which use center crops of the estimated and ground-truth depth maps.

4.1.2 iBims-1

iBims-1 [14] is a new benchmark which aims at evaluating depth estimation methods on important qualitative aspects, such as depth boundary errors, depth consistency and robustness to the depth range. The dataset provides images with dense depth ground-truth and is used only for evaluation. Thus, it is also useful for testing the generalization of SIDE methods.

Figure 5: Qualitative evaluation of depth refinement. From left to right: input image, coarse scale depth map, confidence map and refined depth map. Notice the region inside the black circle and the areas near edges.
Method rel log10 rms δ<1.25 δ<1.25² δ<1.25³
Baseline 0.122 0.052 0.546 85.6 97.1 99.3
AcED 0.115 0.049 0.528 87.04 97.4 99.3

Table 1: Quantitative evaluation of model variants in ablation study.

4.2 Implementation Details

The PyTorch framework was used for implementation. Adam optimization [13] was used with a fixed initial learning rate and momentum, and a polynomial learning rate decay policy was applied. The proposed model was trained on the NYU Depth V2 dataset [25], while the iBims-1 dataset was used only for evaluation. The model was trained for a fixed number of epochs on NVIDIA P40 GPUs. Data augmentation in the form of random crop, brightness, contrast and color shift was performed on the fly.
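The optimizer and decay policy described above map to a few lines of PyTorch; the learning rate, decay power and `max_epochs` below are placeholders, and `model` is assumed to be the AcED network.

```python
import torch

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999))
# Polynomial learning rate decay: lr(epoch) = lr0 * (1 - epoch / max_epochs) ** power
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lambda epoch: (1.0 - epoch / max_epochs) ** 0.9)
```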

Figure 6: Qualitative comparison on the NYU Depth V2 dataset. From left to right: (a) input, (b) ground-truth, (c) DORN [7], (d) AcED.
Method rel log10 rms δ<1.25 δ<1.25² δ<1.25³
Eigen & Fergus [6] 0.158 - 0.639 77.1 95.0 98.8
Chakrabarti [3] 0.149 - 0.620 80.6 95.8 98.7
Xu [29] 0.121 0.052 0.586 81.1 95.4 98.7
Li [18] 0.143 0.063 0.635 78.8 95.8 99.1
Zheng [31] 0.139 0.059 0.582 81.4 96.0 99.1
Heo [10] 0.135 0.058 0.571 81.6 96.4 99.2
Xu [30] 0.125 0.057 0.593 80.6 95.2 98.6
Liu [20] 0.142 0.060 0.514 81.2 95.7 98.9
Qi [22] 0.128 0.057 0.569 83.4 96.0 99.0
Lee [16] 0.139 - 0.572 81.5 96.3 99.1
DORN [7] 0.116 0.051 0.547 85.6 96.1 98.6
Lee [17] 0.131 - 0.538 83.7 97.1 99.4
Zou [33] 0.119 0.052 0.580 85.5 96.6 99.1
AcED 0.115 0.049 0.528 87.04 97.4 99.3

Table 2: Quantitative comparison with recent state-of-the-art on NYU Depth V2 (as per evaluation procedure in Section 4.1.1).
Figure 7: Qualitative comparison on the iBims-1 dataset. From left to right: (a) input, (b) ground-truth, (c) DORN [7], (d) AcED. Note: iBims-1 has a different depth range and is not used for training.
Method rel log10 rms δ<1.25 δ<1.25² δ<1.25³ DDE PE OE DBE
Eigen [5] 0.25 0.13 1.26 0.47 0.78 0.93 79.88 5.97 17.65 4.05
Laina [15] 0.26 0.13 1.20 0.50 0.78 0.91 81.02 6.46 19.13 6.19
Liu [21] 0.30 0.13 1.26 0.48 0.78 0.91 79.70 8.45 28.69 2.42
Li [18] 0.22 0.11 1.09 0.58 0.85 0.94 83.71 7.82 22.20 3.90
Liu [20] 0.29 0.17 1.45 0.41 0.70 0.86 71.24 7.26 17.24 4.84
DORN [7] 0.24 0.12 1.13 0.55 0.81 0.92 82.78 10.50 23.83 4.07
SharpNet [23] 0.26 0.11 1.07 0.59 0.84 0.94 84.03 9.95 25.67 3.52
AcED 0.20 0.10 1.03 0.60 0.87 0.95 84.96 5.67 16.52 2.06

Table 3: Quantitative evaluation on iBims-1 leaderboard.

4.3 Ablation Study

In order to justify our algorithmic choices, we train and evaluate the following two models on the NYU Depth V2 dataset: (a) Baseline: this model does not include the confidence map computation and the depth refinement submodule, and is trained using the ordinal classification technique. (b) AcED: this model includes the confidence map computation and the depth refinement submodule, and is trained from scratch in an end-to-end fashion with the same settings as the baseline model.

Table 1 shows the quantitative comparison between the baseline model and AcED. AcED achieves a significant improvement in all the quantitative metrics, demonstrating the benefit of the proposed network and loss function design. In Fig. 5, it can be observed that the confidence map shows low confidence values near small gaps and occlusion regions, and that these low confidence areas are corrected in the refined depth map.

4.4 Results and Discussions

Finally, we evaluate AcED against the state-of-the-art methods. Standard metrics, viz., mean absolute relative error (rel), mean log10 error, root mean squared error (rms) and accuracy under different thresholds ($\delta < 1.25^{i}$, where $i \in \{1, 2, 3\}$) are used for evaluation (for a detailed description refer to [6]). Additionally, we use the new metrics proposed in [14] for evaluating qualitative aspects, such as depth boundary error (DBE), directed depth error (DDE) and planarity error (PE). DDE measures the accuracy of depth with respect to a given plane, while PE and the orientation error (OE) together reflect the accuracy of object shapes.
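For reference, the standard quantitative metrics listed above can be computed as in the following straightforward sketch (evaluated on valid ground-truth pixels).

```python
import numpy as np

def eval_metrics(pred, gt):
    """pred, gt: flat arrays of predicted and ground-truth depths over valid pixels."""
    rel = np.mean(np.abs(pred - gt) / gt)                      # mean absolute relative error
    log10 = np.mean(np.abs(np.log10(pred) - np.log10(gt)))     # mean log10 error
    rms = np.sqrt(np.mean((pred - gt) ** 2))                   # root mean squared error
    ratio = np.maximum(pred / gt, gt / pred)
    deltas = [np.mean(ratio < 1.25 ** i) for i in (1, 2, 3)]   # threshold accuracies
    return rel, log10, rms, deltas
```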

Table 2 shows the quantitative comparison of AcED with state-of-the-art methods on the NYU Depth V2 dataset. AcED outperforms the recent state-of-the-art on the majority of metrics. The $\delta < 1.25$ accuracy of AcED is about 1.4 percentage points better than the second best score. The slightly inferior rms value of AcED can be attributed to the spacing increasing discretization coupled with a long-tailed depth distribution, which leads to increased error in far depth regions. In Fig. 6, the qualitative results show clear benefits of the proposed approach: the depth maps of AcED are visually closer to the ground-truth, with smooth depth variations and sharper edges compared to DORN [7].

Table 3 and Fig. 7 respectively show the quantitative and qualitative results of AcED on the iBims-1 dataset [14]. Note that iBims-1 is used only for evaluation and its depth range is considerably different from NYU Depth V2. AcED tops the official leaderboard of the iBims-1 benchmark with significant improvements in several metrics. AcED scores better on the metrics associated with qualitative aspects, such as DDE, PE and DBE. The DBE of AcED is approximately 17.5% lower than the second best score, indicating high accuracy of depth boundaries. The lower PE and OE values of AcED reveal that it is able to preserve object shapes in the depth map much better than the other methods.

It is important to note that [7, 16] perform multiple inferences or combine multiple depth estimates to generate the final depth map. In contrast, AcED performs depth estimation in one forward pass. Furthermore, DORN [7] uses more depth discretization levels and a much larger number of training samples, whereas AcED is trained with fewer discretization levels and only a fraction of the training samples. AcED still outperforms DORN [7], which can be attributed to the proposed novel formulations that enable end-to-end optimization of the network.

Figure 8: Practical utility of AcED in synthesizing the bokeh effect. From left to right: (a) input, (b) AcED depth, (c) bokeh rendering. Notice the increasing blur strength with depth. Faces are masked for anonymity.

Finally, Fig. 8 demonstrates the practical utility of AcED in the challenging single camera bokeh application. AcED was first trained using our in-house synthetic dataset [27] containing realistic human-centric images with dense depth ground-truth. To reduce the computation load for this task, the lightweight MobileNetV2 [24] model was employed as the backbone encoder network. The depth maps generated by AcED on real-life images were combined with our human segmentation mask [28] to apply a realistic bokeh effect with varying background blur. The challenging multi-person use-case in the first row of Fig. 8 shows an impressive bokeh result, owing to the accurate and edge-consistent depth map generated by AcED.
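A simplified illustration of such depth-driven bokeh rendering is sketched below (not the production pipeline): blur strength grows with estimated depth and the segmentation mask keeps the subject sharp. The function and argument names are hypothetical.

```python
import cv2
import numpy as np

def render_bokeh(image, depth, person_mask, num_layers=4, max_kernel=21):
    """image: HxWx3 uint8, depth: HxW float, person_mask: HxW in {0, 1}. Returns a bokeh image."""
    out = image.astype(np.float32)
    d = (depth - depth.min()) / (depth.max() - depth.min() + 1e-6)   # normalize depth to [0, 1]
    for i in range(1, num_layers + 1):
        k = 2 * (i * max_kernel // (2 * num_layers)) + 1             # odd, increasing kernel size
        blurred = cv2.GaussianBlur(out, (k, k), 0)
        far = (d >= i / num_layers).astype(np.float32)[..., None]    # progressively blur far pixels
        out = far * blurred + (1.0 - far) * out
    keep = person_mask.astype(np.float32)[..., None]                 # keep the subject sharp
    return (keep * image + (1.0 - keep) * out).astype(np.uint8)
```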

5 Conclusions

A novel deep learning based two stage depth estimation model was proposed. This is the first work in the literature to propose a two stage approach comprising ordinal regression and pixel-wise regression for depth estimation. The work introduced a fully differentiable variant of ordinal regression for depth estimation, along with a novel confidence map computation method for depth refinement. Systematic experiments were performed and the benefits of the proposed formulations were evaluated in an ablation study. The proposed model significantly outperformed recent state-of-the-art methods on challenging benchmark datasets and achieved the top rank on one benchmark. The utility of the proposed model in a challenging practical application was also demonstrated.

References

  • [1] Y. Cao, Z. Wu, and C. Shen (2018) Estimating depth from monocular images as classification using deep fully convolutional residual networks. IEEE Trans. Circuits Syst. Video Technol.
  • [2] M. Carvalho, B. L. Saux, P. Trouvé-Peloux, A. Almansa, and F. Champagnat (2018) On regression losses for deep depth estimation. In ICIP.
  • [3] A. Chakrabarti, J. Shao, and G. Shakhnarovich (2016) Depth from a single image by harmonizing overcomplete local network predictions. In NeurIPS.
  • [4] W. Chen, Z. Fu, D. Yang, and J. Deng (2016) Single-image depth perception in the wild. In NeurIPS.
  • [5] D. Eigen and R. Fergus (2015) Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture. In ICCV.
  • [6] D. Eigen, C. Puhrsch, and R. Fergus (2014) Depth map prediction from a single image using a multi-scale deep network. In NeurIPS.
  • [7] H. Fu, M. Gong, C. Wang, K. Batmanghelich, and D. Tao (2018) Deep ordinal regression network for monocular depth estimation. In CVPR.
  • [8] Z. Hao, Y. Li, S. You, and F. Lu (2018) Detail preserving depth estimation from a single image using attention guided networks. In International Conference on 3D Vision (3DV).
  • [9] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In CVPR.
  • [10] M. Heo, J. Lee, K. Kim, H. Kim, and C. Kim (2018) Monocular depth estimation using whole strip masking and reliability-based refinement. In ECCV.
  • [11] J. Hu, L. Shen, and G. Sun (2018) Squeeze-and-excitation networks. In CVPR.
  • [12] G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger (2017) Densely connected convolutional networks. In CVPR.
  • [13] D. P. Kingma and J. Ba (2015) Adam: a method for stochastic optimization. In ICLR.
  • [14] T. Koch, L. Liebel, F. Fraundorfer, and M. Körner (2018) Evaluation of CNN-based single-image depth estimation methods. In ECCV Workshops.
  • [15] I. Laina, C. Rupprecht, V. Belagiannis, F. Tombari, and N. Navab (2016) Deeper depth prediction with fully convolutional residual networks. In International Conference on 3D Vision (3DV).
  • [16] J. Lee, M. Heo, K. Kim, and C. Kim (2018) Single-image depth estimation based on Fourier domain analysis. In CVPR.
  • [17] J. Lee and C. Kim (2019) Monocular depth estimation using relative depth maps. In CVPR.
  • [18] J. Li, R. Klein, and A. Yao (2017) A two-streamed network for estimating fine-scaled depth maps from single RGB images. In ICCV.
  • [19] R. Li, K. Xian, C. Shen, Z. Cao, H. Lu, and L. Hang (2018) Deep attention-based classification network for robust depth prediction. In ACCV.
  • [20] C. Liu, J. Yang, D. Ceylan, E. Yumer, and Y. Furukawa (2018) PlaneNet: piece-wise planar reconstruction from a single RGB image. In CVPR.
  • [21] F. Liu, C. Shen, and G. Lin (2015) Deep convolutional neural fields for depth estimation from a single image. In CVPR.
  • [22] X. Qi, R. Liao, Z. Liu, R. Urtasun, and J. Jia (2018) GeoNet: geometric neural network for joint depth and surface normal estimation. In CVPR.
  • [23] M. Ramamonjisoa and V. Lepetit (2019) SharpNet: fast and accurate recovery of occluding contours in monocular depth estimation. In ICCV Workshops.
  • [24] M. Sandler, A. G. Howard, M. Zhu, A. Zhmoginov, and L. Chen (2018) MobileNetV2: inverted residuals and linear bottlenecks. In CVPR.
  • [25] N. Silberman, D. Hoiem, P. Kohli, and R. Fergus (2012) Indoor segmentation and support inference from RGBD images. In ECCV.
  • [26] K. Simonyan and A. Zisserman (2015) Very deep convolutional networks for large-scale image recognition. In ICLR.
  • [27] K. Swami, K. Raghavan, N. Pelluri, R. Sarkar, and P. Bajpai (2019) DISCO: depth inference from stereo using context. In IEEE ICME.
  • [28] N. Wadhwa, R. Garg, D. E. Jacobs, B. E. Feldman, N. Kanazawa, R. Carroll, Y. Movshovitz-Attias, J. T. Barron, Y. Pritch, and M. Levoy (2018) Synthetic depth-of-field with a single-camera mobile phone. ACM Trans. Graph.
  • [29] D. Xu, E. Ricci, W. Ouyang, X. Wang, and N. Sebe (2017) Multi-scale continuous CRFs as sequential deep networks for monocular depth estimation. In CVPR.
  • [30] D. Xu, W. Wang, H. Tang, H. Liu, N. Sebe, and E. Ricci (2018) Structured attention guided convolutional neural fields for monocular depth estimation. In CVPR.
  • [31] K. Zheng, Z. Zha, Y. Cao, X. Chen, and F. Wu (2018) LA-Net: layout-aware dense network for monocular depth estimation. In ACM Multimedia.
  • [32] D. Zoran, P. Isola, D. Krishnan, and W. T. Freeman (2015) Learning ordinal relationships for mid-level vision. In ICCV.
  • [33] H. Zou, K. Xian, J. Yang, and Z. Cao (2019) Mean-variance loss for monocular depth estimation. In ICIP.