ADAADepth: Adapting Data Augmentation and Attention for Self-Supervised Monocular Depth Estimation

03/01/2021 ∙ by Vinay Kaushik, et al. ∙ Indian Institute of Technology Delhi

Self-supervised learning of depth has been a highly studied topic of research as it alleviates the need for ground truth annotations when predicting depth. Depth is learnt as an intermediate solution to the task of view synthesis, utilising warped photometric consistency. Although this gives good results when trained using stereo data, the predicted depth is still sensitive to noise, illumination changes and specular reflections. Also, occlusion can be tackled better by learning depth from a single camera. We propose ADAA, utilising depth augmentation as depth supervision for learning accurate and robust depth. We propose a relational self-attention module that learns rich contextual features and further enhances depth results. We also optimize the auto-masking strategy across all losses by enforcing L1 regularisation over the mask. Our novel progressive training strategy first learns depth at a lower resolution and then progresses to the original resolution with only slight additional training. We utilise a ResNet18 encoder, learning features for the prediction of both depth and pose. We evaluate our predicted depth on the standard KITTI driving dataset and achieve state-of-the-art results for monocular depth estimation whilst having a significantly lower number of trainable parameters in our deep learning framework. We also evaluate our model on the Make3D dataset, showing better generalization than other methods.




I Introduction

Depth from a single image has been of utmost importance in the computer vision community with the advent of deep learning. Depth prediction provides solutions for several applications including smart mobility [27], smartphone AR [37], 3D zooming [1], face anti-spoofing [24], image dehazing [20], etc. Humans are able to perceive depth in the visible world by utilising cues like occlusion, texture differences, relative scale of neighbouring objects, lighting and shading variations along with object semantics.

Multi-view and stereo methods are computationally expensive and have high memory overheads. Depth from a single image drastically reduces these complexities and is favourable for real-time systems. Deep learning provides the tools to predict depth from a single image by transforming the task into a learning problem [7, 8], given ground truth depth annotations. However, capturing vast amounts of ground truth data in different scenarios is a formidable task. Self-supervision for computing depth eliminates this limitation by utilising the photometric warp for learning depth [12, 10].

Learning from a monocular sequence is challenging due to scale ambiguity and unknown camera pose. Thus, there's an explicit need to compute camera egomotion [13, 48]. The necessity of jointly learning depth and egomotion means that the quality of depth is highly dependent on the correctness of the camera pose. Also, the static scene assumption in the self-supervised learning paradigm leads to holes and aberrations in pixels belonging to moving objects in the scene. Occlusions at image boundaries make it difficult to learn depth near boundary regions (the bottom image region for a forward moving camera). Although there have been innovations in deep learning architectures [14, 17], loss functions [12, 13] and masking strategies [38, 51, 25, 13], there is still huge scope for improvement to bridge the gap between self-supervised and supervised methods. This paper aims to reduce that gap by incorporating a novel relational self-attention and data augmentation utilising learnt depth.

Fig. 1: Depth predicted from our network

We utilise a ResNet18 encoder for our ablation and quantitative analysis and show substantial improvements in learning depth. Our main contributions are as follows:

  • We introduce data augmentation as a supervisory loss, improving depth at occluded edges and image boundaries while making the model more robust to illumination changes and image noise.

  • Our self-attention module learns optimal feature relations that drastically improve our depth prediction.

  • We show that our novel progressive learning strategy learns robust scale-invariant features leading to significant improvements in depth prediction while saving huge computational overhead of training a high resolution model from scratch.

Our network can predict state-of-the-art depth while having a significantly lower number of parameters.

Fig. 2: Architecture diagram

II Related Work

Depth estimation from a single colored image is a challenging task due to the ill-posed nature of the problem: the same colored image can be produced by innumerable plausible depth maps. Over the last few years, learning models have proven successful in effectively learning and exploiting the relationship between color images and their corresponding depths.

II-A Supervised Depth Estimation


Eigen et al. [7] was one of the first works to explore end-to-end supervised learning of depth from a single colored image using a multi-scale deep neural network. They trained a model to learn directly from raw colored images and their corresponding depths. Several different approaches have been proposed since then.

[34] introduced a patch-based model which generated super-pixels to combine local information. [18] used a non-parametric scene sampling pipeline where candidate images from the dataset were matched with target image using high level image and optical flow features.

Acquiring large amounts of ground truth data in the real world is a challenge and this creates large overheads, both in terms of cost and time as it requires use of lasers like LIDAR. This is the reason that supervised models, despite their superior performance, are not universally applicable. As a result several works have turned to unsupervised or weakly supervised models and use of synthetically generated data.

[44] used the real-world size of objects to compute depth maps. They used geometric relations to calculate depth maps, which were then refined using energy function optimization. [3] used relative depth annotation instead of actual ground truth depth data. They learned to estimate metric depth using relative depth annotations. These works, however, still require supervision signals in the form of additional sets of depths or other annotations. Generating large amounts of realistic synthetic data that includes the many variations found in the real world is not a trivial task either.

II-B Self-supervised Depth Estimation

A more promising substitute for supervised and weakly supervised models is the self supervised approach. Either stereo or monocular inputs are used for these models. Depth, hallucinated by the model, is used to warp the source image into the target frame. The difference between the reconstructed and reference frame is penalised and added as a reconstruction loss to provide a supervisory signal to the model.

II-B1 Self-supervised Stereo Training

For self-supervised stereo depth estimation, synchronized stereo image pairs are fed into the model. The model estimates the disparity, or inverse depth, between the two frames and in the process learns to predict depth from single images. Garg [10] presented an approach that reconstructed left images by inverse warping the right images using the predicted depth and known camera extrinsics. The photometric error between the reconstructed and original images was used to train the encoder. [12] incorporated a left-right consistency term amongst other losses. [43] utilised stereo matching to provide sparse supervision in the form of depth hints to predict depth. Since then several works have refined self-supervised stereo training of depth. However, some problems still plague stereo estimation. Occlusion drastically affects stereo frames due to the fixed baseline between cameras. Also, wide-baseline stereo data might not be available in all real-world scenarios, e.g. mobile phone cameras.

II-B2 Self-supervised Monocular Training

Self-supervised monocular depth estimation is naturally unimpeded by many of these restraints. In monocular training, temporally consecutive frames are fed into the model instead of stereo pairs. The model has to learn pose in addition to depth due to the unknown and varying baseline. Zhou et al. [51] provided one of the initial works in this domain, using an end-to-end learning approach with supervision provided by view synthesis. They used two separate networks for learning depth and pose. [13] used a minimum reprojection loss to handle occlusion and prevent the network from learning erroneously from occluded pixels. They computed an automasking framework to prevent learning depth from stationary pixels (static camera). Several works have also incorporated optical flow estimation in their pipelines and exploited relationships between depth, pose and optical flow to achieve more accurate results. [52] proposed a cross-task consistency loss, [31] performed motion segmentation, [48] decomposed motion into rigid and non-rigid components and used a residual flow learning module to handle non-rigid cases, [4] used losses that ensured 3D structural consistency and enforced geometric constraints, Net [5] fuses semantic constraints into the depth framework, and Shu [36] introduces a feature-metric loss computed from FeatureNet to improve depth. Huynh [16] formulates a depth attention volume for guiding monocular depth. Xian [45] constructs a structure-guided ranking loss for self-supervised learning of depth.

II-B3 Self-Attention in Deep Learning

Wang [41] introduced self-attention as a non-local operation, computing the response at a spatial position as a weighted sum of the features at all positions. Building on the same framework, Zhang [50] utilised self-attention in GANs for image generation tasks. Fu [9] formulated a dual attention network for semantic segmentation that, unlike traditional works which focus on multi-scale feature fusion, uses self-attention to adaptively integrate local features with their global dependencies. Since then, self-attention has been utilised in medical applications [28], video recognition, semantic segmentation [15], object detection [29] and video understanding [22]. Unlike a convolutional operation, self-attention provides the ability to learn features and dependencies in non-contiguous regions, making it an important building block of deep learning frameworks. We formulate a relational self-attention mechanism, learning from relational reasoning [33], to embed better context in the self-attention framework. Our model achieves better accuracy without learning optical flow or motion segmentation, by encompassing robust geometric constraints, a relational self-attention framework and augmentation for depth supervision, along with our progressive learning strategy.




Fig. 3: Qualitative results on the KITTI Eigen split [6] test set, compared with MD2 MS [13], MD2 M [13] and DDV M [17]. Our models perform better on thinner objects such as trees, signs and bollards, as well as being better at delineating difficult object boundaries. The depth of far objects including sky is further improved.

III Methodology

Self-supervised learning utilising photometric consistency has become the de-facto standard for learning depth without ground truth data. The problem of depth prediction is transformed into a problem of view synthesis, where the goal is to use the predicted depth of the input image to find per-pixel correspondences for reconstructing the input image from another view. By solving for view synthesis, we can train our network to predict depth. We utilise the same approach while incorporating multiple novel data-driven and geometric constraints. Here, we describe a model that jointly learns to predict depth and pose. The network comprises a shared ResNet18 encoder, a depth decoder and a pose sub-network. The encoder takes an RGB image as input and extracts features that are utilised by both the depth decoder and the pose sub-network. For training our network we use a 3-frame sequence, where the middle frame is the target image I_t and the remaining two frames are the source images I_s1 and I_s2. We predict the target depth D_t, the source depths D_s1 and D_s2, and the poses T_t→s1 and T_t→s2, where each pose is the 6DoF transformation from the target to the corresponding source.

We first outline our training model architecture along with the notation required to formulate the training losses, then describe in detail the geometric constraints of depth prediction. We then describe in detail the augmentation loss framework and the self-attention module, and finally delineate each loss along with its significance in our algorithm.

III-A Training Model Architecture

As shown in Figure 2, our model consists of a ResNet18 encoder [12] taking an RGB image as input. Features extracted from the source and target images are concatenated and fed to the pose sub-network to compute the 6x1 egomotion vector. Our depth decoder takes in the features of the target image to predict its depth. The encoder-decoder framework is similar to the U-Net architecture introduced by [32], which enables us to encapsulate both global as well as local features while predicting depth at 4 scales. The relational attention module takes the encoder's features as input and generates attention maps that are concatenated with the original features and fed to the depth decoder, as in Figure 2. The pose network comprises 4 convolutional layers producing a 6x1 output vector [40] containing rotation (3x1) and translation (3x1) information, as shown in Figure 2. We use Sigmoid activation at the depth outputs and ELU activation everywhere else [12]. The target image I_t and its corresponding predicted depth D_t are then processed by the augmentation pipeline to obtain the transformed augmented image I_aug and the true augmented depth D_aug. I_aug is then fed to the network to predict the output augmented depth D̂_aug. The model returns D_t, D_aug and D̂_aug for computing the training losses. The target depth warps the source image to compute the synthetic target image, using bi-linear sampling to sample the source images. While testing, the network simply computes D_t from I_t.
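The view-synthesis step above can be sketched as follows. This is a minimal NumPy illustration under stated assumptions: nearest-neighbour sampling stands in for the bilinear sampling used in the paper, and the helper names (`backproject`, `warp_source`) are hypothetical, not the paper's.

```python
import numpy as np

def backproject(depth, K_inv):
    """Lift each pixel to a 3D camera-frame point using the predicted depth."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    pix = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])  # homogeneous pixel coords
    return (K_inv @ pix) * depth.ravel()                      # 3 x (h*w) 3D points

def warp_source(src, depth_t, K, T):
    """Synthesise the target view by sampling the source image at projected coords."""
    h, w = depth_t.shape
    pts = backproject(depth_t, np.linalg.inv(K))   # 3D points in the target frame
    pts = T[:3, :3] @ pts + T[:3, 3:4]             # move them into the source frame
    proj = K @ pts
    z = np.clip(proj[2], 1e-6, None)               # guard against division by zero
    u, v = proj[0] / z, proj[1] / z
    # nearest-neighbour sampling keeps the sketch short; the paper uses bilinear
    ui = np.clip(np.round(u).astype(int), 0, w - 1)
    vi = np.clip(np.round(v).astype(int), 0, h - 1)
    return src[vi, ui].reshape(h, w)
```

With an identity pose and intrinsics and unit depth, the warp reduces to the identity, which is a convenient sanity check.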

Method Abs Rel Sq Rel RMSE RMSE log δ<1.25 δ<1.25² δ<1.25³
Zhou et al[51] 0.183 1.595 6.709 0.270 0.734 0.902 0.959
Yang et al[47] 0.182 1.481 6.501 0.267 0.725 0.906 0.963
Mahjourian et al[26] 0.163 1.240 6.220 0.250 0.762 0.916 0.968
Geonet[48] 0.149 1.060 5.567 0.226 0.796 0.935 0.975
DDVO[40] 0.151 0.125 5.583 0.228 0.81 0.936 0.974
LEGO[46] 0.162 1.352 6.276 0.252 - - -
DF-Net[49] 0.150 0.124 5.507 0.223 0.806 0.933 0.973
Ranjan et al[31] 0.148 0.149 5.464 0.226 0.815 0.935 0.973
EPC++[25] 0.141 1.029 5.350 0.216 0.816 0.941 0.976
Struct2Depth(M)[2] 0.141 1.025 5.290 0.215 0.816 0.945 0.979
Monodepth2[13] 0.115 0.882 4.701 0.190 0.879 0.961 0.982
DDV[17] 0.106 0.861 4.699 0.185 0.889 0.962 0.982
Proposed Approach 0.108 0.745 4.436 0.181 0.889 0.966 0.984
TABLE I: Self-supervised depth prediction results on the KITTI dataset [11], trained at 1024x384 resolution. Results on the Eigen split [7] with depth capped at 80m, as described in [7].

III-B Constraints for depth prediction

In this section, we describe the formulation of various loss functions used in our network for self-supervised learning of depth and pose.

III-B1 Minimum Photometric Loss

As described by [13], this loss is a slight variation of the standard photometric loss. Instead of taking the per-pixel average of the photometric loss over all sources, we compute the per-pixel minimum of the photometric loss over all sources. This successfully tackles scenarios where a target pixel is visible in one source image but occluded in the other, back-propagating only the minimum error and thereby ignoring the erroneous one:

L_p = min_s pe(I_t, I_s→t)
Here, the photometric error pe is defined by a weighted combination of L1 loss and Structural Similarity (SSIM) [42], similar to [13][4][31]:

pe(I_a, I_b) = (α/2)(1 − SSIM(I_a, I_b)) + (1 − α)|I_a − I_b|

where α weights the SSIM term against the L1 term. Similar to [13], we apply a per-pixel binary mask μ to the computed losses. The mask is generated by comparing the photometric error between the source and target frames with that between the synthesised source and target frames:

μ = [ min_s pe(I_t, I_s→t) < min_s pe(I_t, I_s) ]
This eliminates static pixels from corrupting the loss, and the network skips learning depth altogether if the camera isn't moving. We observe that although this improves depth prediction drastically, it leads to random white noise around static regions and makes the learning of depth more sensitive to noisy images. This happens because the mask doesn't consider neighbouring pixels while comparing photometric errors and simply takes a threshold of per-pixel values. To alleviate this problem, we enforce an L1 regularisation over the inverse of the mask μ, thereby motivating the mask to be positive for this sparse set of pixels:

L_mask = ||1 − μ||₁
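The minimum photometric loss and auto-mask described above can be sketched in NumPy as follows. For brevity this keeps only the L1 part of pe (the SSIM term is omitted), and the function names are illustrative rather than the paper's:

```python
import numpy as np

def photometric_error(a, b):
    # Per-pixel L1 error; the full pe additionally blends in an SSIM term.
    return np.abs(a - b)

def min_reprojection_with_automask(target, warped_sources, raw_sources):
    """Minimum-over-sources photometric loss with Monodepth2-style automasking."""
    # Per-pixel minimum over warped sources handles occlusion between views.
    warped_err = np.min([photometric_error(target, w) for w in warped_sources], axis=0)
    # Error against the un-warped sources: small where the scene/camera is static.
    ident_err = np.min([photometric_error(target, s) for s in raw_sources], axis=0)
    # Keep only pixels where warping actually reduced the error (camera motion helps).
    mask = (warped_err < ident_err).astype(np.float64)
    loss = (mask * warped_err).sum() / max(mask.sum(), 1.0)
    # L1 regularisation over the inverted mask, as in L_mask above.
    mask_reg = np.abs(1.0 - mask).mean()
    return loss, mask, mask_reg
```

When warping clearly improves the reconstruction everywhere, the mask is all ones and the regulariser vanishes.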
We compute a first-order gradient smoothness loss [12] over the mean-normalized inverse depth [40] to ensure that the predicted depth is locally smooth as well as consistent in textured regions:

L_s = |∂_x d*| e^(−|∂_x I|) + |∂_y d*| e^(−|∂_y I|)

where ∂_x and ∂_y are gradients in the horizontal and vertical directions respectively, and d* is the mean-normalized inverse depth.
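The edge-aware smoothness term can be sketched as below; a minimal NumPy version for a single-channel disparity map and grayscale image (the loss is assumed to be averaged over pixels):

```python
import numpy as np

def smoothness_loss(disp, img):
    """First-order edge-aware smoothness over mean-normalised inverse depth."""
    d = disp / (disp.mean() + 1e-7)            # mean normalisation of inverse depth
    dx = np.abs(d[:, 1:] - d[:, :-1])          # horizontal disparity gradients
    dy = np.abs(d[1:, :] - d[:-1, :])          # vertical disparity gradients
    ix = np.abs(img[:, 1:] - img[:, :-1])      # image gradients gate the penalty,
    iy = np.abs(img[1:, :] - img[:-1, :])      # so depth edges at image edges are free
    return (dx * np.exp(-ix)).mean() + (dy * np.exp(-iy)).mean()
```

A perfectly constant disparity map incurs zero penalty regardless of image texture.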

III-B2 Data augmentation for depth supervision

Several works have utilised data augmentation [13, 30, 17] in their deep learning pipelines to make their networks more robust to challenging scenarios and invariant to the changes in noise, brightness and contrast that are common in the real world. Traditionally, pipelines performed data augmentation at the data loading stage and the augmented data was fed into the network for training. We instead utilise augmentation for generating augmented inputs and outputs that are used to train the network in a semi-supervised manner. We incorporate an augmentation loss, in the form of depth supervision, that improves the predicted depth. While training, in the first forward pass, the network takes I_t as input, giving the depth D_t as network output. We pass the pair (I_t, D_t) to the augmentation pipeline, applying identical random image cropping, flipping, skewing, scaling and affine transformations to both. Additionally, we apply random changes in brightness, jitter, gamma and saturation to the input image, and add random gaussian noise to it. The augmentation pipeline returns the augmented image I_aug and the true augmented depth D_aug. In the second forward pass, I_aug is fed to the network, generating the predicted augmented depth D̂_aug. Augmented depth maps generated in the first pass serve as ground truth for depth maps generated in the second pass. The augmentation loss minimises the difference between the output augmented depth and the true augmented depth, enforcing both depths to be consistent with each other:

L_aug = ||D̂_aug − D_aug||₁
Due to camera egomotion, occlusion is present at certain image boundaries. Rescaling and crop transformations randomly remove boundary regions from the image, while ensuring that its size remains the same. Thus, the boundaries of the augmented depth are more accurate due to a lower probability of occlusion.
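The two-pass scheme above can be sketched as follows. `depth_net` and `aug_fn` are stand-ins, not the paper's API: any depth predictor, and any transform applied identically to the image and its depth:

```python
import numpy as np

def augmentation_loss(depth_net, image, aug_fn):
    """Two-pass depth supervision: first-pass depth, augmented identically with
    the image, serves as ground truth for the second pass."""
    depth = depth_net(image)                         # first forward pass
    img_aug, depth_aug_true = aug_fn(image, depth)   # identical geometric transform
    depth_aug_pred = depth_net(img_aug)              # second forward pass
    # The first-pass depth is treated as ground truth (no gradient through it).
    return np.abs(depth_aug_pred - depth_aug_true).mean()
```

For a transform-equivariant predictor (e.g. a toy per-pixel function under a horizontal flip) the loss is exactly zero, which makes the consistency objective easy to verify.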

III-B3 Relational Self-Attention

The relational self-attention block takes as input the feature x from the ResNet18 encoder and computes a self-attention term that is added as a residual connection to the input feature x to compute the output feature y. The operation can be summarised as follows:

y_i = x_i + (1/N) Σ_j f_p(q_i ⊕ k_j) v_j
Here, f_p is a weight factor that projects the concatenated vector to a scalar by performing a convolution with a single output channel, ⊕ denotes concatenation, and N defines the number of positions in x. The embedding functions are defined by 2D convolution operations as shown in Figure 2. The input x generates the projection, query, key and value embeddings as

p = W_p x,  q = W_q x,  k = W_k x,  v = W_v x

where W_p, W_q, W_k and W_v are weight matrices to be learnt. The pairwise relation between the query and key is projected and multiplied by the value to compute our relational self-attention, which is then element-wise added to the input to give the output of our attention block.

The output is then concatenated with the encoder's features and utilised by the decoder to compute multi-scale depth.
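One way to realise this block is sketched below in NumPy, over a flattened feature map of N positions. This is an interpretation under stated assumptions: the paper's convolutional parameterisation differs, softmax normalisation is used in place of the 1/N factor, and all weight names are illustrative.

```python
import numpy as np

def relational_self_attention(x, Wq, Wk, Wv, wp):
    """Relational attention sketch: pairwise query-key relations are projected
    to scalars, normalised over positions, and used to weight the values."""
    n, c = x.shape                          # n positions, c channels
    q, k, v = x @ Wq, x @ Wk, x @ Wv        # query / key / value embeddings
    # Scalar relation for every pair (i, j): projection of concatenated [q_i, k_j].
    pair = np.concatenate([np.repeat(q[:, None, :], n, axis=1),
                           np.repeat(k[None, :, :], n, axis=0)], axis=-1)
    rel = pair @ wp                         # (n, n) relation matrix
    attn = np.exp(rel - rel.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True) # normalise over all positions j
    return x + attn @ v                     # residual connection to the input
```

With a zero value projection the block reduces to the identity, reflecting the residual formulation.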

III-B4 Final Training Loss

We combine the photometric and smoothness losses with our data augmentation loss, along with the regularization over the mask, to obtain our final objective:

L = L_p + λ_s L_s + λ_aug L_aug + λ_m L_mask
All our losses are computed per-pixel and averaged over the entire image, scales and batch.
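The combination is a plain weighted sum; a one-line sketch (the λ values below are purely illustrative placeholders, not the paper's weights):

```python
def total_loss(l_p, l_s, l_aug, l_mask, lam_s=1e-3, lam_aug=1.0, lam_m=1e-2):
    # Weighted sum of photometric, smoothness, augmentation and mask terms.
    return l_p + lam_s * l_s + lam_aug * l_aug + lam_m * l_mask
```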

Method Type Abs Rel Sq Rel RMSE log10
Karsch [19] D 0.428 5.079 8.389 0.149
Liu [23] D 0.475 6.562 10.05 0.165
Laina [21] D 0.204 1.840 5.683 0.084
Monodepth [12] S 0.544 10.94 11.760 0.193
Zhou [51] M 0.383 5.321 10.470 0.478
DDVO [39] M 0.387 4.720 8.090 0.204
Monodepth2 [13] M 0.322 3.589 7.417 0.163
DDV [17] M 0.297 2.902 7.013 0.158
Proposed Approach M 0.289 2.552 6.869 0.155
TABLE II: Make3D[35] results. All self-supervised monocular (M) methods use median scaling.

IV Experiments and Results

This section introduces the dataset and describes the training details. We describe in detail various comparative qualitative and quantitative studies along with an ablation study undertaken for validation and show that our method surpasses all other existing related methods.

Aug Loss Attention Abs Rel Sq Rel RMSE RMSE log δ<1.25 δ<1.25² δ<1.25³
No No 0.115 0.919 4.854 0.194 0.877 0.958 0.980
Yes No 0.113 0.837 4.726 0.189 0.879 0.961 0.982
No Yes 0.111 0.827 4.742 0.189 0.878 0.960 0.982
Yes Yes 0.111 0.817 4.685 0.188 0.883 0.961 0.982
TABLE III: Ablation study for depth prediction at 640x192 image resolution using ResNet18 Encoder on Eigen split[7]. We observe that the combination of Augmentation Loss and our Attention framework gives us the best depth results.
Method Backbone Abs Rel Sq Rel RMSE RMSE log δ<1.25 δ<1.25² δ<1.25³
Monodepth2 [13] ResNet18 0.115 0.902 4.847 0.193 0.877 0.960 0.981
DDV [17] ResNet18 0.111 0.941 4.817 0.189 0.885 0.961 0.981
Proposed Approach ResNet18 0.111 0.817 4.685 0.188 0.883 0.961 0.982
Proposed Approach 1024x384 ResNet18 0.108 0.745 4.436 0.181 0.889 0.966 0.984
TABLE IV: Comparing our method at 640x192 resolution with other methods utilising same network backbone.
Depth Decoder’s Input Abs Rel Sq Rel RMSE RMSE log δ<1.25 δ<1.25² δ<1.25³
Attention + No feature concat 0.113 0.879 4.777 0.190 0.880 0.959 0.981
Attention in all skip connections + Feature concat 0.112 0.856 4.699 0.188 0.880 0.961 0.982
Attention + feature concat in all skip connections 0.112 0.866 4.742 0.189 0.879 0.960 0.982
Attention + Feature concat 0.111 0.817 4.685 0.188 0.883 0.961 0.982
TABLE V: Comparing our method at 640x192 resolution with multiple variations of self-attention features. Augmentation loss is applied during training. Feature is the ResNet18 encoder’s output feature.

IV-A Dataset

Our model was trained on the KITTI 2015 dataset [11]. This dataset comprises videos captured by a camera mounted on a car moving through the German city of Karlsruhe and is widely recognized and often used for tasks like the estimation of depth, optical flow and the car's egomotion. We used the Eigen test split [7] of this dataset and tested our model using the ground truth labels present in it. The test set consists of 697 images, and it is ensured that frames similar to those present in the test set are removed from the training set. We also test our trained model on the 134 images in the Make3D dataset [35].

IV-B Parameter Settings

Similar to other self-supervised models [13][31][4], we use ImageNet weights for initialising our network and train our model using a single NVIDIA 2080Ti GPU. Three temporally consecutive images are fed into the model and the Adam optimizer is used, with the batch size set to 12. The weights of the different loss terms are set while optimizing our network. While preparing the training data, static frames are removed from the dataset as proposed by Zhou et al. [51]. Basic augmentation in the form of random cropping, color jittering, resizing and flipping is also performed as part of our data preparation pipeline. We train our network over two phases in a progressive manner. In the first phase, images of 640x192 resolution are fed into the network. After training it for 50 epochs, in the second phase, we freeze the pose encoder, feed the higher resolution 1024x384 images to the depth network and train our model for 5 epochs with batch size 2. Progressive training aids in further improvement and faster convergence of our depth prediction model at the higher resolution, as shown in Table IV.


IV-C Main Results

We compare our results with other recent models in Table I. These results show that our monocular model is able to comprehensively outperform all existing state-of-the-art self-supervised monocular methods. Our model is even able to surpass methods that incorporate optical flow prediction into their pipelines [48], [4], [31], while having a lower number of training parameters. During evaluation, as is common practice [7], we cap depth at 80m. Table IV shows the comparison with DDV [17] and Monodepth2 [13] trained using features from the same ResNet18 encoder. Our method has better RMSE, Sq Rel and Abs Rel than the other similar methods. This shows that our model performs significantly better in all metrics on the Eigen split of the KITTI 2015 dataset.

IV-D Qualitative Analysis

Figure 3 displays the qualitative improvements of our method over the baseline Monodepth2 (MD2) [13] and DDV [17]. Our algorithm retains structural details in objects like poles, sign boards and trees while learning smooth depth over the entire scene. We also have the least noise in the disparity values of the infinitely distant sky.

IV-E Make3D

Table II shows results of our model trained on the KITTI dataset and tested on the Make3D dataset [35]. We use the crop defined by [51] and apply depth median scaling for fair comparison. The table shows our method's superior performance over other self-supervised methods while bridging the gap to supervised ones [21].

IV-F Ablation Study of Losses

We also undertake an exhaustive quantitative comparison of all the losses to analyze the impact of each loss component. Table III shows the different combinations of losses applied and the corresponding results achieved by our model. It is evident from the table that with just the inclusion of the augmentation loss, we get significant gains over the baseline. The augmentation loss makes the model more robust to variations in brightness, contrast and image noise. Supervising the network with the augmentation loss, utilising the true augmented depth, drastically improves the depth prediction at occluded regions including image boundaries. Similarly, appearance and color based transforms help the network learn to predict more consistent and robust depth which is less affected by noise and illumination changes. We observe that adding reflection padding to our network doesn't have a noticeable effect on the depth prediction results, as the augmentation loss already improves depth at image boundaries. We also observe that attention improves the results more than augmentation alone, and the combination of both losses yields a multi-fold improvement over the baseline. We also tried replacing the skip connections by the attention module, but the added complexity was drastically high with no significant improvement in depth prediction. As depicted in Table V, concatenating the encoder feature with the attention output gave better results than simply passing the attention block's output to the decoder. This tells us that attention, though significant, isn't sufficient to achieve the optimal result. Also, increasing the augmentation loss weight induced texture-copy artifacts, while decreasing it led to minimal improvement in accuracy.

As observed by Monodepth2 [13], it is necessary to handle static frames, i.e. frames where either the camera is stationary or regions such as the sky do not change across consecutive frames. Automasking masks out these areas and prevents the model from learning erroneous depth. To enforce the mask to be consistent and smooth, and to eliminate noisy values, we apply L1 regularisation over the inverse of our mask. This slightly increases the number of pixels to be evaluated and reduces artifacts over still regions like the sky. Our method predicts superior results both qualitatively and quantitatively when compared to other self-supervised monocular depth prediction methods.

V Conclusion

We propose a self-supervised model which utilises relational self-attention for jointly learning depth and camera egomotion. The model is able to predict accurate and sharp depth estimates by incorporating data augmentation as depth supervision. Our algorithm predicts state-of-the-art depth on the KITTI benchmark [11]. In future, we shall utilise optical flow for motion segmentation, pretrained models and semantic cues for further strengthening the depth of moving objects. Architectural innovations in deep learning such as vision transformers, along with cues like optical flow and semantic information present in the scene, can further improve robustness and consistency in predicting depth.


  • [1] J. L. G. Bello and M. Kim (2019)

    Deep 3d-zoom net: unsupervised learning of photo-realistic 3d-zoom

    ArXiv abs/1909.09349. Cited by: §I.
  • [2] V. Casser, S. Pirk, R. Mahjourian, and A. Angelova (2019-07) Depth prediction without the sensors: leveraging structure for unsupervised learning from monocular videos.

    Proceedings of the AAAI Conference on Artificial Intelligence

    33, pp. 8001–8008.
    External Links: ISSN 2159-5399, Document Cited by: TABLE I.
  • [3] W. Chen, Z. Fu, D. Yang, and J. Deng (2016) Single-image depth perception in the wild. External Links: 1604.03901 Cited by: §II-A.
  • [4] Y. Chen, C. Schmid, and C. Sminchisescu (2019-10) Self-supervised learning with geometric constraints in monocular video: connecting flow, depth, and camera. 2019 IEEE/CVF International Conference on Computer Vision (ICCV). External Links: ISBN 9781728148038, Document Cited by: §II-B2, §III-B1, §IV-B, §IV-C.
  • [5] B. Cheng, I. S. Saggu, R. Shah, G. Bansal, and D. Bharadia (2020) Net: semantic-aware self-supervised depth estimation with monocular videos and synthetic data. In European Conference on Computer Vision, pp. 52–69. Cited by: §II-B2.
  • [6] D. Eigen and R. Fergus (2015) Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture. In ICCV, Cited by: Fig. 3.
  • [7] D. Eigen, C. Puhrsch, and R. Fergus (2014) Depth map prediction from a single image using a multi-scale deep network. In Advances in neural information processing systems, pp. 2366–2374. Cited by: §I, §II-A, TABLE I, §IV-A, §IV-C, TABLE III.
  • [8] H. Fu, M. Gong, C. Wang, K. Batmanghelich, and D. Tao (2018) Deep ordinal regression network for monocular depth estimation. In

    Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition

    pp. 2002–2011. Cited by: §I.
  • [9] J. Fu, J. Liu, H. Tian, Y. Li, Y. Bao, Z. Fang, and H. Lu (2019) Dual attention network for scene segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3146–3154. Cited by: §II-B3.
  • [10] R. Garg, V. K. Bg, G. Carneiro, and I. Reid (2016) Unsupervised cnn for single view depth estimation: geometry to the rescue. In European conference on computer vision, pp. 740–756. Cited by: §I, §II-B1.
  • [11] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun (2013) Vision meets robotics: the kitti dataset. The International Journal of Robotics Research 32 (11), pp. 1231–1237. Cited by: TABLE I, §IV-A, §V.
  • [12] C. Godard, O. Mac Aodha, and G. J. Brostow (2017) Unsupervised monocular depth estimation with left-right consistency. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 270–279. Cited by: §I, §I, §II-B1, §III-A, §III-B1, TABLE II.
  • [13] C. Godard, O. Mac Aodha, M. Firman, and G. J. Brostow (2019) Digging into self-supervised monocular depth estimation. In Proceedings of the IEEE international conference on computer vision, pp. 3828–3838. Cited by: §I, Fig. 3, §II-B2, §III-B1, §III-B2, TABLE I, TABLE II, §IV-B, §IV-C, §IV-D, §IV-F, TABLE IV.
  • [14] V. Guizilini, R. Ambrus, S. Pillai, A. Raventos, and A. Gaidon (2020) 3D packing for self-supervised monocular depth estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2485–2494. Cited by: §I.
  • [15] Z. Huang, X. Wang, L. Huang, C. Huang, Y. Wei, and W. Liu (2019) CCNet: criss-cross attention for semantic segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 603–612. Cited by: §II-B3.
  • [16] L. Huynh, P. Nguyen-Ha, J. Matas, E. Rahtu, and J. Heikkilä (2020) Guiding monocular depth estimation using depth-attention volume. In European Conference on Computer Vision, pp. 581–597. Cited by: §II-B2.
  • [17] A. Johnston and G. Carneiro (2020) Self-supervised monocular trained depth estimation using self-attention and discrete disparity volume. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4756–4765. Cited by: §I, Fig. 3, §III-B2, TABLE I, TABLE II, §IV-C, §IV-D, TABLE IV.
  • [18] K. Karsch, C. Liu, and S. B. Kang (2014-11) Depth transfer: depth extraction from video using non-parametric sampling. IEEE Transactions on Pattern Analysis and Machine Intelligence 36 (11), pp. 2144–2158. External Links: ISSN 2160-9292, Document Cited by: §II-A.
  • [19] K. Karsch, C. Liu, and S. B. Kang (2014) Depth transfer: depth extraction from video using non-parametric sampling. PAMI. Cited by: TABLE II.
  • [20] Y. Kim and C. Yim (2020) Image dehaze method using depth map estimation network based on atmospheric scattering model. 2020 International Conference on Electronics, Information, and Communication (ICEIC), pp. 1–3. Cited by: §I.
  • [21] I. Laina, C. Rupprecht, V. Belagiannis, F. Tombari, and N. Navab (2016) Deeper depth prediction with fully convolutional residual networks. In 3DV, Cited by: TABLE II, §IV-E.
  • [22] J. Lin, C. Gan, and S. Han (2019) TSM: temporal shift module for efficient video understanding. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 7083–7093. Cited by: §II-B3.
  • [23] M. Liu, M. Salzmann, and X. He (2014) Discrete-continuous depth estimation from a single image. In CVPR, Cited by: TABLE II.
  • [24] Y. Liu, Y. Tai, J. Li, S. Ding, C. Wang, F. Huang, D. Li, W. Qi, and R. Ji (2019) Aurora guard: real-time face anti-spoofing via light reflection. ArXiv abs/1902.10311. Cited by: §I.
  • [25] C. Luo, Z. Yang, P. Wang, Y. Wang, W. Xu, R. Nevatia, and A. Yuille (2018) Every pixel counts++: joint learning of geometry and motion with 3d holistic understanding. arXiv preprint arXiv:1810.06125. Cited by: §I, TABLE I.
  • [26] R. Mahjourian, M. Wicke, and A. Angelova (2018) Unsupervised learning of depth and ego-motion from monocular video using 3d geometric constraints. External Links: 1802.05522 Cited by: TABLE I.
  • [27] A. Mauri, R. Khemmar, B. Decoux, N. Ragot, R. Rossi, R. Trabelsi, R. Boutteau, J. Ertaud, and X. Savatier (2020) Deep learning for real-time 3d multi-object detection, localisation, and tracking: application to smart mobility. Sensors (Basel, Switzerland) 20. Cited by: §I.
  • [28] O. Oktay, J. Schlemper, L. L. Folgoc, M. Lee, M. Heinrich, K. Misawa, K. Mori, S. McDonagh, N. Y. Hammerla, B. Kainz, et al. (2018) Attention u-net: learning where to look for the pancreas. arXiv preprint arXiv:1804.03999. Cited by: §II-B3.
  • [29] J. Pang, K. Chen, J. Shi, H. Feng, W. Ouyang, and D. Lin (2019) Libra R-CNN: towards balanced learning for object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 821–830. Cited by: §II-B3.
  • [30] S. Pillai, R. Ambruş, and A. Gaidon (2019) SuperDepth: self-supervised, super-resolved monocular depth estimation. In 2019 International Conference on Robotics and Automation (ICRA), pp. 9250–9256. Cited by: §III-B2.
  • [31] A. Ranjan, V. Jampani, L. Balles, K. Kim, D. Sun, J. Wulff, and M. J. Black (2019) Competitive collaboration: joint unsupervised learning of depth, camera motion, optical flow and motion segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 12240–12249. Cited by: §II-B2, §III-B1, TABLE I, §IV-B, §IV-C.
  • [32] O. Ronneberger, P. Fischer, and T. Brox (2015) U-Net: convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 234–241. Cited by: §III-A.
  • [33] A. Santoro, D. Raposo, D. G. Barrett, M. Malinowski, R. Pascanu, P. Battaglia, and T. Lillicrap (2017) A simple neural network module for relational reasoning. arXiv preprint arXiv:1706.01427. Cited by: §II-B3.
  • [34] A. Saxena, M. Sun, and A. Y. Ng (2009-05) Make3D: learning 3d scene structure from a single still image. IEEE Trans. Pattern Anal. Mach. Intell. 31 (5), pp. 824–840. External Links: ISSN 0162-8828, Document Cited by: §II-A.
  • [35] A. Saxena, M. Sun, and A. Ng (2009) Make3d: learning 3d scene structure from a single still image. PAMI. Cited by: TABLE II, §IV-A, §IV-E.
  • [36] C. Shu, K. Yu, Z. Duan, and K. Yang (2020) Feature-metric loss for self-supervised learning of depth and egomotion. In European Conference on Computer Vision, pp. 572–588. Cited by: §II-B2.
  • [37] J. P. C. Valentin, A. Kowdle, J. T. Barron, N. Wadhwa, M. Dzitsiuk, M. Schoenberg, V. Verma, A. Csaszar, E. Turner, I. Dryanovski, J. Afonso, J. Pascoal, K. Tsotsos, M. Leung, M. Schmidt, O. G. Guleryuz, S. Khamis, V. Tankovich, S. R. Fanello, S. Izadi, and C. Rhemann (2018) Depth from motion for smartphone ar. ACM Transactions on Graphics (TOG) 37, pp. 1 – 19. Cited by: §I.
  • [38] S. Vijayanarasimhan, S. Ricco, C. Schmid, R. Sukthankar, and K. Fragkiadaki (2017) SfM-Net: learning of structure and motion from video. arXiv preprint arXiv:1704.07804. Cited by: §I.
  • [39] C. Wang, J. M. Buenaposada, R. Zhu, and S. Lucey (2018) Learning depth from monocular videos using direct methods. In CVPR, Cited by: TABLE II.
  • [40] C. Wang, J. Miguel Buenaposada, R. Zhu, and S. Lucey (2018) Learning depth from monocular videos using direct methods. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2022–2030. Cited by: §III-A, §III-B1, TABLE I.
  • [41] X. Wang, R. Girshick, A. Gupta, and K. He (2018) Non-local neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 7794–7803. Cited by: §II-B3.
  • [42] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli (2004) Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13 (4), pp. 600–612. Cited by: §III-B1.
  • [43] J. Watson, M. Firman, G. J. Brostow, and D. Turmukhambetov (2019-10) Self-supervised monocular depth hints. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Cited by: §II-B1.
  • [44] Y. Wu, S. Ying, and L. Zheng (2018) Size-to-depth: a new perspective for single image depth estimation. External Links: 1801.04461 Cited by: §II-A.
  • [45] K. Xian, J. Zhang, O. Wang, L. Mai, Z. Lin, and Z. Cao (2020) Structure-guided ranking loss for single image depth prediction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 611–620. Cited by: §II-B2.
  • [46] Z. Yang, P. Wang, Y. Wang, W. Xu, and R. Nevatia (2018) LEGO: learning edge with geometry all at once by watching videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 225–234. Cited by: TABLE I.
  • [47] Z. Yang, P. Wang, W. Xu, L. Zhao, and R. Nevatia (2017) Unsupervised learning of geometry with edge-aware depth-normal consistency. External Links: 1711.03665 Cited by: TABLE I.
  • [48] Z. Yin and J. Shi (2018) GeoNet: unsupervised learning of dense depth, optical flow and camera pose. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1983–1992. Cited by: §I, §II-B2, TABLE I, §IV-C.
  • [49] H. Zhan, R. Garg, C. Saroj Weerasekera, K. Li, H. Agarwal, and I. Reid (2018) Unsupervised learning of monocular depth estimation and visual odometry with deep feature reconstruction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 340–349. Cited by: TABLE I.
  • [50] H. Zhang, I. Goodfellow, D. Metaxas, and A. Odena (2019) Self-attention generative adversarial networks. In International Conference on Machine Learning, pp. 7354–7363. Cited by: §II-B3.
  • [51] T. Zhou, M. Brown, N. Snavely, and D. G. Lowe (2017) Unsupervised learning of depth and ego-motion from video. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1851–1858. Cited by: §I, §II-B2, TABLE I, TABLE II, §IV-B, §IV-E.
  • [52] Y. Zou, Z. Luo, and J. Huang (2018) DF-net: unsupervised joint learning of depth and flow using cross-task consistency. Lecture Notes in Computer Science, pp. 38–55. External Links: ISBN 9783030012281, ISSN 1611-3349, Document Cited by: §II-B2.