Depth prediction from a single image has become a problem of utmost importance in the computer vision community with the advent of deep learning. Depth prediction provides solutions for several applications including smart mobility, smartphone AR, 3D zooming, face anti-spoofing, image dehazing, etc. Humans perceive depth in the visible world by utilising cues like occlusion, texture differences, relative scale of neighbouring objects, and lighting and shading variations, along with object semantics.
Multi-view and stereo methods are computationally expensive and have high memory overheads. Depth from a single image drastically reduces these complexities and is favourable for real-time systems. Deep learning provides the tools to predict depth from a single image by transforming the task into a learning problem[7, 8], given ground-truth depth annotations. However, capturing vast amounts of ground-truth data in different scenarios is a formidable task. Self-supervision for computing depth eliminates this limitation by utilising the photometric warp for learning depth[12, 10].
Learning from a monocular sequence is challenging due to scale ambiguity and unknown camera pose. Thus, there’s an explicit need to compute camera egomotion[13, 48]. The necessity of joint learning for depth and egomotion means that the quality of depth is highly dependent on the correctness of the camera pose. Also, the static-scene assumption in the self-supervised learning paradigm leads to holes and aberrations in pixels belonging to a moving object in the scene. Occlusions at image boundaries make it difficult to learn depth near boundary regions (the bottom image region for a forward-moving camera). Although there have been innovations in deep learning architectures[14, 17, 12, 13] and masking strategies[38, 51, 25, 13], there is still huge scope for improvement to bridge the gap between self-supervised and supervised methods. This paper aims to reduce that gap by incorporating a novel relational self-attention and data augmentation utilising learnt depth.
We utilise a ResNet18 encoder for our ablation and quantitative analysis and show substantial improvements in learning depth. Our main contributions are as follows:
We introduce data augmentation as a supervisory loss, improving depth at occluded edges and image boundaries while making the model more robust to illumination changes and image noise.
Our self-attention module learns optimal feature relations that drastically improve our depth prediction.
We show that our novel progressive learning strategy learns robust scale-invariant features leading to significant improvements in depth prediction while saving huge computational overhead of training a high resolution model from scratch.
Our network can predict state-of-the-art depth while having significantly lower number of parameters.
II Related Work
Depth estimation from a single colored image is a challenging task due to the ill-posed nature of the problem: a single colored image can be mapped to innumerable plausible depth maps. Over the last few years, learning models have proven successful in effectively learning and exploiting the relationship between color images and their corresponding depths.
II-A Supervised Depth Estimation
One of the first works to explore end-to-end supervised learning of depth from a single colored image used a multi-scale deep neural network, training a model directly on raw colored images and their corresponding depths. Several different approaches have been proposed since then: a patch-based model generated super-pixels to combine local information, while a non-parametric scene-sampling pipeline matched candidate images from the dataset to the target image using high-level image and optical flow features.
Acquiring large amounts of ground-truth data in the real world is a challenge, creating large overheads in both cost and time, as it requires sensors such as LiDAR. This is the reason that supervised models, despite their superior performance, are not universally applicable. As a result, several works have turned to unsupervised or weakly supervised models and the use of synthetically generated data.
One approach used the real-world sizes of objects to compute depth maps, calculating depth from geometric relations and then refining it using energy-function optimization. Another used relative depth annotations instead of actual ground-truth depth data, learning to estimate metric depth from those relative annotations. These works, however, still require supervision signals in the form of additional depths or other annotations. Generating large amounts of realistic synthetic data that covers the variations found in the real world is not a trivial task either.
II-B Self-supervised Depth Estimation
A more promising substitute for supervised and weakly supervised models is the self-supervised approach. These models use either stereo or monocular inputs. Depth, hallucinated by the model, is used to warp the source image into the target frame. The difference between the reconstructed and reference frames is penalised and added as a reconstruction loss to provide a supervisory signal to the model.
II-B1 Self-supervised Stereo Training
For self-supervised stereo depth estimation, synchronized stereo image pairs are fed into the model. The model estimates disparity, or inverse depth, between the two frames and in the process learns to predict the depth of single images. Garg presented an approach that reconstructed left images by inverse-warping the right images using the predicted depth and known camera extrinsics. The photometric error between the reconstructed and the original images was used to train the encoder. Later work incorporated a left-right consistency term amongst other losses, and stereo matching has been utilised to provide sparse supervision in the form of depth hints. Since then, several works have refined self-supervised stereo training of depth. However, some problems still plague stereo estimation. Occlusion drastically affects stereo frames due to the fixed baseline between cameras. Also, wide-baseline stereo data might not be available in all real-world scenarios, e.g., mobile phone cameras.
II-B2 Self-supervised Monocular Training
Self-supervised monocular depth estimation is naturally unimpeded by many of these restraints. In monocular training, temporally consecutive frames are fed into the model instead of stereo pairs. Due to the unknown and varying baseline, the model has to learn pose in addition to depth. Zhou et al. provided one of the initial works in this domain, using an end-to-end learning approach with supervision provided by view synthesis and two separate networks for learning depth and pose. A minimum reprojection loss was later used to handle occlusion and prevent the network from learning erroneously from occluded pixels, together with an automasking framework to prevent learning depth from stationary pixels (static camera). Several works have also incorporated optical flow estimation in their pipelines, exploiting relationships between depth, pose and optical flow to achieve more accurate results: a cross-task consistency loss has been proposed; motion segmentation has been performed; motion has been decomposed into rigid and non-rigid components with a residual flow learning module handling the non-rigid cases; and losses enforcing 3D structural consistency and geometric constraints have been used. Semantic constraints have also been fused into the depth framework. Shu introduces a feature-metric loss computed from FeatureNet to improve depth, Huynh formulates a depth attention volume for guiding monocular depth, and Xian constructs a structure-guided ranking loss for self-supervised learning of depth.
II-B3 Self-Attention in Deep Learning
Wang introduced self-attention as a non-local operation, computing the response at a spatial position as a weighted sum of the features at all positions. Building on the same framework, Zhang utilised self-attention in GANs for image generation tasks. Fu formulated a dual attention network for semantic segmentation that, unlike traditional works focusing on multi-scale feature fusion, uses self-attention to adaptively integrate local features with their global dependencies. Since then, self-attention has been utilised in medical applications, video recognition, semantic segmentation, object detection and video understanding. Unlike a convolutional operation, self-attention provides the ability to learn features and dependencies in non-contiguous regions, making it an important building block of deep learning frameworks. We formulate a relational self-attention mechanism, learning from relational reasoning to embed better context in the self-attention framework. Our model achieves better accuracy without learning optical flow or motion segmentation, by encompassing robust geometric constraints, a relational self-attention framework and augmentation for depth supervision, along with our progressive learning strategy.
Self-supervised learning utilising photometric consistency has become the de-facto standard for learning depth without ground-truth data. The problem of depth prediction is transformed into a problem of view synthesis, where the goal is to use the predicted depth of the input image to find per-pixel correspondences for reconstructing the input image from another view. By solving for view synthesis, we can train our network to predict depth. We utilise the same approach while incorporating multiple novel data-driven and geometric constraints. Here, we describe a model that jointly learns to predict depth and pose. The network comprises a shared encoder, a depth decoder and a pose sub-network. The encoder takes an RGB image as input and extracts features that are utilised by both the depth decoder and the pose sub-network. For training our network we use a 3-frame sequence, where the middle frame is the target image and the remaining two frames are the source images. We predict the target depth, the source depths, and the two poses, where each pose is the 6DoF transformation from the target to a source.
We first outline our training model architecture along with the notation required in formulating losses for training our model, then describe in detail the geometric constraints of depth prediction. We then describe the augmentation loss framework and the self-attention module, and finally delineate each loss along with its significance in our algorithm.
III-A Training Model Architecture
Our model comprises a shared encoder taking an RGB image as input. Features extracted from the source and target images are concatenated and fed to the pose sub-network to compute the 6x1 egomotion vector. Our depth decoder takes in features of the target image to predict the depth of that image. The encoder-decoder framework is similar to the U-Net architecture, which enables us to encapsulate both global and local features while predicting depth at 4 scales. The relational attention module takes the encoder’s features as input and generates attention maps that are concatenated with the original features and fed to the depth decoder, as in Figure 2. The pose network comprises 4 convolutional layers producing a 6x1 output vector containing rotation (3x1) and translation (3x1) information, as shown in Figure 2. We use Sigmoid activation at the depth outputs and ELU activation everywhere else. The target image and its corresponding predicted depth are then processed by the augmentation pipeline to get the transformed augmented image and the true augmented depth. The augmented image is then fed to the network to predict the output augmented depth, and the model returns these outputs for computing the training losses. The target depth warps the source image to compute a synthetic image, using bi-linear sampling to sample the source images. At test time, the network simply predicts depth from a single input image.
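The bi-linear sampling step can be sketched in NumPy (a minimal single-channel illustration; the actual pipeline samples full RGB images at coordinates produced by projecting the target depth through the predicted pose):

```python
import numpy as np

def bilinear_sample(image, px, py):
    """Sample `image` (HxW) at continuous coordinates (px, py), as used
    to build the synthetic target view from a source image."""
    H, W = image.shape
    x0 = np.clip(np.floor(px).astype(int), 0, W - 2)
    y0 = np.clip(np.floor(py).astype(int), 0, H - 2)
    dx, dy = np.clip(px - x0, 0, 1), np.clip(py - y0, 0, 1)
    # interpolate horizontally on the two surrounding rows, then vertically
    top = image[y0, x0] * (1 - dx) + image[y0, x0 + 1] * dx
    bot = image[y0 + 1, x0] * (1 - dx) + image[y0 + 1, x0 + 1] * dx
    return top * (1 - dy) + bot * dy
```

Since the interpolation weights are differentiable in (px, py), gradients flow from the photometric loss back into depth and pose.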
| Method | Abs Rel | Sq Rel | RMSE | RMSE log | δ < 1.25 | δ < 1.25² | δ < 1.25³ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Zhou et al. | 0.183 | 1.595 | 6.709 | 0.270 | 0.734 | 0.902 | 0.959 |
| Yang et al. | 0.182 | 1.481 | 6.501 | 0.267 | 0.725 | 0.906 | 0.963 |
| Mahjourian et al. | 0.163 | 1.240 | 6.220 | 0.250 | 0.762 | 0.916 | 0.968 |
| Ranjan et al. | 0.148 | 1.149 | 5.464 | 0.226 | 0.815 | 0.935 | 0.973 |
III-B Constraints for Depth Prediction
In this section, we describe the formulation of various loss functions used in our network for self-supervised learning of depth and pose.
III-B1 Minimum Photometric Loss
As described in prior work, this loss is a slight variation of the standard photometric loss. Instead of taking the per-pixel average of the photometric loss over all sources, we take the per-pixel minimum over all sources. This successfully tackles scenarios where a target pixel is visible in one source image but occluded in the other: only the minimum error is back-propagated, ignoring the erroneous one.
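The minimum-over-sources selection can be sketched in NumPy (a minimal illustration assuming a plain L1 photometric error; the actual error typically also mixes in an SSIM term):

```python
import numpy as np

def photometric_error(target, warped):
    # plain per-pixel L1 error, averaged over channels -> HxW map
    return np.abs(target - warped).mean(axis=-1)

def min_reprojection_loss(target, warped_sources):
    """Per-pixel minimum of the photometric error over all warped sources.

    A pixel occluded in one source produces a large error there, but is
    still supervised by the other source where it remains visible.
    """
    errors = np.stack([photometric_error(target, w) for w in warped_sources])
    return errors.min(axis=0)  # HxW map; its mean gives the scalar loss
```

With one perfectly warped source and one occluded source, the averaged loss would wrongly penalise the pixel, while the minimum does not.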
Similar to prior work, we apply a per-pixel binary mask to the computed losses. The mask is generated by comparing the photometric error between the source and target frames with that between the synthesised source and target frames.
This eliminates static pixels from corrupting the loss, and the network skips learning depth altogether if the camera isn’t moving. We observe that although this improves depth prediction drastically, it leads to random white noise around static regions and makes the learning of depth more sensitive to noisy images. This happens because the mask doesn’t consider neighbouring pixels while comparing photometric errors and simply thresholds per-pixel values. To alleviate this problem, we enforce an L1 regularisation over the inverse of the mask, thereby motivating the mask to be positive for that sparse set of pixels.
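A minimal NumPy sketch of the automask and the L1 regularisation over its inverse, again assuming a plain L1 photometric error (function names and the regularisation weight are illustrative):

```python
import numpy as np

def auto_mask(target, sources, warped_sources):
    """Binary mask that is 1 where the warped (synthesised) error is lower
    than the identity error to the raw source frames.

    Pixels that do not change between frames (static camera, objects moving
    with the camera) fail this test and are masked out of the loss.
    """
    def err(a, b):
        return np.abs(a - b).mean(axis=-1)
    identity = np.stack([err(target, s) for s in sources]).min(axis=0)
    warped = np.stack([err(target, w) for w in warped_sources]).min(axis=0)
    return (warped < identity).astype(np.float32)

def mask_regularisation(mask, weight=0.01):
    # L1 penalty on the inverse of the mask, nudging masked-out pixels
    # to be re-included where possible (reduces speckle near static regions)
    return weight * np.abs(1.0 - mask).mean()
```

When target and source are identical (static camera), the comparison fails everywhere and the mask zeroes out the photometric loss, while the regulariser keeps the mask from staying zero on isolated noisy pixels.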
III-B2 Data Augmentation for Depth Supervision
Several works have utilised data augmentation[13, 30, 17] in their deep learning pipelines to make their networks more robust to challenging scenarios and invariant to changes in noise, brightness and contrast that are common in the real world. Traditionally, pipelines performed data augmentation at the data-loading stage and the augmented data was fed into the network for training. We instead utilise augmentation for generating augmented inputs and outputs that are used to train the network in a semi-supervised manner. We incorporate an augmentation loss, in the form of depth supervision, that improves the predicted depth. While training, in the first forward pass, the network takes the target image as input and outputs its depth. We pass the image-depth pair
to the augmentation pipeline, applying identical random image cropping, flipping, skewing, scaling and affine transformations to both. Additionally, we apply random changes to brightness, jitter, gamma and saturation of the input image, and add random Gaussian noise to it. The augmentation pipeline returns the augmented image and the true augmented depth. In the second forward pass, the augmented image is fed to the network, generating the augmented predicted depth. Augmented depth maps generated in the first pass serve as ground truth for depth maps generated in the second pass. The augmentation loss minimises the difference between the output augmented depth and the true augmented depth, enforcing the two to be consistent with each other.
Due to camera egomotion, occlusion is present at certain image boundaries. Rescaling and crop transformations randomly remove boundary regions from the image while ensuring that its size remains the same. Thus, the boundaries of the augmented depth are more accurate due to a lower probability of occlusion.
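The two-pass supervision above can be sketched as follows (a minimal NumPy illustration; the real pipeline applies the full set of crop/skew/scale and photometric transforms, here reduced to a horizontal flip plus Gaussian noise):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_pair(image, depth):
    """Apply the same geometric transform to image and depth, and
    photometric noise to the image only (hypothetical minimal version)."""
    if rng.random() < 0.5:  # identical geometric transform for both
        image, depth = image[:, ::-1], depth[:, ::-1]
    noisy = np.clip(image + rng.normal(0, 0.02, image.shape), 0, 1)
    return noisy, depth

def augmentation_loss(pred_aug_depth, true_aug_depth):
    # L1 consistency between the depth predicted on the augmented image
    # and the augmented version of the first-pass depth (treated as GT)
    return np.abs(pred_aug_depth - true_aug_depth).mean()
```

Because the photometric corruptions touch only the image, the network is pushed to predict the same (transformed) depth regardless of noise and illumination changes.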
III-B3 Relational Self-Attention
The relational self-attention block takes as input the features from the ResNet18 encoder and computes a self-attention term that is added as a residual connection to the input feature to compute the output feature. The operation can be summarised as follows:
Here, w is a weight factor that projects the concatenated vector to a scalar by performing a convolution with a single output channel, [·, ·] denotes concatenation, and N denotes the number of positions in the input. The projection, query, key and value functions are defined by 2D convolution operations, as shown in Figure 2. The input generates the projection, query, key and value embeddings as
where the corresponding weight matrices are learnt. The pairwise relation between query and key is projected by w and multiplied by the value to compute our relational self-attention, which is then element-wise added to the input to give the output of our attention block.
The output is then concatenated with the encoder’s features and utilised by the decoder to compute multi-scale depth.
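A minimal NumPy sketch of the relational attention on flattened features, with random matrices standing in for the learnt weights (the actual module operates on 2D feature maps with 2D convolutions, and the exact normalisation is an assumption here):

```python
import numpy as np

rng = np.random.default_rng(0)

def relational_self_attention(x, d_k=8):
    """x: flattened features of shape (N, C), N spatial positions.

    q, k, v are 1x1-conv (i.e. linear) embeddings; the relation between
    q_i and k_j is their concatenation projected to a scalar by w,
    normalised by N, and used to weight the values v.
    """
    N, C = x.shape
    Wq = rng.normal(0, 0.1, (C, d_k))  # stand-ins for learnt weights
    Wk = rng.normal(0, 0.1, (C, d_k))
    Wv = rng.normal(0, 0.1, (C, C))
    w = rng.normal(0, 0.1, (2 * d_k,))  # single-output-channel projection

    q, k, v = x @ Wq, x @ Wk, x @ Wv
    # relation r[i, j] = w . [q_i ; k_j]; the concat projection splits into
    # two dot products, one on q and one on k, so it broadcasts cheaply
    r = (q @ w[:d_k])[:, None] + (k @ w[d_k:])[None, :]  # (N, N)
    attention = (r @ v) / N       # weighted sum of values over positions
    return x + attention          # residual connection to the input
```

Unlike softmax attention, this relational weighting is unbounded, which is why the residual connection and the 1/N normalisation matter for stability.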
III-B4 Final Training Loss
We combine the photometric and smoothness losses with our data augmentation loss, along with the regularization over the mask, to obtain our final objective.
All our losses are computed per pixel and averaged over the entire image, all scales and the batch.
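With λ denoting the (unspecified) per-term weights and μ the binary automask, the combined objective can be sketched as:

```latex
\mathcal{L} \;=\; \mu \odot \mathcal{L}_{photo}
\;+\; \lambda_{s}\,\mathcal{L}_{smooth}
\;+\; \lambda_{a}\,\mathcal{L}_{aug}
\;+\; \lambda_{m}\,\lVert 1-\mu \rVert_{1}
```

where the first term is the masked minimum photometric loss, the second the edge-aware smoothness loss, the third the augmentation depth-supervision loss, and the last the L1 regularisation over the inverse of the mask; the λ values are placeholders for the elided hyper-parameters.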
| Type | Abs Rel | Sq Rel | RMSE |
| --- | --- | --- | --- |
IV Experiments and Results
This section introduces the dataset and describes the training details. We describe in detail various comparative qualitative and quantitative studies along with an ablation study undertaken for validation and show that our method surpasses all other existing related methods.
| Aug Loss | Attention | Abs Rel | Sq Rel | RMSE | RMSE log | δ < 1.25 | δ < 1.25² | δ < 1.25³ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Method | Backbone | Abs Rel | Sq Rel | RMSE | RMSE log | δ < 1.25 | δ < 1.25² | δ < 1.25³ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Proposed Approach 1024x384 | ResNet18 | 0.108 | 0.745 | 4.436 | 0.181 | 0.889 | 0.966 | 0.984 |
| Depth Decoder’s Input | Abs Rel | Sq Rel | RMSE | RMSE log | δ < 1.25 | δ < 1.25² | δ < 1.25³ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Attention + No feature concat | 0.113 | 0.879 | 4.777 | 0.190 | 0.880 | 0.959 | 0.981 |
| Attention in all skip connections + Feature concat | 0.112 | 0.856 | 4.699 | 0.188 | 0.880 | 0.961 | 0.982 |
| Attention + feature concat in all skip connections | 0.112 | 0.866 | 4.742 | 0.189 | 0.879 | 0.960 | 0.982 |
| Attention + Feature concat | 0.111 | 0.817 | 4.685 | 0.188 | 0.883 | 0.961 | 0.982 |
IV-A Dataset

Our model was trained on the KITTI 2015 dataset. This dataset comprises videos captured by a camera mounted on a car moving through the German city of Karlsruhe and is widely used for tasks like estimation of depth, optical flow and egomotion. We used the Eigen test split of this dataset and evaluated our model using the ground-truth labels present in it. The test set consists of 697 images, and frames similar to those in the test set are removed from the training set. We also test our trained model on the 134 images of the Make3D dataset.
IV-B Parameter Settings
Following common practice, we use ImageNet weights for initialising our network and train our model using a single NVIDIA 2080Ti GPU. Three temporally consecutive images are fed into the model and optimised with Adam. The batch size is set to 12. While preparing training data, static frames are removed from the dataset as proposed by Zhou et al.
Basic augmentation in the form of random cropping, color jittering, resizing and flipping is also performed as part of our data preparation pipeline. We train our network in two progressive phases. In the first phase, images of 640x192 resolution are fed into the network for 50 epochs. In the second phase, we freeze the pose encoder and feed higher-resolution 1024x384 images to the depth network, training for 5 epochs with batch size 2. Progressive training aids further improvement and faster convergence of our depth prediction model at the higher resolution, as shown in Table IV.
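The two-phase schedule can be summarised as a configuration sketch (hypothetical structure and names, not the authors' code; whether the shared encoder also trains in phase two is an assumption):

```python
# Hypothetical sketch of the two-phase progressive training schedule.
PHASES = [
    {"resolution": (640, 192), "epochs": 50, "batch_size": 12,
     "freeze_pose_encoder": False},  # phase 1: low resolution, full model
    {"resolution": (1024, 384), "epochs": 5, "batch_size": 2,
     "freeze_pose_encoder": True},   # phase 2: high resolution, pose frozen
]

def trainable_parts(phase):
    """Return which sub-networks receive gradient updates in a phase."""
    parts = {"depth_network", "shared_encoder"}
    if not phase["freeze_pose_encoder"]:
        parts.add("pose_encoder")
    return parts
```

Freezing the pose branch in phase two means the short high-resolution phase refines only depth-related weights, which is where the bulk of the saved compute comes from.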
IV-C Main Results
We compare our results with other recent models in Table I. These results show that our monocular model comprehensively outperforms existing state-of-the-art self-supervised monocular methods. Our model even surpasses methods that incorporate optical flow prediction into their pipeline, while having a lower number of training parameters. During evaluation, as is common practice, we cap depth at 80m. Table IV shows the comparison with DDV and Monodepth2 trained using features from the same ResNet18 encoder. Our method has better RMSE, Sq Rel and Abs Rel than other similar methods, performing significantly better in all metrics on the Eigen split of the KITTI 2015 dataset.
IV-D Qualitative Analysis
Figure 3 displays qualitative improvements of our method over the baselines Monodepth2 (MD2) and DDV. Our algorithm retains structural details in objects like poles, sign boards and trees while learning smooth depth over the entire scene. We also have the least noise in disparity values of the infinitely distant sky.
IV-E Results on Make3D

Table II shows results of our model trained on the KITTI dataset and tested on the Make3D dataset. We use the standard crop and apply median depth scaling for fair comparison. The table shows our method’s superior performance over other self-supervised methods, narrowing the gap to supervised ones.
IV-F Ablation Study of Losses
We also undertake an exhaustive quantitative comparison of all the losses to analyze the impact of each loss component. Table III shows different combinations of losses applied and the corresponding results achieved by our model. It is evident from the table that with just the inclusion of the augmentation loss, we get significant gains over the baseline. The augmentation loss makes the model more robust to variation in brightness, contrast and image noise. Supervising the network with the augmentation loss, utilising the true augmented depth, drastically improves the depth prediction at occluded regions including image boundaries. Similarly, appearance and color based transforms help the network learn to predict more consistent and robust depth that is less affected by noise and illumination changes. We observe that adding reflection padding to our network has no noticeable effect on the depth prediction results, as the augmentation loss already improves depth at image boundaries. We also observe that attention improves results more than augmentation, and the combination of both losses yields a multi-fold improvement over the baseline. We also tried replacing the skip connections with the attention module, but the added complexity was drastically high with no significant improvement in depth prediction. As depicted in Table V, concatenating the encoder’s features with the attention output gave better results than simply passing the attention block’s output to the decoder. This tells us that attention, though significant, isn’t sufficient to achieve the optimal result. Also, increasing the augmentation loss weight induced texture-copy artifacts, while decreasing it led to minimal improvement in accuracy.
As observed in Monodepth2, it is necessary to handle static frames, i.e. frames where either the camera is stationary or regions such as the sky do not change across consecutive frames. Automasking masks out these areas and prevents the model from learning erroneous depth. To enforce the mask to be consistent and smooth, and to eliminate noisy values, we apply L1 regularisation over the inverse of our mask. This slightly increases the number of pixels to be evaluated and reduces artifacts over still regions like the sky. Our method predicts superior results both qualitatively and quantitatively when compared to other self-supervised monocular depth prediction methods.
We propose a self-supervised model that utilises relational self-attention for jointly learning depth and camera egomotion. The model predicts accurate and sharp depth estimates by incorporating data augmentation as depth supervision. Our algorithm predicts state-of-the-art depth on the KITTI benchmark. In future, we shall utilise optical flow for motion segmentation, pretrained models and semantic cues for further strengthening the depth of moving objects. Architectural innovations in deep learning such as vision transformers, along with cues like optical flow and semantic information present in the scene, can further improve robustness and consistency in predicting depth.
Deep 3d-zoom net: unsupervised learning of photo-realistic 3d-zoom. ArXiv abs/1909.09349. Cited by: §I.
Depth prediction without the sensors: leveraging structure for unsupervised learning from monocular videos. Proceedings of the AAAI Conference on Artificial Intelligence 33, pp. 8001–8008. Cited by: TABLE I.
-  (2016) Single-image depth perception in the wild. Cited by: §II-A.
-  (2019-10) Self-supervised learning with geometric constraints in monocular video: connecting flow, depth, and camera. 2019 IEEE/CVF International Conference on Computer Vision (ICCV). Cited by: §II-B2, §III-B1, §IV-B, §IV-C.
-  (2020) Net: semantic-aware self-supervised depth estimation with monocular videos and synthetic data. In European Conference on Computer Vision, pp. 52–69. Cited by: §II-B2.
-  (2015) Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture. In ICCV, Cited by: Fig. 3.
-  (2014) Depth map prediction from a single image using a multi-scale deep network. In Advances in neural information processing systems, pp. 2366–2374. Cited by: §I, §II-A, TABLE I, §IV-A, §IV-C, TABLE III.
Deep ordinal regression network for monocular depth estimation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2002–2011. Cited by: §I.
-  (2019) Dual attention network for scene segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3146–3154. Cited by: §II-B3.
-  (2016) Unsupervised cnn for single view depth estimation: geometry to the rescue. In European conference on computer vision, pp. 740–756. Cited by: §I, §II-B1.
-  (2013) Vision meets robotics: the kitti dataset. The International Journal of Robotics Research 32 (11), pp. 1231–1237. Cited by: TABLE I, §IV-A, §V.
-  (2017) Unsupervised monocular depth estimation with left-right consistency. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 270–279. Cited by: §I, §II-B1, §III-A, §III-B1, TABLE II.
-  (2019) Digging into self-supervised monocular depth estimation. In Proceedings of the IEEE international conference on computer vision, pp. 3828–3838. Cited by: §I, Fig. 3, §II-B2, §III-B1, §III-B2, TABLE I, TABLE II, §IV-B, §IV-C, §IV-D, §IV-F, TABLE IV.
-  (2020) 3D packing for self-supervised monocular depth estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2485–2494. Cited by: §I.
-  (2019) Ccnet: criss-cross attention for semantic segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 603–612. Cited by: §II-B3.
-  (2020) Guiding monocular depth estimation using depth-attention volume. In European Conference on Computer Vision, pp. 581–597. Cited by: §II-B2.
-  (2020) Self-supervised monocular trained depth estimation using self-attention and discrete disparity volume. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4756–4765. Cited by: §I, Fig. 3, §III-B2, TABLE I, TABLE II, §IV-C, §IV-D, TABLE IV.
-  (2014-11) Depth transfer: depth extraction from video using non-parametric sampling. IEEE Transactions on Pattern Analysis and Machine Intelligence 36 (11), pp. 2144–2158. Cited by: §II-A.
-  (2014) Depth transfer: depth extraction from video using non-parametric sampling. PAMI. Cited by: TABLE II.
-  (2020) Image dehaze method using depth map estimation network based on atmospheric scattering model. 2020 International Conference on Electronics, Information, and Communication (ICEIC), pp. 1–3. Cited by: §I.
-  (2016) Deeper depth prediction with fully convolutional residual networks. In 3DV, Cited by: TABLE II, §IV-E.
-  (2019) Tsm: temporal shift module for efficient video understanding. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 7083–7093. Cited by: §II-B3.
-  (2014) Discrete-continuous depth estimation from a single image. In CVPR, Cited by: TABLE II.
-  (2019) Aurora guard: real-time face anti-spoofing via light reflection. ArXiv abs/1902.10311. Cited by: §I.
-  (2018) Every pixel counts++: joint learning of geometry and motion with 3d holistic understanding. arXiv preprint arXiv:1810.06125. Cited by: §I, TABLE I.
-  (2018) Unsupervised learning of depth and ego-motion from monocular video using 3d geometric constraints. Cited by: TABLE I.
-  (2020) Deep learning for real-time 3d multi-object detection, localisation, and tracking: application to smart mobility. Sensors (Basel, Switzerland) 20. Cited by: §I.
-  (2018) Attention u-net: learning where to look for the pancreas. arXiv preprint arXiv:1804.03999. Cited by: §II-B3.
-  (2019) Libra r-cnn: towards balanced learning for object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 821–830. Cited by: §II-B3.
-  (2019) Superdepth: self-supervised, super-resolved monocular depth estimation. In 2019 International Conference on Robotics and Automation (ICRA), pp. 9250–9256. Cited by: §III-B2.
-  (2019) Competitive collaboration: joint unsupervised learning of depth, camera motion, optical flow and motion segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 12240–12249. Cited by: §II-B2, §III-B1, TABLE I, §IV-B, §IV-C.
-  (2015) U-net: convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pp. 234–241. Cited by: §III-A.
-  (2017) A simple neural network module for relational reasoning. arXiv preprint arXiv:1706.01427. Cited by: §II-B3.
-  (2009-05) Make3D: learning 3d scene structure from a single still image. IEEE Trans. Pattern Anal. Mach. Intell. 31 (5), pp. 824–840. Cited by: §II-A.
-  (2009) Make3d: learning 3d scene structure from a single still image. PAMI. Cited by: TABLE II, §IV-A, §IV-E.
-  (2020) Feature-metric loss for self-supervised learning of depth and egomotion. In European Conference on Computer Vision, pp. 572–588. Cited by: §II-B2.
-  (2018) Depth from motion for smartphone ar. ACM Transactions on Graphics (TOG) 37, pp. 1 – 19. Cited by: §I.
-  (2017) Sfm-net: learning of structure and motion from video. arXiv preprint arXiv:1704.07804. Cited by: §I.
-  (2018) Learning depth from monocular videos using direct methods. In CVPR, Cited by: TABLE II.
-  (2018) Learning depth from monocular videos using direct methods. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2022–2030. Cited by: §III-A, §III-B1, TABLE I.
-  (2018) Non-local neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 7794–7803. Cited by: §II-B3.
-  (2004) Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13 (4), pp. 600–612. Cited by: §III-B1.
-  (2019-10) Self-supervised monocular depth hints. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Cited by: §II-B1.
-  (2018) Size-to-depth: a new perspective for single image depth estimation. Cited by: §II-A.
-  (2020) Structure-guided ranking loss for single image depth prediction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 611–620. Cited by: §II-B2.
-  (2018) Lego: learning edge with geometry all at once by watching videos. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 225–234. Cited by: TABLE I.
-  (2017) Unsupervised learning of geometry with edge-aware depth-normal consistency. Cited by: TABLE I.
-  (2018) Geonet: unsupervised learning of dense depth, optical flow and camera pose. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1983–1992. Cited by: §I, §II-B2, TABLE I, §IV-C.
Unsupervised learning of monocular depth estimation and visual odometry with deep feature reconstruction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 340–349. Cited by: TABLE I.
Self-attention generative adversarial networks. In International Conference on Machine Learning, pp. 7354–7363. Cited by: §II-B3.
-  (2017) Unsupervised learning of depth and ego-motion from video. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1851–1858. Cited by: §I, §II-B2, TABLE I, TABLE II, §IV-B, §IV-E.
-  (2018) DF-net: unsupervised joint learning of depth and flow using cross-task consistency. Lecture Notes in Computer Science, pp. 38–55. Cited by: §II-B2.