
S^3Net: Semantic-Aware Self-supervised Depth Estimation with Monocular Videos and Synthetic Data

Solving depth estimation with monocular cameras enables the possibility of widespread use of cameras as low-cost depth estimation sensors in applications such as autonomous driving and robotics. However, learning such a scalable depth estimation model would require a lot of labeled data which is expensive to collect. There are two popular existing approaches which do not require annotated depth maps: (i) using labeled synthetic and unlabeled real data in an adversarial framework to predict more accurate depth, and (ii) unsupervised models which exploit geometric structure across space and time in monocular video frames. Ideally, we would like to leverage features provided by both approaches as they complement each other; however, existing methods do not adequately exploit these additive benefits. We present S^3Net, a self-supervised framework which combines these complementary features: we use synthetic and real-world images for training while exploiting geometric, temporal, as well as semantic constraints. Our novel consolidated architecture provides a new state-of-the-art in self-supervised depth estimation using monocular videos. We present a unique way to train this self-supervised framework, and achieve (i) more than 15% improvement over previous synthetic supervised approaches that use domain adaptation and (ii) more than 10% improvement over previous self-supervised approaches which exploit geometric constraints from the real data.





1 Introduction

Depth estimation is a fundamental component of 3D scene understanding, with applications in fields such as autonomous driving, robotics and space exploration. There has been considerable progress in estimating depth through monocular camera images in the last few years, as monocular cameras are inexpensive and widely deployed on many robots. However, building supervised depth estimation algorithms using monocular cameras is challenging, primarily because collecting ground-truth depth maps for training requires a carefully calibrated setup. As an example, many vehicles currently sold in the market have monocular cameras deployed, but there is no trivial way to obtain ground-truth depth information from the images collected from these cameras. Thus, supervised methods for depth estimation suffer due to the unavailability of extensive training labels.

To overcome the lack of depth annotations for monocular camera data, existing work has explored two directions: designing self-supervised/semi-supervised approaches that require minimal labeling, or leveraging labeled synthetic data. Most self-supervised approaches rely on geometric and spatial constraints [44] and have succeeded in reducing the impact of this issue; however, they do not always perform well in challenging environments with conditions such as limited visibility, object motion, etc. This is because they lack a strong supervisory training signal that would let them learn from and generalize to such conditions. In contrast, some effort has been undertaken to use realistic simulated environments to obtain additional synthetic depth data, which can be used to compute a supervised training loss.

Synthetic data can be easily generated in different settings with depth labels - for example by varying the lighting conditions, changing the weather, varying object motion, etc. Simply training the original model on synthetic data, however, does not work well in practice, as the model does not generalize well to a real-world dataset. To bridge this domain gap between the real-world and synthetic datasets, many domain adaptation techniques have been proposed. Recent works, like [46, 27], have found success in using adversarial approaches to address this issue. These solutions typically involve using an adversarial transformation network to align the domains of the synthetic and real-world images, followed by a task network that is responsible for predicting depth. Naturally, we pose the question - can we build a depth estimation network that combines the benefits of information conveyed through real as well as synthetic data?

We present a novel framework, S^3Net, which trains the depth network by exploiting both self-supervised constraints (derived from real-world sequential images) and supervised constraints (derived from synthetic data and the respective ground-truth depths). This framework is implemented through several integrated stages, described below. First, as shown in Fig. 1, we present a novel Generative Adversarial Network (GAN)-based domain adaptation network which exploits geometric constraints across space and time, as well as the semantic consistency between original synthetic images and translated images. These constraints encode additional latent information and thus enhance the quality of domain adaptation. Next, to leverage ‘synthetic’ supervised cues and ‘real’ self-supervised cues, we present a novel training approach: the weights of the depth estimation network are updated alternately based on the supervised and self-supervised losses. Finally, to impose explicit constraints on object geometry, we augment the input RGB images with semantic labels, and utilize a bi-directional auto-masking technique to exclude pixels that violate rigid-motion constraints.

Figure 1: Overview of our proposed framework: integrating supervised learning on translated synthetic data and self-supervised learning on real videos while imposing spatial, temporal and semantic constraints

Novel Adversarial Framework: The key idea of our GAN structure is to utilize flow-based photometric consistency and semantic consistency to better guide the image translation and reduce the domain gap. By utilizing the flow and the sequential translated images, the frame at time t can be used to reconstruct the frame at time t+1. The photometric differences between the reconstructed frame and the original frame are primarily due to imperfect image translation. Moreover, the semantic information should remain consistent before and after the image translation. Therefore, we add both a photometric consistency loss and a semantic consistency loss to create a novel adversarial framework. These offer additional constraints on the domain adaptation and further improve the image translation performance. They also help increase robustness and reduce undesired artifacts in translated images when compared with traditional approaches, as described in Section 4.2.

Semantics and Bi-directional Auto-Masking: Inspired by the auto-masking technique proposed in [16], we propose a novel bi-directional auto-masking technique for sequential real-world images, which can filter out the pixels violating the fundamental rigid-motion assumption for self-supervised depth learning. The key difference from a single-direction mask is that the bi-directional technique fuses the masks learned by reconstructing frame t from frame t+1 and vice versa, which substantially increases the accuracy of the proposed mask. Moreover, we augment the input images of our model with semantic labels. The semantic labels provide explicit geometry constraints, which can further boost the performance of the image translation and the depth estimation.

Training our depth model poses two major challenges: the GAN networks are unstable during training, and the simultaneous presence of supervised synthetic losses and self-supervised losses can result in a lack of convergence. We address the convergence issue by proposing a two-phase training strategy. In the first phase, we train the image translation network and the depth estimation network with synthetic supervised losses to stabilize the GAN-based image translation network. In the second phase, we freeze the weights of the image translation network and further train the depth estimation network with both supervised and self-supervised losses.

We evaluate our framework on two challenging datasets, i.e., KITTI [26] and Make3D [35]. The evaluation results show that our proposed model outperforms the state-of-the-art approaches in all evaluated metrics. In particular, we show that S^3Net outperforms the state-of-the-art synthetic supervised domain adaptation approaches [27] by more than 15% and self-supervised approaches [16] by more than 10%. Moreover, we only require the depth estimation network during inference, so our inference compute requirements are comparable to previous state-of-the-art approaches.

2 Related Work

Monocular depth estimation is often considered an ill-posed problem, since a single 2D image can be produced from an infinite number of distinct 3D scenes. Without additional constraints, it is challenging to predict depths correctly for a given image. Previous works address this challenge in two ways: (i) supervised depth estimation trained with ground-truth depths; (ii) self-supervised depth estimation that learns indirect depth cues from sequential images.

2.1 Supervised Depth Estimation

Eigen et al. [10] proposed the first supervised learning architecture that models multi-scale information for pixel-level depth estimation using direct regression. Inspired by this work, many follow-up works have extended supervised depth estimation in various directions [19, 31, 22, 41, 4, 32, 8, 45, 40, 42, 11, 39]. However, acquiring ground-truth depths is prohibitively expensive. It is therefore impractical to obtain a large amount of labelled training data covering various road conditions, weather conditions, etc., which suggests that these approaches may not generalize well.

One promising approach that reduces the labeling cost is to use synthetically generated data. However, models trained on synthetic data typically perform poorly on real-world images due to a large domain gap. Domain adaptation aims to minimize this gap. Recently, GAN-based approaches have shown promising performance in domain adaptation [13, 14, 33, 3, 34]. Atapour et al. [1] proposed a CycleGAN-based translator [51] to translate real-world images into the synthetic domain, and then trained a depth prediction network using the synthetic labeled data. Zheng et al. [47] propose a novel architecture (T^2Net) where the style translator and the depth estimation network are optimized jointly so that they can improve each other. Despite promising performance, these approaches inherently suffer from mode collapse and semantic distortion due to imperfect synthetic-to-real image translation. Various constraints and techniques have been proposed to improve the quality of the translated images, but image translation ("domain adaptation" and "image translation" are used interchangeably) still remains a challenging task.

2.2 Self-supervised Depth Estimation

In addition to supervised solutions, various approaches have been studied to predict depths by extracting disparity and depth cues from stereo image pairs or monocular videos. Garg et al. [15] introduced a warping loss based on Taylor expansion. An image reconstruction loss with a spatial smoothness constraint was introduced in [30, 49, 20] to learn depth and camera motion. Recent works [36, 50, 24, 17, 16] aim to improve depth estimation by further exploiting geometry constraints. In particular, Godard et al. [17] employed epipolar geometry constraints between stereo image pairs and enforced a left-right consistency constraint in training the network. Zhou et al. [49] proposed a network to learn pose and depth from videos by introducing a photometric consistency loss while relying only on monocular videos for training. Yin et al. [44] proposed GeoNet, which also used depth and pose networks to compute rigid flow between sequential images in a video. More specifically, they introduced a temporal, flow-based photometric loss to predict depth for monocular videos in an unsupervised setting. Bian et al. [2] used a similar approach along with a self-discovered mask to handle dynamic and occluded objects. Gordon et al. [18] also address these issues in a purely geometric approach. Casser et al. [5] adapt a similar framework with an additional online refinement model during inference. Xu et al. [43] use region deformer networks along with the earlier constraints to handle rigid and non-rigid motion. Zhou et al. [48] use a dual-network attention-based model which processes low and high resolution images separately. Godard et al. [16] also presented another unsupervised approach which built on their earlier model [17] by modifying the implementation of the unsupervised constraints.

Another set of recently adopted approaches involves using semantic information, which provides additional constraints on object geometry that can potentially boost the accuracy of depth estimation [25, 28, 7, 29]. Meng et al[25] built on top of [44], and proposed several ways to implement a semantic aided network which helped improve performance. Ranjan et al[29] used a competitive collaboration framework to leverage segmentation maps, pose and flow for depth estimation. However, even with various constraints, the self-supervised approaches predict depth primarily based on indirect and weak-supervision depth cues, which can be easily affected by undesired artifacts, such as motion blurring and low visibility.

Our model architecture is influenced by various previous works, e.g., the approaches in [16, 47, 27, 9]. Compared to these approaches, however, our S^3Net cooperatively combines supervised depth prediction on synthetic data and self-supervised depth prediction on sequential images, such that the two strategies complement each other in a mutually beneficial setting.

3 Proposed Methods

We propose a joint framework for monocular depth estimation that is trained on translated synthetic images in a supervised manner and further fine-tuned on sequences of real-world images in a self-supervised fashion. Our proposed framework can be broken down into two main components: a) synthetic-to-real translation and task prediction (the generator, discriminator, and depth estimation networks), and b) view-synthesis-guided self-supervised fine-tuning (the pose and depth networks).

Figure 2: Our detailed architecture: a) semantic- and photometric-consistent GAN for synthetic supervised depth estimation; b) self-supervised architecture trained on sequences of real-world images with a warping view-synthesis loss.

3.1 Novel GAN Architecture

Models trained on synthetic data do not generalize well to real-world data because of domain shift. To address this problem, we build upon the work of T^2Net [46] for supervised depth estimation on translated synthetic images.

3.1.1 Adversarial Constraints

The goal of our generator is to translate a synthetic image into the real domain. To achieve this, a discriminator and a transformer architecture are trained jointly, such that the discriminator tries to predict whether an image is real or synthetic. This accounts for our GAN loss, as shown in Fig. 2:
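A standard adversarial objective matching this description (generator G, discriminator D, synthetic image x_s, real image x_r; these symbol names are assumed here, not taken from the original) is:

```latex
\mathcal{L}_{GAN} = \mathbb{E}_{x_r}\left[\log D(x_r)\right] + \mathbb{E}_{x_s}\left[\log\left(1 - D(G(x_s))\right)\right]
```

where G is trained to minimize this objective while D maximizes it.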


3.1.2 Identity Constraints

To improve the quality of translated images, T^2Net imposes an identity constraint: if a real image is given as input, the generator network's output should be the identical real image. This additional constraint is incorporated as an identity loss in Fig. 2:
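The usual L1 form of such an identity constraint (with the same assumed notation: generator G, real image x_r) is:

```latex
\mathcal{L}_{idt} = \left\lVert G(x_r) - x_r \right\rVert_1
```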


3.1.3 Semantic Consistency

While the identity constraint improves upon a vanilla GAN architecture, we observe that the translated images had artifacts, as shown in Fig. 4, which cause imperfect domain translation and subsequently hurt depth prediction. To address this we introduce a semantic consistency loss (Fig. 2), based on the idea that, given a semantic segmentation model trained on the source domain, a synthetic image and its translation should have identical semantic segmentation maps. This is intuitive, as domain translation should not affect the semantic structure of the image. We enforce this by treating the segmentation of the synthetic image as a ground-truth label for the pixel-wise prediction scores on the translated image. These are used to compute a cross-entropy loss over semantic labels:
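As a concrete (toy) illustration of this loss, assuming hard per-pixel class labels from the synthetic image and raw class logits from the segmentation network on the translated image, a NumPy sketch could look like:

```python
import numpy as np

def semantic_consistency_loss(scores_translated, labels_synthetic):
    """Pixel-wise cross-entropy between segmentation scores on the
    translated image and hard labels taken from the synthetic image.

    scores_translated: (H, W, C) raw logits from the segmentation network
    labels_synthetic:  (H, W) integer class ids, treated as ground truth
    """
    # Softmax over the class dimension (numerically stabilized).
    z = scores_translated - scores_translated.max(axis=-1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    h, w = labels_synthetic.shape
    # Pick the predicted probability of the "ground-truth" class per pixel.
    p_true = probs[np.arange(h)[:, None], np.arange(w)[None, :], labels_synthetic]
    return float(-np.log(p_true + 1e-12).mean())
```

This is a per-pixel cross-entropy averaged over the image; implementation details such as class weighting or ignored labels are not specified in the text.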


However, because of domain shift we cannot expect a segmentation network trained on synthetic images to generalize well to the translated image domain; hence we also continue training the segmentation network while training our GAN architecture, so that it learns features that generalize to both domains.

3.1.4 Photometric Consistency with Ground-truth Flow

In addition to the semantic constraints, we introduce a flow-guided photometric loss [44] to exploit the temporal structure in translated image sequences. By applying the ground-truth flow, the frame at time t can be used to reconstruct the frame at time t+1; we represent this as a warping transformation. In Eq. 4 below, the loss computes the photometric differences between the translated frame at time t+1 and its reconstruction warped from the frame at time t. This photometric loss provides indirect supervision on the synthetic-to-real image translation.
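An L1 photometric form consistent with this description (our notation, not the original's: x^{tr}_t is the translated frame at time t, and the hat denotes its flow-based reconstruction) would be:

```latex
\mathcal{L}_{pc} = \big\lVert \hat{x}^{tr}_{t+1} - x^{tr}_{t+1} \big\rVert_1,
\qquad
\hat{x}^{tr}_{t+1} = \mathcal{W}_{t \rightarrow t+1}\!\left(x^{tr}_t\right)
```

where W is the warping transformation induced by the ground-truth optical flow.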


Incorporating the above constraints in our GAN framework results in an improved domain translator that is largely devoid of artifacts and preserves semantic structure as shown in Fig. 4.

3.2 Combining Supervised and Self-supervised Depth Estimation

3.2.1 Supervised Depth Estimation on Synthetic Data

With the ground-truth depth labels for synthetic data, we formulate depth estimation on synthetic data as a regression problem. In Eq. 5 below, the estimated depth map for a translated synthetic frame is regressed against the corresponding ground-truth label.


In accordance with our base network (T^2Net), we add an edge consistency/awareness loss, which penalizes discontinuities in the predicted depth map that do not coincide with edges in the image.
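Standard forms consistent with the surrounding description (symbols assumed here: predicted depth d-hat, ground truth d*, image x) are an L1 regression loss and an image-gradient-weighted smoothness term:

```latex
\mathcal{L}_{task} = \left\lVert \hat{d} - d^{*} \right\rVert_1,
\qquad
\mathcal{L}_{edge} = \left\lvert \partial_x \hat{d} \right\rvert e^{-\lvert \partial_x x \rvert}
                   + \left\lvert \partial_y \hat{d} \right\rvert e^{-\lvert \partial_y x \rvert}
```

The exponential weights relax the smoothness penalty exactly where the image itself has strong gradients, i.e., at object edges.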


Our training is divided into two phases. In the first phase, we train the GAN-based image transfer network (the generator and discriminator) and a depth estimation network. A detailed explanation of our training methodology follows in Section 4.1.3. The first loss objective is a weighted combination of the above constraints (Sections 3.1, 3.2), where each loss term is weighted by its own hyper-parameter.

3.2.2 Self-supervised Depth Estimation on Monocular Videos

In addition to supervised depth prediction on translated synthetic images, we also perform self-supervised depth estimation on monocular videos. The corresponding pixel coordinates of a rigid object in two consecutive frames follow a fixed relationship: the pixel's position in the frame at time t, the camera intrinsics, the predicted depth, and the relative camera pose from frame t to frame t+1 together define an equivalent warping transformation that gives the pixel's position at time t+1. By sampling pixels at the warped coordinates, one frame can be reconstructed from the other. The photometric difference between the constructed image and the true image provides self-supervision for both the depth and pose estimation networks (Fig. 2).
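The standard rigid-scene reprojection relation used in self-supervised depth methods (e.g., [49]; symbol names are ours) is:

```latex
p_{t+1} \sim K \, T_{t \rightarrow t+1} \, D_t(p_t) \, K^{-1} \, p_t
```

where p_t and p_{t+1} are homogeneous pixel coordinates, K is the camera intrinsic matrix, D_t(p_t) is the predicted depth at p_t, and T is the relative camera pose.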



3.2.3 Bi-directional Auto-Masking

Inspired by the auto-masking method proposed in [16], we compute the photometric loss for different sequential image pairs, e.g., from frame t to frame t+1 and from frame t+1 to frame t, and then aggregate these photometric losses by taking their per-pixel minimum. Additionally, only the pixels whose minimum reprojection loss is lower than the photometric loss between the unwarped frames are selected for further loss computation. This is because the discarded pixels are more likely to belong to moving objects with a moving speed similar to that of the camera, or to stationary objects captured by a stationary camera. For a more complete loss computation, we consider a bi-directional warping transformation, i.e., from frame t to frame t+1 as well as from frame t+1 to frame t.
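A minimal NumPy sketch of this masking rule (L1 photometric error on single-channel images only; the actual method uses the full photometric loss of [16] and additionally fuses masks from both warping directions) could look like:

```python
import numpy as np

def automask(target, warped_prev, warped_next, prev, next_):
    """Minimum-reprojection auto-mask in the style of [16] (toy version).

    target: (H, W) frame I_t
    warped_prev/warped_next: frames t-1 / t+1 warped into view t
    prev/next_: the unwarped neighbouring frames
    Returns (min_reproj_error, mask) where mask=True keeps a pixel.
    """
    err = lambda a, b: np.abs(a - b)
    # Per-pixel minimum over the two reconstruction directions.
    reproj = np.minimum(err(target, warped_prev), err(target, warped_next))
    # Photometric error w.r.t. the *unwarped* neighbours: if warping does
    # not beat simply copying the neighbour, the pixel likely violates the
    # rigid-motion assumption (e.g. an object moving with the camera).
    identity = np.minimum(err(target, prev), err(target, next_))
    mask = reproj < identity
    return reproj, mask
```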


In the second phase of our training, we train the depth and pose networks with a combination of sequential real images and GAN-translated synthetic images. Our total loss objective, which includes training the depth network with the supervised loss from synthetic data, is a weighted combination of the supervised and self-supervised losses, with one hyper-parameter per term. More details regarding our training strategy follow in Section 4.1.3.

3.3 Semantic Augmentation

Semantic labels provide important information about object shape and geometry. We believe such information helps improve the accuracy of depth estimation by imposing additional constraints. For example, on 2D images, the pixels on the object boundaries can have very different depths. Semantic information can help regulate the pixels belonging to certain objects and facilitate the learning process of depth estimation. In this work, to utilize semantic information, we augment the input RGB images with additional semantic labels. We also experimented with augmenting RGB images with semantic labels during synthetic-to-real image translation and obtained substantial improvements in the quality of our translated images. We did not apply the semantic consistency loss defined in Section 3.1 while conducting these experiments. Results for this study are provided in Table 4.

4 Experiments

We first present the implementation details of our model, including the network implementation, data pre-processing, and our training and inference strategies. We test our model on the KITTI and Make3D benchmarks and compare our performance with other state-of-the-art models. Finally, we study the importance of each component in our model through various ablation experiments.

4.1 Implementation Details

4.1.1 Network Implementation

Our framework consists of two main sub-modules: (i) the syn-to-real image translation network, which translates synthetic images into real-style images, and (ii) the depth estimation network, which predicts depth maps for both translated synthetic images and real videos. For the synthetic-to-real image translation network, we build on the T^2Net architecture [46], with added constraints that use the synthetic ground-truth labels for semantics and optical flow. For the semantic consistency loss we use DeepLab v3+ with a MobileNet backbone as our segmentation model; we pre-trained it on vKITTI, achieving a median IoU of 0.898 on the validation set. We tested the U-Net, VGGNet, and ResNet50 architectures for the depth estimation network and selected the U-Net architecture due to its best performance. A VGG-based architecture was used to estimate the relative camera poses between sequential images. These depth maps and camera poses are subsequently used to compute the self-supervised loss.

4.1.2 Data Pre-processing

We use vKITTI [12] and KITTI [26] as the synthetic dataset and the real-world dataset, respectively, when training the synthetic-to-real image translation network. The training dataset consists of 20470 images from vKITTI and 41740 images from KITTI. The training images of the KITTI dataset are further divided into small sequences; images in each sequence are ordered so that they represent a short video clip. We use 697 images from KITTI as our test dataset, as per the Eigen split [10]. The input images are resized to a fixed resolution (width × height) during both training and testing. The ground-truth depth information, semantic labels, and optical flow information from the synthetic vKITTI dataset are also used during training. As discussed in [46], the maximum vKITTI ground-truth depth is 655.3m, whereas the maximum KITTI depth is about 80m; we therefore clip the synthetic ground-truth depths to the range [0.01, 80] meters. For our real data, we require semantic labels in addition to the monocular video images from the KITTI dataset, and use the pre-trained DeepLab v3+ model [6] to generate them.

4.1.3 Model Training

Training our model has two major challenges: (i) the training of GAN-based networks is known to be unstable; (ii) the depth estimation in our model consists of two components, so the weights of the depth estimation network are updated by two separate loss functions, which can lead to convergence issues. To tackle this, we design a two-phase training strategy. In the first phase, we pre-train the synthetic-to-real image translation network along with the synthetic supervised depth estimation constraints to provide a stable initialization for the GAN-based image translation network. In the second phase, we freeze the weights of the image translation network and train the depth estimation network using both supervised and self-supervised losses. We primarily tested two training methods to harmonize the two sources of losses: 1) weighted-sum training: updating the weights of the depth estimation network based on a weighted sum of the two losses; 2) alternating training: alternately updating the weights of the depth estimation network with each of the two losses. We find that the two training methods produce comparable evaluation results, but alternating training provides an additional control knob for optimizing the training and generalizes well to both data sources, and therefore to unseen datasets. Due to space limitations, we report results using alternating training only.
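The alternating schedule can be sketched as follows (a toy simplification of our own; in practice each step consumes a batch of translated synthetic or real video data with its respective loss):

```python
def alternating_training(step_supervised, step_self_supervised, n_steps):
    """Toy sketch of the second-phase alternating schedule: the depth
    network's weights are updated by the supervised loss on translated
    synthetic batches and by the self-supervised loss on real video
    batches on alternating steps."""
    history = []
    for step in range(n_steps):
        if step % 2 == 0:
            step_supervised()        # synthetic batch, ground-truth depth
            history.append("supervised")
        else:
            step_self_supervised()   # real batch, view-synthesis loss
            history.append("self-supervised")
    return history
```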

Further, we use the Adam optimizer [21], with separate initial learning rates for the image translation network, the depth estimation network, and the camera pose estimation network.

Our network was trained on an RTX 2080Ti GPU, and training took 2.6 hours per epoch. On average, our depth estimation network processes 33 frames per second during inference.

4.2 Monocular Depth Estimation on KITTI Dataset

We follow the procedure defined in [46] when evaluating on the KITTI dataset. First, the ground-truth depths are generated by projecting 3D LiDAR points onto the image plane, and the depth predictions are clipped at distances of 80m and 50m. The evaluation results are listed in Table 1, where all metrics are computed according to the evaluation strategy proposed in [10]. As shown in the table, our model has the best performance across all metrics. We believe this is because our model can synergize the merits of supervised depth estimation with domain adaptation and self-supervised depth estimation with real-world images. Typical synthetic supervised approaches train models using low-cost synthetic ground-truth depth, but suffer from unstable and inconsistent image translation, leading to less accurate, low-resolution translated images. On the other hand, self-supervised approaches can learn depth from high-resolution sequential images; however, these depths are learned from indirect cues which are sensitive to in-view object movements, occlusions, etc. Training the model with modified supervised and self-supervised constraints in our consolidated framework ensures that we exploit the best of both worlds, which ultimately leads to better prediction results.
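For reference, the metrics in Table 1 follow the standard definitions from [10]; a NumPy re-implementation sketch of our own (with an assumed depth cap applied to predictions) is:

```python
import numpy as np

def depth_metrics(pred, gt, cap=80.0):
    """Standard monocular-depth metrics from Eigen et al. [10]:
    Abs Rel, Sq Rel, RMSE, RMSE log, and delta < 1.25^k accuracies.
    pred/gt are flat arrays of depths in meters at valid pixels."""
    pred = np.clip(np.asarray(pred, dtype=float), 1e-3, cap)
    gt = np.asarray(gt, dtype=float)
    abs_rel = float(np.mean(np.abs(pred - gt) / gt))
    sq_rel = float(np.mean((pred - gt) ** 2 / gt))
    rmse = float(np.sqrt(np.mean((pred - gt) ** 2)))
    rmse_log = float(np.sqrt(np.mean((np.log(pred) - np.log(gt)) ** 2)))
    # Accuracy under thresholds: fraction of pixels whose ratio to the
    # ground truth (in either direction) is below 1.25^k.
    ratio = np.maximum(pred / gt, gt / pred)
    deltas = tuple(float(np.mean(ratio < 1.25 ** k)) for k in (1, 2, 3))
    return abs_rel, sq_rel, rmse, rmse_log, deltas
```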

Method Dataset Error-related metrics Accuracy-related metrics
Abs Rel Sq Rel RMSE RMSE log δ<1.25 δ<1.25^2 δ<1.25^3
depth capped at 80m
Zhou et al[49] K 0.183 1.595 6.709 0.270 0.734 0.902 0.959
Yin et al[44] K 0.155 1.296 5.857 0.233 0.793 0.931 0.973
Wang et al[37] K 0.151 1.257 5.583 0.228 0.810 0.936 0.974
Ramirez et al[28] K 0.143 2.161 6.526 0.222 0.850 0.939 0.972
Casser et al[5] K 0.141 1.026 5.290 0.215 0.816 0.945 0.979
Ranjan et al[29] K 0.140 1.070 5.326 0.217 0.826 0.941 0.975
Xu et al[43] K 0.138 1.016 5.352 0.216 0.823 0.943 0.976
Meng et al[25] K 0.133 0.905 5.181 0.208 0.825 0.947 0.981
Godard et al[16] (without ImageNet pre-training, for fair comparison) K 0.132 1.044 5.142 0.210 0.845 0.948 0.977
Zheng et al[46] K + V 0.174 1.410 6.046 0.253 0.754 0.916 0.966
Mou et al[27] K + V 0.145 1.058 5.291 0.215 0.816 0.941 0.977
Ours K + V 0.124 0.826 4.981 0.200 0.846 0.955 0.982
depth capped at 50m
Yin et al[44] K 0.147 0.936 4.348 0.218 0.810 0.941 0.977
Zheng et al[46] K + V 0.168 1.199 4.674 0.243 0.772 0.912 0.966
Mou et al[27] K + V 0.139 0.814 3.995 0.203 0.830 0.949 0.980
Ours K + V 0.118 0.615 3.710 0.187 0.862 0.962 0.984
Table 1: Monocular depth estimation on KITTI dataset with Eigen et al[10] split. The highlighted scores mark the best performance among selected models. In “Dataset” column, “K” and “V” stands for the KITTI and the vKITTI dataset, respectively.

In Fig. 3 we compare qualitative depth estimation results of the purely self-supervised GeoNet [44], the purely synthetic supervised T^2Net [46], and our proposed framework. Purely self-supervised approaches result in depth maps which are blurred and do not model depth discontinuities at object boundaries well. On the other hand, the purely synthetic supervised approach produces sharper depth maps, but because of imperfect domain translation it fails to predict depth for surfaces with multiple textures. For example, in the first row of Fig. 3, T^2Net predicts incorrect depth values for the wall on the right because the window on the wall adds additional texture. These defects severely limit the real-world application of purely self-supervised and synthetic supervised techniques. Our S^3Net, on the other hand, generates sharper depth maps than GeoNet and does not suffer from the problems discussed for T^2Net, further supporting our point about combining the best features of both.

In Fig. 4 we compare syn-to-real translated images from the T^2Net GAN and our semantically consistent GAN. Without a specific task loss, e.g. a depth estimation loss, the objective drives the image translator to generate a realistic interpretation of synthetic images. However, when a task loss is introduced, the main objective shifts to projecting synthetic images into a space that is optimized for the task. Therefore, some of these differences may not be visually perceivable but can lead to a large gain in performance. Our approach significantly reduces artifacts and successfully retains the semantic structure across synthetic and translated images.

Figure 3: Qualitative depth prediction results: Column (a) real-world images from KITTI; Columns (b), (c), (d) are results for GeoNet [44], T^2Net [46], and our S^3Net framework, respectively.
Figure 4: Translated images: Column (a) input synthetic images; Column (b) T^2Net; Column (c) our S^3Net GAN with semantic constraints.

4.2.1 Camera Pose Estimation

We train and evaluate our model’s performance on the KITTI odometry dataset. Our training follows the same strategy as in Section 4.1.3 - the only difference is in the real dataset we use. We train on sequences [“00”, “01”, … , “08”], and test on sequences “09” and “10” as per the KITTI odometry split. We use a sequence length of 3 with the same architecture that was used in other experiments. Our evaluation follows the same strategy as given in [49]. As shown in Table 2, our model outperforms the state-of-the-art approaches by a convincing margin.

Method # of snippets Seq. 09 Seq. 10
ORB-SLAM (full) 5
ORB-SLAM (short) 5
DDVO (Wang et al. [38]) 3
SfMLearner (Zhou et al. [49]) 5
SfMLearner [49], updated 5
GeoNet (Yin et al. [44]) 5
MonoDepth2* (Godard et al. [16]) 2
EPC++ (Luo et al. [23]) 3
Ours 3 0.0097 ± 0.0046 0.0099 ± 0.0071
Table 2: Absolute Trajectory Error (ATE) on the KITTI odometry dataset (mean ± std; lower is better).
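The ATE above is computed per snippet of 3 (or 5) frames after aligning the predicted trajectory with the ground truth. A minimal sketch of this snippet-level evaluation, assuming a least-squares scale alignment in the spirit of the SfMLearner evaluation protocol [49] (the function name `ate_snippet` is ours):

```python
import numpy as np

def ate_snippet(gt_xyz, pred_xyz):
    """Absolute Trajectory Error for one snippet of camera positions.

    gt_xyz, pred_xyz: (N, 3) arrays of ground-truth and predicted
    camera centers for the N frames of the snippet.
    """
    # Anchor both trajectories at the first frame.
    gt = gt_xyz - gt_xyz[0]
    pred = pred_xyz - pred_xyz[0]
    # Monocular predictions are scale-ambiguous: solve the
    # least-squares scale that best aligns pred to gt.
    scale = np.sum(gt * pred) / max(np.sum(pred ** 2), 1e-12)
    # RMSE of the remaining positional error.
    return np.sqrt(np.mean(np.sum((gt - scale * pred) ** 2, axis=1)))
```

With a perfect prediction up to scale, the error is zero regardless of the predicted scale, which is why this metric is meaningful for monocular systems.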

4.3 Generalization Study on Make3D Dataset

To show the generalization capability of our framework, we also test it on the Make3D dataset [35]. We use the model trained on the KITTI and vKITTI datasets and evaluate it on the Make3D test set, following the evaluation strategy in [17]. As shown in Table 3, our S^3Net outperforms all other existing self-supervised monocular approaches.

Method Train Abs Rel Sq Rel RMSE RMSE log
MonoDepth (Godard et al. [17]) No 0.544 10.940 11.760 0.193
SfMLearner (Zhou et al. [49]) No 0.383 5.321 10.470 0.478
T^2Net (Zheng et al. [46]) No 0.508 6.589 8.935 0.574
MonoDepth2 (Godard et al. [16]) No 0.322 3.589 7.417 0.163
TCDA (Mou et al. [27]) No 0.384 3.885 7.645 0.181
Ours (no semantic augmentation) No 0.372 5.699 7.844 0.176
Ours (with semantic augmentation) No 0.322 3.238 7.187 0.164
Table 3: Error metrics for depth estimation on the Make3D dataset. "Train" indicates whether the model was trained on the Make3D train set.
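The error metrics in Table 3 are the standard monocular depth measures. A minimal sketch of how they are typically computed; the median scaling and depth cap follow common practice for scale-ambiguous monocular evaluation, and the exact cap value here is an assumption:

```python
import numpy as np

def depth_metrics(gt, pred, cap=70.0):
    """Standard monocular depth errors over valid ground-truth pixels."""
    # Median scaling resolves the monocular scale ambiguity.
    pred = pred * np.median(gt) / np.median(pred)
    pred = np.clip(pred, 1e-3, cap)
    abs_rel = np.mean(np.abs(gt - pred) / gt)
    sq_rel = np.mean((gt - pred) ** 2 / gt)
    rmse = np.sqrt(np.mean((gt - pred) ** 2))
    rmse_log = np.sqrt(np.mean((np.log(gt) - np.log(pred)) ** 2))
    # Threshold accuracies delta < 1.25^k (the accuracy-related metrics).
    ratio = np.maximum(gt / pred, pred / gt)
    deltas = [float(np.mean(ratio < 1.25 ** k)) for k in (1, 2, 3)]
    return abs_rel, sq_rel, rmse, rmse_log, deltas
```

A perfect prediction (up to scale) gives zero for all four error metrics and 1.0 for each threshold accuracy.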

4.4 Ablation Study

In this subsection, we perform a set of ablation experiments on the KITTI dataset to discuss how each individual component in our framework contributes to the final performance. The evaluation results are reported in Table 4.

4.4.1 Synthetic Translated Supervised Depth Estimation

Due to the large domain gap between the synthetic and the real domain, a model trained only on synthetic data typically produces unacceptable depth predictions when tested on real data. Synthetic-to-real image translation is one of the most effective remedies for this issue. Even with a naive image translation network as proposed in [46], the depth predictions on real data improve by about 40%. The flow-guided photometric consistency and the semantic consistency constraint further regulate the image translation and improve our depth prediction accuracy by another 8.4% and 8%, respectively. Continuing to train the segmentation network gives better performance than freezing its parameters: the segmentation network is trained on synthetic semantic labels and cannot generalize to the translated domain without further training.

4.4.2 Synthetic Translated Supervised + Semantic Augmentation

We investigate the importance of semantic augmentation to our model by (i) augmenting the input images only for the depth estimation network, while keeping RGB images for the image translation network, and (ii) augmenting the input images for both the depth estimation network and the image translation network. Compared with the first strategy, the second introduces a larger improvement. We believe this is because the semantic information imposes additional constraints on object geometry; these constraints are useful for regulating the shape of objects and determining the depth prediction at object boundaries. Therefore, applying semantic augmentation to both the image translation network and the depth prediction network further boosts the depth prediction accuracy.
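Concretely, the semantic augmentation described above can be implemented by concatenating a per-pixel class encoding to the RGB input; a minimal sketch assuming a one-hot encoding (the paper's exact encoding may differ, and the function name is ours):

```python
import numpy as np

def augment_with_semantics(rgb, labels, num_classes):
    """Append one-hot semantic channels to an (H, W, 3) image.

    rgb:    (H, W, 3) float image.
    labels: (H, W) integer semantic-label map in [0, num_classes).
    """
    # Integer-array indexing into the identity matrix yields a
    # one-hot map of shape (H, W, num_classes).
    onehot = np.eye(num_classes, dtype=rgb.dtype)[labels]
    # Resulting network input has 3 + num_classes channels.
    return np.concatenate([rgb, onehot], axis=-1)
```

The downstream network's first convolution simply needs its input channel count widened from 3 to 3 + num_classes.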

4.4.3 Synthetic Translated Supervised + Real Self-Supervised

By adding photometric losses to real-world sequential images and jointly training the synthetic supervised and self-supervised depth estimation, the depth estimation accuracy on the real-world dataset improves by a further 11%. The real sequential images are typically clear and accurate, which compensates for the shortcomings of the imperfect translated images. However, the photometric losses for real-world sequential images assume that the displacement of pixels is caused purely by camera motion. This assumption does not always hold, and the direct supervision on the synthetic translated images helps alleviate the negative effect of violating it. Moreover, our study indicates that by selecting valid pixels and filtering out the pixels that violate the assumption, our model improves by a further noticeable margin. Finally, augmenting the input images with semantic labels yields an additional improvement.
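The valid-pixel selection described above is in the spirit of the auto-masking of MonoDepth2 [16]: a pixel contributes to the photometric loss only when warping the source frame explains it better than simply copying the unwarped source frame, which filters out static scenes and objects moving at the same speed as the camera. A minimal per-pixel sketch (function names are ours):

```python
import numpy as np

def automask(reprojection_err, identity_err):
    """Binary mask that keeps pixels where the warped reprojection
    error beats the identity (unwarped) reprojection error."""
    return (reprojection_err < identity_err).astype(np.float32)

def masked_photometric_loss(reprojection_err, identity_err):
    """Photometric loss averaged over the surviving pixels only."""
    mask = automask(reprojection_err, identity_err)
    return np.sum(mask * reprojection_err) / max(np.sum(mask), 1.0)
```

Pixels whose appearance is already explained without warping (e.g. a car moving with the camera) produce a lower identity error and are masked out, so they no longer drag the depth prediction toward infinity.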

Method Abs Rel Sq Rel RMSE RMSE log δ<1.25 δ<1.25² δ<1.25³
Synthetic Translated Supervised
Without synthetic-to-real image translation 0.278 3.216 6.268 0.322 0.681 0.854 0.929
Naive synthetic-to-real image translation 0.168 1.199 4.674 0.243 0.772 0.912 0.966
With flow-guided photometric consistency 0.1539 0.993 4.4492 0.2241 0.7986 0.9356 0.9752
With semantic consistency (frozen segmentation network) 0.1555 0.9680 4.7412 0.2324 0.7773 0.9245 0.9721
With semantic consistency 0.1544 0.9633 4.7422 0.2322 0.7786 0.9241 0.9727
Synthetic Translated Supervised with Semantic Augmentation
Semantic augmentation for depth estimation network input only 0.1532 0.9631 4.3872 0.2275 0.7945 0.9325 0.9738
Semantic augmentation for both the image translation & depth estimation networks 0.1455 0.8869 4.2177 0.2154 0.8133 0.9411 0.9773
Synthetic Translated Supervised + Real-world Self-Supervised
With self-supervised depth estimation 0.1292 0.6969 3.8399 0.1964 0.8428 0.9554 0.9826
With auto-masking 0.1198 0.6671 3.7696 0.1921 0.8637 0.9583 0.9819
With semantic augmentation 0.1183 0.6150 3.7105 0.1876 0.8620 0.9622 0.9844
Table 4: Performance gain in depth estimation from different model components. The predicted depth is capped at 50 m.

5 Conclusion and Next Steps

In this paper, we present a framework for monocular depth estimation which combines the features of both synthetic images and real video frames in a novel semantic-aware, self-supervised setting. The complexity of our model does not affect its scalability, as we only require the depth network at inference time. We outperform all existing approaches on the KITTI benchmark as well as in our generalization study on the Make3D dataset. These factors contribute to the increased accuracy, scalability, and robustness of our framework compared to existing approaches. Our framework extends typical dataset-specific models to improve generalization performance, making it more relevant for real-world applications. In the future, we plan to explore strategies for applying similar frameworks to other related tasks in visual perception.


  • [1] A. Atapour-Abarghouei and T. P. Breckon (2018) Real-time monocular depth estimation using synthetic data with domain adaptation via image style transfer. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2800–2810.
  • [2] J. Bian, Z. Li, N. Wang, H. Zhan, C. Shen, M. Cheng, and I. Reid (2019) Unsupervised scale-consistent depth and ego-motion learning from monocular video. In Advances in Neural Information Processing Systems, pp. 35–45.
  • [3] K. Bousmalis, N. Silberman, D. Dohan, D. Erhan, and D. Krishnan (2017) Unsupervised pixel-level domain adaptation with generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3722–3731.
  • [4] Y. Cao, Z. Wu, and C. Shen (2017) Estimating depth from monocular images as classification using deep fully convolutional residual networks. IEEE Transactions on Circuits and Systems for Video Technology 28 (11), pp. 3174–3182.
  • [5] V. Casser, S. Pirk, R. Mahjourian, and A. Angelova (2018) Depth prediction without the sensors: leveraging structure for unsupervised learning from monocular videos. arXiv preprint arXiv:1811.06152.
  • [6] L. Chen, Y. Zhu, G. Papandreou, F. Schroff, and H. Adam (2018) Encoder-decoder with atrous separable convolution for semantic image segmentation. In ECCV.
  • [7] P. Chen, A. H. Liu, Y. Liu, and Y. F. Wang (2019) Towards scene understanding: unsupervised monocular depth estimation with semantic-aware representation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2624–2632.
  • [8] W. Chen, Z. Fu, D. Yang, and J. Deng (2016) Single-image depth perception in the wild. In Advances in Neural Information Processing Systems, pp. 730–738.
  • [9] A. Cherian and A. Sullivan (2018) Sem-GAN: semantically-consistent image-to-image translation. CoRR abs/1807.04409.
  • [10] D. Eigen, C. Puhrsch, and R. Fergus (2014) Depth map prediction from a single image using a multi-scale deep network. In Advances in Neural Information Processing Systems, pp. 2366–2374.
  • [11] H. Fu, M. Gong, C. Wang, K. Batmanghelich, and D. Tao (2018) Deep ordinal regression network for monocular depth estimation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  • [12] A. Gaidon, Q. Wang, Y. Cabon, and E. Vig (2016) Virtual worlds as proxy for multi-object tracking analysis. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4340–4349.
  • [13] Y. Ganin and V. Lempitsky (2014) Unsupervised domain adaptation by backpropagation. arXiv preprint arXiv:1409.7495.
  • [14] Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. Lempitsky (2016) Domain-adversarial training of neural networks. The Journal of Machine Learning Research 17 (1), pp. 2096–2030.
  • [15] R. Garg, G. Vijay Kumar B., and I. D. Reid (2016) Unsupervised CNN for single view depth estimation: geometry to the rescue. In European Conference on Computer Vision (ECCV).
  • [16] C. Godard, O. Mac Aodha, M. Firman, and G. J. Brostow (2019) Digging into self-supervised monocular depth estimation. In Proceedings of the IEEE International Conference on Computer Vision, pp. 3828–3838.
  • [17] C. Godard, O. Mac Aodha, and G. J. Brostow (2017) Unsupervised monocular depth estimation with left-right consistency. In CVPR.
  • [18] A. Gordon, H. Li, R. Jonschkowski, and A. Angelova (2019) Depth from videos in the wild: unsupervised monocular depth learning from unknown cameras. arXiv preprint arXiv:1904.04998.
  • [19] L. He, G. Wang, and Z. Hu (2018) Learning depth from single images with deep neural network embedding focal length. IEEE Transactions on Image Processing 27 (9), pp. 4676–4689.
  • [20] J. Y. Jason, A. W. Harley, and K. G. Derpanis (2016) Back to basics: unsupervised learning of optical flow via brightness constancy and motion smoothness. In European Conference on Computer Vision (ECCV), pp. 3–10.
  • [21] D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980.
  • [22] F. Liu, C. Shen, G. Lin, and I. Reid (2016) Learning depth from single monocular images using deep convolutional neural fields. IEEE Transactions on Pattern Analysis and Machine Intelligence 38 (10), pp. 2024–2039.
  • [23] C. Luo, Z. Yang, P. Wang, Y. Wang, W. Xu, R. Nevatia, and A. Yuille (2018) Every pixel counts++: joint learning of geometry and motion with 3D holistic understanding. arXiv preprint arXiv:1810.06125.
  • [24] R. Mahjourian, M. Wicke, and A. Angelova (2018) Unsupervised learning of depth and ego-motion from monocular video using 3D geometric constraints. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  • [25] Y. Meng, Y. Lu, A. Raj, S. Sunarjo, R. Guo, T. Javidi, G. Bansal, and D. Bharadia (2019) SIGNet: semantic instance aided unsupervised 3D geometry perception. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9810–9820.
  • [26] M. Menze and A. Geiger (2015) Object scene flow for autonomous vehicles. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3061–3070.
  • [27] Y. Mou, M. Gong, H. Fu, K. Batmanghelich, K. Zhang, and D. Tao (2019) Learning depth from monocular videos using synthetic data: a temporally-consistent domain adaptation approach. arXiv preprint arXiv:1907.06882.
  • [28] P. Z. Ramirez, M. Poggi, F. Tosi, S. Mattoccia, and L. D. Stefano (2018) Geometry meets semantics for semi-supervised monocular depth estimation. arXiv preprint arXiv:1810.04093.
  • [29] A. Ranjan, V. Jampani, L. Balles, K. Kim, D. Sun, J. Wulff, and M. J. Black (2018) Competitive collaboration: joint unsupervised learning of depth, camera motion, optical flow and motion segmentation. arXiv preprint arXiv:1805.09806.
  • [30] Z. Ren, J. Yan, B. Ni, B. Liu, X. Yang, and H. Zha (2017) Unsupervised deep learning for optical flow estimation. In AAAI.
  • [31] V. K. Repala and S. R. Dubey (2018) Dual CNN models for unsupervised monocular depth estimation. arXiv preprint arXiv:1804.06324.
  • [32] A. Roy and S. Todorovic (2016) Monocular depth estimation using neural regression forest. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5506–5514.
  • [33] S. Sankaranarayanan, Y. Balaji, C. D. Castillo, and R. Chellappa (2018) Generate to adapt: aligning domains using generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8503–8512.
  • [34] S. Sankaranarayanan, Y. Balaji, A. Jain, S. Nam Lim, and R. Chellappa (2018) Learning from synthetic data: addressing domain shift for semantic segmentation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  • [35] A. Saxena, M. Sun, and A. Y. Ng (2008) Make3D: learning 3D scene structure from a single still image. IEEE Transactions on Pattern Analysis and Machine Intelligence 31 (5), pp. 824–840.
  • [36] S. Vijayanarasimhan, S. Ricco, C. Schmid, R. Sukthankar, and K. Fragkiadaki (2017) SfM-Net: learning of structure and motion from video. arXiv preprint arXiv:1704.07804.
  • [37] C. Wang, J. Miguel Buenaposada, R. Zhu, and S. Lucey (2018) Learning depth from monocular videos using direct methods. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  • [38] C. Wang, J. Miguel Buenaposada, R. Zhu, and S. Lucey (2018) Learning depth from monocular videos using direct methods. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2022–2030.
  • [39] K. Xian, C. Shen, Z. Cao, H. Lu, Y. Xiao, R. Li, and Z. Luo (2018) Monocular relative depth perception with web stereo data supervision. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  • [40] D. Xu, E. Ricci, W. Ouyang, X. Wang, and N. Sebe (2017) Multi-scale continuous CRFs as sequential deep networks for monocular depth estimation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  • [41] D. Xu, W. Wang, H. Tang, H. Liu, N. Sebe, and E. Ricci (2018) Structured attention guided convolutional neural fields for monocular depth estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3917–3925.
  • [42] D. Xu, W. Wang, H. Tang, H. Liu, N. Sebe, and E. Ricci (2018) Structured attention guided convolutional neural fields for monocular depth estimation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  • [43] H. Xu, J. Zheng, J. Cai, and J. Zhang (2019) Region deformer networks for unsupervised depth estimation from unconstrained monocular videos. arXiv preprint arXiv:1902.09907.
  • [44] Z. Yin and J. Shi (2018) GeoNet: unsupervised learning of dense depth, optical flow and camera pose. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1983–1992.
  • [45] Z. Zhang, A. G. Schwing, S. Fidler, and R. Urtasun (2015) Monocular object instance segmentation and depth ordering with CNNs. In 2015 IEEE International Conference on Computer Vision (ICCV), pp. 2614–2622.
  • [46] C. Zheng, T. Cham, and J. Cai (2018) T^2Net: synthetic-to-realistic translation for solving single-image depth estimation tasks. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 767–783.
  • [47] C. Zheng, T. Cham, and J. Cai (2018) T^2Net: synthetic-to-realistic translation for solving single-image depth estimation tasks. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 767–783.
  • [48] J. Zhou, Y. Wang, K. Qin, and W. Zeng (2019) Unsupervised high-resolution depth learning from videos with dual networks. arXiv preprint arXiv:1910.08897.
  • [49] T. Zhou, M. Brown, N. Snavely, and D. G. Lowe (2017) Unsupervised learning of depth and ego-motion from video. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  • [50] T. Zhou, M. Brown, N. Snavely, and D. G. Lowe (2017) Unsupervised learning of depth and ego-motion from video. In CVPR.
  • [51] J. Zhu, T. Park, P. Isola, and A. A. Efros (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2223–2232.