Unsupervised Depth Learning in Challenging Indoor Video: Weak Rectification to Rescue

06/04/2020 · by Jia-Wang Bian, et al.

Single-view depth estimation using CNNs trained from unlabelled videos has shown significant promise. However, excellent results have mostly been obtained in street-scene driving scenarios, and such methods often fail in other settings, particularly indoor videos taken by handheld devices, in which the ego-motion is often degenerate, i.e., the rotation dominates the translation. In this work, we establish that the degenerate camera motions exhibited in handheld settings are a critical obstacle for unsupervised depth learning. A main contribution of our work is a fundamental analysis showing that the rotation behaves as noise during training, whereas the translation (baseline) provides supervision signals. To capitalise on our findings, we propose a novel data pre-processing method for effective training, i.e., we search for image pairs with modest translation and remove their rotation via the proposed weak image rectification. With our pre-processing, existing unsupervised models can be trained well in challenging scenarios (e.g., the NYUv2 dataset), and the results outperform the unsupervised SOTA by a large margin (0.147 vs. 0.189 in the AbsRel error).


Code repository: Unsupervised-Indoor-Depth

1 Introduction

Inferring 3D geometry from 2D images is a long-standing problem in robotics and computer vision. Depending on the specific use case, it is usually solved by Structure-from-Motion Schonberger and Frahm (2016) or Visual SLAM Davison et al. (2007); Newcombe et al. (2011); Mur-Artal et al. (2015). Underpinning these traditional pipelines is the search for correspondences Lowe (2004); Bian et al. (2020) across multiple images, which are triangulated via epipolar geometry Zhang (1998); Hartley and Zisserman (2003); Bian et al. (2019b) to obtain 3D points. With the growth of deep learning, Eigen et al. (2014) showed that a depth map can be inferred from a single color image by a CNN trained with ground-truth depth supervision captured by range sensors. Subsequently, a series of supervised methods Liu et al. (2016); Eigen and Fergus (2015); Chakrabarti et al. (2016); Laina et al. (2016); Li et al. (2017); Fu et al. (2018); Yin et al. (2019) have been proposed, and the accuracy of estimated depth has progressively improved.

Based on epipolar geometry Zhang (1998); Hartley and Zisserman (2003); Bian et al. (2019b), learning depth without ground-truth supervision has been explored. Garg et al. (2016) showed that a single-view depth CNN can be trained from stereo image pairs with a known baseline via a photometric loss. Zhou et al. (2017) further explored the unsupervised framework and proposed to train the depth CNN from unlabelled videos; they additionally introduced a Pose CNN to estimate the relative camera pose between consecutive frames, still using the photometric loss for supervision. Following that, a number of unsupervised methods have been proposed, which can be categorised into stereo-based Godard et al. (2017); Zhan et al. (2018, 2019); Watson et al. (2019) and video-based Wang et al. (2018); Mahjourian et al. (2018); Yin and Shi (2018); Zou et al. (2018); Ranjan et al. (2019); Godard et al. (2019); Gordon et al. (2019); Chen et al. (2019); Zhou et al. (2019b); Bian et al. (2019a) approaches, according to the type of training data. Our work follows the latter paradigm, since unlabelled videos are easier to obtain in real-world scenarios.

Unsupervised methods have shown promising results in driving scenes, e.g., KITTI Geiger et al. (2013) and Cityscapes Cordts et al. (2016). However, as reported in Zhou et al. (2019a), they usually fail in more generic scenarios such as the indoor scenes of the NYUv2 dataset Silberman et al. (2012). For example, GeoNet Yin and Shi (2018), which achieves state-of-the-art performance on KITTI, is unable to obtain reasonable results on NYUv2. To address this, Zhou et al. (2019a) propose to use optical flow as the supervision signal for training the depth CNN, and the very recent Zhao et al. (2020) uses optical flow to estimate the ego-motion in place of the Pose CNN. However, the reported depth accuracy Zhao et al. (2020) is still limited, i.e., 0.189 in terms of AbsRel; see also the qualitative results in Fig. 3.

Our work investigates the fundamental reasons behind the poor results of unsupervised depth learning in indoor scenes. In addition to the usual challenges such as non-Lambertian surfaces and low-texture scenes, we identify the camera motion profile of the training videos as a critical factor affecting the training process. To develop this insight, we conduct an in-depth analysis of the effects of camera pose on the current unsupervised depth learning framework. Our analysis shows that (i) fundamentally, the camera rotation behaves as noise for training, while the translation contributes effective gradients; and (ii) the rotation component dominates the translation component in indoor videos captured with handheld cameras, while the opposite is true in autonomous driving scenarios.

To capitalise on our findings, we propose a novel data pre-processing method for unsupervised depth learning. Our analysis (described in Sec. 2.3) indicates that image pairs with small relative camera rotation and moderate translation should be favoured. Therefore, we search for image pairs that fall into our defined translation range, and we weakly rectify the selected pairs to remove their relative rotation. Note that the processing requires no ground-truth depth or camera poses. With our proposed data pre-processing, we demonstrate that existing state-of-the-art (SOTA) unsupervised methods can be trained well on the challenging indoor NYUv2 dataset Silberman et al. (2012). The results outperform the unsupervised SOTA Zhao et al. (2020) by a large margin (0.147 vs. 0.189 in the AbsRel error).

To summarize, our main contributions are three-fold:

  • We theoretically analyze the effects of camera motion on current unsupervised frameworks for depth learning, and reveal that the camera rotation behaves as noise for training depth CNNs, while the translation contributes effective supervision.

  • We measure the distribution of camera motions in different scenarios, which, together with the analysis above, helps answer why it is challenging to train unsupervised depth CNNs on indoor videos captured with handheld cameras.

  • We propose a novel method to select and weakly rectify image pairs for better training. It enables existing unsupervised methods to achieve results competitive with many supervised methods on the challenging NYUv2 dataset.

2 Analysis

We first overview the unsupervised framework for depth learning. Then, we revisit the depth and camera pose based image warping and demonstrate the relationship between camera motion and depth network training. Finally, we compare the statistics of camera motions in different datasets to verify the impact of camera motion on depth learning.

2.1 Overview of video-based unsupervised depth learning framework

Following SfMLearner Zhou et al. (2017), many video-based unsupervised frameworks for depth estimation have been proposed. SC-SfMLearner Bian et al. (2019a), the current SOTA framework, additionally constrains the geometry consistency over Zhou et al. (2017), leading to more accurate and scale-consistent results. In this paper, we use SC-SfMLearner as our framework and overview its pipeline in Fig. 1.

Forward.

A training image pair $(I_a, I_b)$ is first passed into a weight-shared depth CNN to obtain the depth maps $(D_a, D_b)$, respectively. Then, the pose CNN takes the concatenation of the two images as input and predicts their 6-DoF relative camera pose $P_{ab}$. With the predicted depth and pose, the warping flow between the two images is generated as described in Sec. 2.2.

Loss.

First, the main supervision signal is the photometric loss $L_P$. It measures the per-pixel color difference between $I_a$ and its warped counterpart sampled from $I_b$ using differentiable bilinear interpolation Jaderberg et al. (2015). Second, the depth maps are regularized by the geometry consistency loss $L_G$, which enforces the consistency of predicted depths across frames. Besides, a weighting mask $M$ is derived from the depth inconsistency to handle dynamics and occlusions, and it is applied on $L_P$ to obtain the weighted loss $L_P^M$. Third, the depth maps are also regularized by a smoothness loss $L_S$, which ensures that depth smoothness is guided by image edges. Overall, the objective function is:

$$L = \alpha L_P^M + \beta L_G + \gamma L_S, \qquad (1)$$

where $\alpha$, $\beta$, and $\gamma$ are hyper-parameters that balance the different losses.
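For illustration, a minimal NumPy sketch of how the geometry consistency term, its derived mask, and the weighted objective of Eqn. 1 fit together. The inconsistency/mask form follows SC-SfMLearner's published formulation; the default loss weights and the function names here are placeholders, not the framework's actual hyper-parameters.

```python
import numpy as np

def geometry_consistency(d_warped, d_target):
    # Per-pixel depth inconsistency, normalized to [0, 1), and the derived
    # weighting mask M = 1 - diff that down-weights dynamic/occluded pixels.
    diff = np.abs(d_warped - d_target) / (d_warped + d_target)
    mask = 1.0 - diff
    return diff, mask

def total_loss(photo, geo_diff, smooth, mask, alpha=1.0, beta=0.5, gamma=0.1):
    # Eqn. 1: L = alpha * L_P^M + beta * L_G + gamma * L_S.
    weighted_photo = float(np.mean(mask * photo))   # L_P^M
    geo = float(np.mean(geo_diff))                  # L_G
    return alpha * weighted_photo + beta * geo + gamma * smooth
```

When the two depth predictions agree perfectly, the inconsistency vanishes, the mask is all ones, and the objective reduces to the unweighted photometric term plus the smoothness regularizer.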

Figure 1: Overview of SC-SfMLearner Bian et al. (2019a). Firstly, in the forward pass, training images $(I_a, I_b)$ are passed into the network to predict depth maps $(D_a, D_b)$ and the relative camera pose $P_{ab}$. With the depths and pose, we obtain the warping flow between the two views according to Eqn. 2. Secondly, given the warping flow, the photometric loss $L_P$ and the geometry consistency loss $L_G$ are computed. Also, the weighting mask $M$ is derived from the depth inconsistency and applied over $L_P$ to handle dynamics and occlusions. Moreover, an edge-aware smoothness loss $L_S$ is used to regularize the predicted depth maps. See Bian et al. (2019a) for more details.

2.2 Depth and camera pose based image warping

The image warping builds the link between the networks and the losses during training: the warping flow is generated from the network predictions (depth and camera motion) in the forward pass, and gradients are back-propagated from the losses to the networks via the warping flow. Therefore, we analyze the effects of camera pose on depth learning through the warping itself, which avoids involving image-content factors such as illumination changes and low-texture scenes.

Full transformation.

The camera pose is composed of rotation and translation components. For a point $(x, y)$ in the first image that is warped to $(x', y')$ in the second image, the warping satisfies:

$$d' \, [x', y', 1]^T = K R K^{-1} d \, [x, y, 1]^T + K t, \qquad (2)$$

where $d$ and $d'$ are the depths of this point in the two views, and $K$ is the 3x3 camera intrinsic matrix. $R$ is a 3x3 rotation matrix and $t$ is a 3x1 translation vector. We decompose the full warping flow and discuss each component below.

Pure-rotation transformation.

If two images are related by a pure-rotation transformation (i.e., $t = 0$), then based on Eqn. 2, the warping satisfies:

$$\frac{d'}{d} \, [x', y', 1]^T = H \, [x, y, 1]^T, \qquad (3)$$

where $H = K R K^{-1}$ is known as the homography matrix Hartley and Zisserman (2003), and we have

$$\lambda \begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}, \qquad (4)$$

where $\lambda = d'/d$, standing for the depth relation between the two views, is determined by the third row of the above equation, i.e., $\lambda = h_{31} x + h_{32} y + h_{33}$. This indicates that we can obtain $(x', y')$ without knowing the depth. Specifically, solving the above equation, we have

$$x' = \frac{h_{11} x + h_{12} y + h_{13}}{h_{31} x + h_{32} y + h_{33}}, \qquad y' = \frac{h_{21} x + h_{22} y + h_{23}}{h_{31} x + h_{32} y + h_{33}}. \qquad (5)$$
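The depth independence of pure-rotation warping can be made concrete in a few lines of NumPy. The sketch below (function name and intrinsics are illustrative, not the paper's code) implements Eqn. 5; note that it takes no depth argument at all.

```python
import numpy as np

def rotational_warp(x, y, K, R):
    # Eqn. 5: warp a pixel under pure rotation via the homography H = K R K^{-1}.
    # No depth value enters the computation.
    H = K @ R @ np.linalg.inv(K)
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

# Illustrative pinhole intrinsics (focal length 500 px, principal point (320, 240)).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
```

With the identity rotation the homography is the identity and every pixel maps to itself, regardless of the (absent) scene depth.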

This demonstrates that the rotational flow in image warping is independent of the depth; it is determined only by $R$ and $K$. Consequently, the rotational motion in image pairs cannot contribute effective gradients to supervise the depth CNN during training, even when it is correctly estimated. More importantly, if the estimated rotation is inaccurate¹, noisy gradients will arise and harm the depth CNN during backpropagation. Therefore, we conclude that rotational motion behaves as noise for unsupervised depth learning.

¹Related work Zhou et al. (2017); Yin and Shi (2018); Ranjan et al. (2019); Godard et al. (2019); Bian et al. (2019a) shows that the Pose CNN enables more accurate translation estimation than ORB-SLAM Mur-Artal et al. (2015), but its predicted rotation is much worse than the latter's, as demonstrated in Bian et al. (2019a); Zhan et al. (2020).

Pure-translation transformation.

A pure-translation transformation means that $R$ is an identity matrix in Eqn. 2. Then we have

$$d' \begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = d \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} + \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} t_x \\ t_y \\ t_z \end{bmatrix}, \qquad (6)$$

where $(f_x, f_y)$ are the camera focal lengths and $(c_x, c_y)$ are the principal point offsets. Solving the above equation, we have

$$x' = \frac{d x + f_x t_x + c_x t_z}{d + t_z}, \qquad y' = \frac{d y + f_y t_y + c_y t_z}{d + t_z}. \qquad (7)$$

It shows that the translation vector $t$ is coupled with the depth $d$ during the warping from $(x, y)$ to $(x', y')$. This builds the link between the depth CNN and the warping, so that gradients from the photometric loss can flow to the depth CNN via the warping. Therefore, we conclude that translational motion provides effective supervision signals for depth learning.
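In contrast to the pure-rotation case, a sketch of the pure-translation warp of Eqn. 7 necessarily takes the depth as input; the helper and intrinsics below are illustrative, not the paper's code.

```python
import numpy as np

def translational_warp(x, y, d, K, t):
    # Eqn. 7: warp under pure translation; the depth d appears explicitly,
    # so the warp (and hence the photometric loss) depends on the depth.
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    tx, ty, tz = t
    x2 = (d * x + fx * tx + cx * tz) / (d + tz)
    y2 = (d * y + fy * ty + cy * tz) / (d + tz)
    return x2, y2

# Illustrative pinhole intrinsics.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
```

Changing the depth changes the warped position, which is exactly the coupling that lets the photometric loss supervise the depth CNN.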

2.3 Distribution of decomposed camera motions in different scenarios

Inter-frame camera motions and warping flows.

Fig. 2(a) shows the camera motion statistics on the KITTI Geiger et al. (2013) and NYUv2 Silberman et al. (2012) datasets. KITTI is pre-processed by removing static images, as done in Zhou et al. (2017); Bian et al. (2019a). We pick one image of every 10 frames in NYUv2, which is denoted as Original NYUv2. Then we apply the proposed pre-processing (Sec. 3) to obtain Rectified NYUv2. For all datasets, we compare the decomposed camera poses of the training image pairs w.r.t. the absolute magnitude and the inter-frame warping flow². Specifically, we compute the averaged warping flow of randomly sampled points in the first image using the ground-truth depth and pose. For each point $(x, y)$ that is warped to $(x', y')$, the flow magnitude is $\sqrt{(x' - x)^2 + (y' - y)^2}$. Fig. 2(a) shows that the rotational flow dominates the translational flow in Original NYUv2, while the opposite holds in KITTI. Along with the conclusion in Sec. 2.2 that the depth is supervised by the translation while the rotation behaves as noise, this answers why unsupervised depth learning methods that obtain state-of-the-art results in driving scenes often fail on indoor videos. Besides, the results on Rectified NYUv2 demonstrate that our proposed data pre-processing can address this issue.

²We first compute the rotational flow using Eqn. 5, and then obtain the translational flow by subtracting the rotational flow from the overall warping flow. The translational flow is also called residual parallax in Li et al. (2020), where it is used to compute depth from correspondences and relative camera poses.
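The decomposition used in Fig. 2(a) can be sketched as follows: compute the full warp from Eqn. 2, the rotational warp from Eqn. 5, and take the translational flow as their residual. Function and variable names here are ours, not the paper's.

```python
import numpy as np

def flow_decomposition(points, depths, K, R, t):
    """Average rotational and translational warping-flow magnitudes (px)
    over sampled points; translational flow = full flow - rotational flow."""
    Kinv = np.linalg.inv(K)
    H = K @ R @ Kinv                       # rotational homography (Eqn. 5)
    rot_mags, trans_mags = [], []
    for (x, y), d in zip(points, depths):
        p = np.array([x, y, 1.0])
        # Full warp: d' p' = K (R K^{-1} d p + t)   (Eqn. 2)
        q = K @ (R @ (Kinv @ (d * p)) + t)
        full = q[:2] / q[2] - p[:2]
        # Rotation-only warp (depth-independent).
        r = H @ p
        rot = r[:2] / r[2] - p[:2]
        rot_mags.append(np.linalg.norm(rot))
        trans_mags.append(np.linalg.norm(full - rot))
    return float(np.mean(rot_mags)), float(np.mean(trans_mags))
```

A pure translation yields zero rotational flow, and a pure rotation yields zero translational (residual) flow, matching the decomposition in the footnote above.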

Warping error sensitivity to depth error.

Besides the above statistics, we investigate the relation between warping error and depth error. As the network is supervised via the warping, we expect the warping error (in pixels) to be sensitive to depth errors. For this investigation, we manually corrupt the depths of randomly sampled points and then analyze their warping errors in all datasets. Fig. 2(b) shows the results: the warping error in Original NYUv2 is substantially smaller than that in KITTI when the sampled points have the same relative depth error. This indicates another challenge of indoor videos compared with driving scenes. The issue arises because the sensitivity decreases significantly when the camera translation is small. Formally, when $t$ is close to $0$, based on Eqn. 7, we have:

$$(x', y') \to (x, y), \qquad (8)$$

regardless of the value of $d$. This makes it hard for the warping error to separate accurate from inaccurate depth estimates, confusing the depth CNN. We address this issue by translation-based image pairing (see Sec. 3.1). The results on Rectified NYUv2 demonstrate the efficacy of our proposed method.
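The sensitivity argument can be checked numerically with a small sketch (illustrative values, pure translation assumed; the helper name is ours): the warping error caused by a fixed relative depth error shrinks as the translation shrinks, and vanishes at zero translation.

```python
import numpy as np

def warp_error_from_depth_error(x, y, d, rel_err, K, t):
    # Warping error (px) caused by a relative depth error, under pure
    # translation (Eqn. 7).
    def warp(depth):
        fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
        tx, ty, tz = t
        return np.array([(depth * x + fx * tx + cx * tz) / (depth + tz),
                         (depth * y + fy * ty + cy * tz) / (depth + tz)])
    return float(np.linalg.norm(warp(d * (1.0 + rel_err)) - warp(d)))
```

With a 20% depth error, shrinking the translation by two orders of magnitude shrinks the warping error by the same factor, so small-baseline pairs barely penalize wrong depths.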

Figure 2: Camera motion statistics (a) and warping error sensitivity (b) on KITTI Geiger et al. (2013), Original NYUv2 Silberman et al. (2012), and Rectified NYUv2. "Rectified" stands for the proposed pre-processing described in Sec. 3. In (a), the first row shows the averaged magnitudes of the camera poses, i.e., R for rotation and T for translation, and the plots show the distribution of decomposed warping flow magnitudes (px) over randomly sampled points. In (b), we manually corrupt the depths of randomly sampled points using the ground-truth depths to investigate the warping errors. Note the different scales on the vertical axes.

3 Proposed data processing

The above analysis suggests that unsupervised depth learning frameworks favour training image pairs that have small rotational and sufficient translational motion. However, unlike driving sequences, videos captured by handheld cameras tend to exhibit more rotational and less translational motion, as shown in Fig. 2. In this section, we describe the proposed method to select image pairs with appropriate translation (Sec. 3.1) and to reduce the rotation of the selected pairs (Sec. 3.2).

3.1 Translation-based image pairing

For high frame rate videos such as NYUv2 Silberman et al. (2012), we first downsample the raw videos temporally to remove redundant images, i.e., we extract one key frame from every 10 frames (Sec. 2.3). The resulting data is denoted as Original NYUv2 in all experiments. Then, instead of only considering adjacent frames as a pair, we pair up each image with each of its several following key frames. For each image pair candidate, we compute the relative camera pose by searching for feature correspondences and using epipolar geometry Hartley and Zisserman (2003); Bian et al. (2019b). As the estimated pose is up-to-scale Hartley and Zisserman (2003), we use the translational flow (the same measure as in Fig. 2(a)) instead of the absolute translation distance for pairing. No ground-truth data is required in the proposed method.

First, we generate correspondences using SIFT Lowe (2004) features. Then we apply the ratio test Lowe (2004) and GMS Bian et al. (2020) to keep good matches. Second, with the selected correspondences, we estimate the essential matrix using the five-point algorithm Nistér (2004) within a RANSAC Fischler and Bolles (1981) framework, and then recover the relative camera pose. Third, for each image pair candidate, we compute the averaged magnitude of the translational flow over all inlier correspondences, the same measure as in Fig. 2(a). Based on the distribution of warping flows on KITTI, which serves as a good reference, we empirically set an expected flow range in pixels. Out-of-range pairs are removed.
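Once correspondences and an up-to-scale pose have been recovered, the pairing test itself reduces to a few lines. The sketch below implements only the translational-flow criterion; the `flow_range` bounds are illustrative placeholders, not the paper's empirical values, and the function name is ours.

```python
import numpy as np

def keep_pair(corrs, K, R, flow_range=(20.0, 100.0)):
    """Keep an image pair if the average translational-flow magnitude over
    inlier correspondences ((x, y) -> (x2, y2)) falls inside flow_range.
    flow_range values here are illustrative placeholders."""
    H = K @ R @ np.linalg.inv(K)   # rotational homography (Eqn. 5)
    mags = []
    for (x, y), (x2, y2) in corrs:
        r = H @ np.array([x, y, 1.0])
        rx, ry = r[0] / r[2], r[1] / r[2]       # rotation-only warp of (x, y)
        mags.append(np.hypot(x2 - rx, y2 - ry))  # residual parallax
    avg = float(np.mean(mags))
    return flow_range[0] <= avg <= flow_range[1], avg
```

Subtracting the rotation-only warp leaves the residual parallax, so the criterion depends on the (up-to-scale) translation only, as intended.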

Although running Structure-from-Motion (e.g., COLMAP Schonberger and Frahm (2016)) or VSLAM (e.g., ORB-SLAM Mur-Artal et al. (2015)) to compute relative camera poses is also possible, we argue that it is overkill for our problem. More importantly, these pipelines are often brittle, especially when processing videos with pure rotational motion and low-texture content Parra Bustos et al. (2019). Compared with them, our method does not require a 3D map, and hence avoids issues such as incomplete reconstruction and tracking loss.

3.2 3-DoF weak rectification

In order to remove the rotational motion of the selected pairs, we propose a weak rectification method. It warps the two images to a common plane using the pre-computed rotation matrix $R$. Specifically, (i) we first convert $R$ to a rotation vector $r$ using the Rodrigues formula Trucco and Verri (1998), obtain the half rotation vectors for the two images (i.e., $r/2$ and $-r/2$), and then convert them back to rotation matrices $R_1$ and $R_2$. (ii) Given $R_1$, $R_2$, and the camera intrinsics $K$, we warp the images to a new common image plane according to Eqn. 5. In the common plane, we crop the overlapping rectangular regions to obtain the weakly rectified pair. See the Matlab pseudo code in the supplementary material.
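Step (i) can be sketched in NumPy using the standard Rodrigues formulas; the helper names are ours, and assigning $r/2$ and $-r/2$ to the two images is one reasonable convention. Because the two half rotations share the same axis, they compose back to $R$ and are exact inverses of each other.

```python
import numpy as np

def rodrigues(v):
    # Rotation vector -> rotation matrix (Rodrigues' formula).
    theta = np.linalg.norm(v)
    if theta < 1e-12:
        return np.eye(3)
    k = v / theta
    Kx = np.array([[0.0, -k[2], k[1]],
                   [k[2], 0.0, -k[0]],
                   [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * Kx + (1.0 - np.cos(theta)) * (Kx @ Kx)

def inv_rodrigues(R):
    # Rotation matrix -> rotation vector (assumes rotation angle < pi).
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if theta < 1e-12:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta / (2.0 * np.sin(theta)) * w

def half_rotations(R):
    # Split the relative rotation R into two half rotations (r/2 and -r/2),
    # one applied to each image, so the warped pair has no relative rotation.
    r = inv_rodrigues(R)
    R_half = rodrigues(r / 2.0)
    return R_half, R_half.T
```

Each half rotation is then turned into a homography $K R_i K^{-1}$ (Eqn. 5) to warp its image onto the common plane.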

Compared with the standard rectification Fusiello et al. (2000), our method uses only the rotation $R$ for image warping and deliberately ignores the translation $t$, so our weakly rectified pairs have 3-DoF translational motion, while rigorously rectified pairs have 1-DoF translational motion, i.e., corresponding points have identical vertical coordinates. The reason is that our input setting (temporal frames from arbitrary-motion videos vs. left and right images from two horizontally displaced cameras) and our purpose (depth learning vs. stereo matching) both differ from the latter.

On the one hand, due to the rigorous 1-DoF requirement in stereo matching, the standard rectification Fusiello et al. (2000) suffers on forward-motion pairs, where the epipoles lie inside the image and cause heavy deformation, e.g., resulting in extremely large images Fusiello et al. (2000). Although polar rectification Pollefeys et al. (1999) can mitigate the issue to some extent, the results are still deformed. This issue is avoided in our 3-DoF weak rectification, as we do not constrain the translational motion. On the other hand, the rigorous 1-DoF rectification is indeed unnecessary for depth learning. For example, related methods Zhou et al. (2017); Yin and Shi (2018); Ranjan et al. (2019); Godard et al. (2019); Bian et al. (2019a) work well on KITTI videos, where image pairs have 3-DoF translational motion, and the results are comparable to those of methods trained on KITTI stereo pairs Garg et al. (2016); Godard et al. (2017); Zhan et al. (2018). Moreover, these methods show that the 3-DoF translation predicted by the Pose CNN is quite accurate, even outperforming ORB-SLAM Mur-Artal et al. (2015) on short sequences (i.e., 5-frame segments).

For the above reasons, we propose the 3-DoF weak rectification, which relaxes the rectification requirement and better suits the unsupervised depth learning problem. In practice, we still let the Pose CNN predict 6-DoF motions, as in all related works Zhou et al. (2017); Yin and Shi (2018); Ranjan et al. (2019); Godard et al. (2019); Bian et al. (2019a): the predicted 3-DoF rotational motion compensates for the rotation residuals (see Fig. 2) caused by the imperfect rectification, and the predicted 3-DoF translational motion helps train the depth CNN.

4 Experiments

4.1 Method, dataset, and metrics

Method.

We use the updated SC-SfMLearner Bian et al. (2019a), publicly available on GitHub, as our unsupervised learning framework. Compared with the original version, it replaces the encoder of the depth and pose CNNs with a ResNet-18 He et al. (2016) backbone to enable training from an ImageNet Deng et al. (2009) pre-trained model. Besides, to demonstrate that our proposed pre-processing generalizes across methods, we also experiment with Monodepth2 Godard et al. (2019) (ResNet-18 backbone) in the ablation studies. For all methods, we use the default hyper-parameters and train all models for the same number of epochs.

NYUv2 depth dataset.

The NYUv2 depth dataset Silberman et al. (2012) is composed of indoor video sequences recorded by a handheld Kinect RGB-D camera at 640x480 resolution. The dataset contains 464 scenes taken from three cities. We use the officially provided 654 densely labeled images for testing, and use the remaining sequences (no overlap with testing scenes) for training and validation. The raw training sequences are first temporally downsampled by a factor of 10 to remove redundant frames, and then processed with our proposed method to produce the rectified training pairs. The images are downscaled for training.

Methods                                      AbsRel  Log10  RMS    δ<1.25  δ<1.25²  δ<1.25³
(Error metrics: lower is better; accuracies: higher is better. The methods above Zhou et al. (2019a) are supervised; Zhou et al. (2019a), Zhao et al. (2020), and Ours are unsupervised.)
Make3D Saxena et al. (2006) 0.349 - 1.214 0.447 0.745 0.897
Depth Transfer Karsch et al. (2014) 0.349 0.131 1.21 - - -
Liu et al. Liu et al. (2014) 0.335 0.127 1.06 - - -
Ladicky et al. Ladicky et al. (2014) - - - 0.542 0.829 0.941
Li et al. Li et al. (2015) 0.232 0.094 0.821 0.621 0.886 0.968
Roy et al. Roy and Todorovic (2016) 0.187 0.078 0.744 - - -
Liu et al. Liu et al. (2016) 0.213 0.087 0.759 0.650 0.906 0.976
Wang et al. Wang et al. (2015) 0.220 0.094 0.745 0.605 0.890 0.970
Eigen et al. Eigen and Fergus (2015) 0.158 - 0.641 0.769 0.950 0.988
Chakrabarti et al. Chakrabarti et al. (2016) 0.149 - 0.620 0.806 0.958 0.987
Laina et al. Laina et al. (2016) 0.127 0.055 0.573 0.811 0.953 0.988
Li et al. Li et al. (2017) 0.143 0.063 0.635 0.788 0.958 0.991
DORN Fu et al. (2018) 0.115 0.051 0.509 0.828 0.965 0.992
VNL Yin et al. (2019) 0.108 0.048 0.416 0.875 0.976 0.994
Zhou et al. Zhou et al. (2019a) 0.208 0.086 0.712 0.674 0.900 0.968
Zhao et al. Zhao et al. (2020) 0.189 0.079 0.686 0.701 0.912 0.978
Ours 0.147 0.062 0.536 0.804 0.950 0.986
Table 1: Single-view depth estimation results on NYUv2 Silberman et al. (2012). As reported in Zhou et al. (2019a), unsupervised methods like GeoNet Yin and Shi (2018) fail to show reasonable results in this challenging dataset.
Learning Framework                 Training Data  ImageNet Pretraining  AbsRel  Log10  RMS    δ<1.25  δ<1.25²  δ<1.25³
SC-SfMLearner Bian et al. (2019a)  Original       no                    0.188   0.079  0.666  0.712   0.918    0.973
                                   Original       yes                   0.166   0.071  0.621  0.755   0.934    0.981
                                   Rectified      no                    0.170   0.072  0.603  0.752   0.930    0.980
                                   Rectified      yes                   0.147   0.062  0.536  0.804   0.950    0.986
Monodepth2 Godard et al. (2019)    Original       no                    0.213   0.088  0.713  0.662   0.902    0.972
                                   Original       yes                   0.182   0.076  0.642  0.721   0.934    0.982
                                   Rectified      no                    0.181   0.075  0.637  0.741   0.926    0.976
                                   Rectified      yes                   0.157   0.066  0.567  0.783   0.944    0.984
Table 2: Ablation studies on NYUv2 Silberman et al. (2012). Rectified stands for the proposed data processing. Note that Monodepth2 Godard et al. (2019) models often collapse when training from scratch, especially on original data. Here, we report the results for their successful case.
Scenes      Training pairs  Before Fine-tuning       After Fine-tuning
                            AbsRel  Acc (δ<1.25)     AbsRel  Acc (δ<1.25)
Chess 2.6k 0.169 0.719 0.103 0.880
Fire 1.5k 0.158 0.758 0.089 0.916
Heads 0.5k 0.162 0.749 0.124 0.862
Office 3.1k 0.132 0.833 0.096 0.912
Pumpkin 2.3k 0.117 0.857 0.083 0.946
RedKitchen 4.9k 0.151 0.780 0.101 0.896
Stairs 1.6k 0.162 0.765 0.106 0.855
Table 3: Single-view depth estimation results on 7 Scenes Shotton et al. (2013). The model is pre-trained on NYUv2 Silberman et al. (2012), and on each scene we fine-tune it for three epochs. As the training data is limited, fine-tuning takes only minutes.

RGB-D 7 Scenes dataset.

The dataset Shotton et al. (2013) contains 7 scenes, and each scene contains several video sequences (500-1000 frames per sequence), captured by a Kinect camera at 640x480 resolution. We follow the official train/test split for each scene. For training, we use the proposed pre-processing; for testing, we simply subsample frames at a fixed interval. We first pre-train the model on the NYUv2 dataset, and then fine-tune it on this dataset to demonstrate the generality of the proposed method.

Evaluation metrics.

We follow previous methods Liu et al. (2016); Laina et al. (2016); Fu et al. (2018); Yin et al. (2019) to evaluate depth estimators. Specifically, we use the mean absolute relative error (AbsRel), mean log10 error (Log10), root mean squared error (RMS), and the accuracy under a threshold ($\delta < 1.25^i$, $i = 1, 2, 3$). As unsupervised methods cannot recover the absolute scale, we multiply the predicted depth maps by a scalar that matches their median to that of the ground truth, as done in Zhou et al. (2017); Bian et al. (2019a); Godard et al. (2019).
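The evaluation protocol, including the median scaling used for scale-ambiguous predictions, can be sketched as follows (metric definitions are the standard ones; the function name is ours).

```python
import numpy as np

def evaluate_depth(pred, gt):
    # Median scaling: align the scale-ambiguous prediction to the ground truth.
    pred = pred * (np.median(gt) / np.median(pred))
    abs_rel = float(np.mean(np.abs(pred - gt) / gt))
    log10 = float(np.mean(np.abs(np.log10(pred) - np.log10(gt))))
    rms = float(np.sqrt(np.mean((pred - gt) ** 2)))
    ratio = np.maximum(pred / gt, gt / pred)
    acc = {f"delta<1.25^{i}": float(np.mean(ratio < 1.25 ** i)) for i in (1, 2, 3)}
    return abs_rel, log10, rms, acc
```

A prediction that is correct up to a global scale evaluates perfectly after median scaling, which is exactly why the scaling step is needed for unsupervised models.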

Figure 3: Qualitative comparison of single-view depth estimation on NYUv2 Silberman et al. (2012). More results are attached in the supplementary material.

4.2 Results

Comparing with the state-of-the-art (SOTA) methods.

Tab. 1 shows the results on NYUv2 Silberman et al. (2012). Our method outperforms the previous unsupervised SOTA method Zhao et al. (2020) by a large margin. Fig. 3 shows qualitative depth results. Note that the NYUv2 dataset is so challenging that previous unsupervised methods such as GeoNet Yin and Shi (2018) are unable to obtain reasonable results, as reported in Zhou et al. (2019a). Besides, our method also outperforms a series of fully supervised methods Liu et al. (2016); Saxena et al. (2006); Karsch et al. (2014); Liu et al. (2014); Ladicky et al. (2014); Li et al. (2015); Roy and Todorovic (2016); Wang et al. (2015); Eigen and Fergus (2015); Chakrabarti et al. (2016). However, a gap remains to the SOTA supervised approach Yin et al. (2019).

Ablation studies.

Tab. 2 summarizes the results. First, for both SC-SfMLearner Bian et al. (2019a) and Monodepth2 Godard et al. (2019), training on our rectified data leads to significantly better results than training on the original data. This also demonstrates that the proposed pre-processing is independent of the chosen method. Besides, note that training on the original data is prone to collapse, especially when starting from scratch; here we report results from a successful run.

Generalization.

Tab. 3 shows the depth estimation results on the 7 Scenes dataset Shotton et al. (2013). Our model generalizes to previously unseen data, and fine-tuning on a small amount of new data boosts the performance significantly. This has great potential for real-world applications, e.g., quickly adapting our pre-trained model to a new scene.

Timing.

Training SC-SfMLearner Bian et al. (2019a) models on rectified (original) data takes several hours, measured on a single 16GB NVIDIA V100 GPU. Learning curves are provided in the supplementary material; they show that our pre-processing enables faster convergence. The models run at real-time inference speeds on an NVIDIA RTX 2080 GPU.

5 Conclusion

In this paper, we investigated the degenerate motions in indoor videos and theoretically analyzed their impact on unsupervised monocular depth learning. We conclude that (i) rotational motion dominates translational motion in videos taken by handheld devices, and (ii) rotation behaves as noise while translation contributes effective signals to learning. Moreover, we proposed a novel data pre-processing method, which searches for modestly translational pairs and removes their relative rotation for effective training. Comprehensive results across different datasets and learning frameworks demonstrate the efficacy of the proposed method, and we establish a new unsupervised SOTA performance on the challenging indoor NYUv2 dataset.

References

  • [1] J. Bian, Z. Li, N. Wang, H. Zhan, C. Shen, M. Cheng, and I. Reid (2019) Unsupervised scale-consistent depth and ego-motion learning from monocular video. In Neural Information Processing Systems (NeurIPS), Cited by: §1, Figure 1, §2.1, §2.3, §3.1, §3.2, §3.2, §4.1, §4.1, §4.2, §4.2, Table 2, §6, footnote 1.
  • [2] J. Bian, W. Lin, Y. Liu, L. Zhang, S. Yeung, M. Cheng, and I. Reid (2020) GMS: grid-based motion statistics for fast, ultra-robust feature correspondence. International Journal of Computer Vision (IJCV). External Links: Document Cited by: §1.
  • [3] J. Bian, Y. Wu, J. Zhao, Y. Liu, L. Zhang, M. Cheng, and I. Reid (2019) An evaluation of feature matchers for fundamental matrix estimation. In British Machine Vision Conference (BMVC), Cited by: §1, §1, §3.1.
  • [4] A. Chakrabarti, J. Shao, and G. Shakhnarovich (2016) Depth from a single image by harmonizing overcomplete local network predictions. In Neural Information Processing Systems (NeurIPS), Cited by: §1, §4.2, Table 1.
  • [5] Y. Chen, C. Schmid, and C. Sminchisescu (2019) Self-supervised learning with geometric constraints in monocular video: connecting flow, depth, and camera. In IEEE International Conference on Computer Vision (ICCV), Cited by: §1.
  • [6] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele (2016) The cityscapes dataset for semantic urban scene understanding. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §1.
  • [7] A. J. Davison, I. D. Reid, N. D. Molton, and O. Stasse (2007) MonoSLAM: real-time single camera SLAM. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI). Cited by: §1.
  • [8] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei (2009) ImageNet: A Large-Scale Hierarchical Image Database. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §4.1, §7.
  • [9] D. Eigen and R. Fergus (2015) Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture. In IEEE International Conference on Computer Vision (ICCV), Cited by: §1, §4.2, Table 1.
  • [10] D. Eigen, C. Puhrsch, and R. Fergus (2014) Depth map prediction from a single image using a multi-scale deep network. In Neural Information Processing Systems (NeurIPS), Cited by: §1, §6.
  • [11] M. A. Fischler and R. C. Bolles (1981) Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM. Cited by: §3.1, §6.
  • [12] H. Fu, M. Gong, C. Wang, K. Batmanghelich, and D. Tao (2018) Deep ordinal regression network for monocular depth estimation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §1, §4.1, Table 1.
  • [13] A. Fusiello, E. Trucco, and A. Verri (2000) A compact algorithm for rectification of stereo pairs. Machine Vision and Applications. Cited by: §3.2, §3.2.
  • [14] R. Garg, V. K. BG, G. Carneiro, and I. Reid (2016) Unsupervised cnn for single view depth estimation: geometry to the rescue. In European Conference on Computer Vision (ECCV), Cited by: §1, §3.2.
  • [15] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun (2013) Vision meets Robotics: the kitti dataset. International Journal of Robotics Research (IJRR). Cited by: §1, Figure 2, §2.3, §6.
  • [16] C. Godard, O. Mac Aodha, and G. J. Brostow (2017) Unsupervised monocular depth estimation with left-right consistency. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §1, §3.2.
  • [17] C. Godard, O. Mac Aodha, M. Firman, and G. J. Brostow (2019) Digging into self-supervised monocular depth prediction. In IEEE International Conference on Computer Vision (ICCV), Cited by: §1, §3.2, §3.2, §4.1, §4.1, §4.2, Table 2, footnote 1.
  • [18] A. Gordon, H. Li, R. Jonschkowski, and A. Angelova (2019) Depth from videos in the wild: unsupervised monocular depth learning from unknown cameras. In IEEE International Conference on Computer Vision (ICCV), Cited by: §1.
  • [19] R. Hartley and A. Zisserman (2003) Multiple view geometry in computer vision. Cambridge university press. Cited by: §1, §1, §2.2, §3.1.
  • [20] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §4.1.
  • [21] M. Jaderberg, K. Simonyan, A. Zisserman, et al. (2015) Spatial transformer networks. In Neural Information Processing Systems (NeurIPS), Cited by: §2.1.
  • [22] K. Karsch, C. Liu, and S. B. Kang (2014) Depth transfer: depth extraction from video using non-parametric sampling. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI). Cited by: §4.2, Table 1.
  • [23] L. Ladicky, J. Shi, and M. Pollefeys (2014) Pulling things out of perspective. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §4.2, Table 1.
  • [24] I. Laina, C. Rupprecht, V. Belagiannis, F. Tombari, and N. Navab (2016) Deeper depth prediction with fully convolutional residual networks. In International Conference on 3D vision (3DV), Cited by: §1, §4.1, Table 1.
  • [25] B. Li, C. Shen, Y. Dai, A. Van Den Hengel, and M. He (2015) Depth and surface normal estimation from monocular images using regression on deep features and hierarchical crfs. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §4.2, Table 1.
  • [26] J. Li, R. Klein, and A. Yao (2017) A two-streamed network for estimating fine-scaled depth maps from single rgb images. In IEEE International Conference on Computer Vision (ICCV), Cited by: §1, Table 1.
  • [27] Z. Li, T. Dekel, F. Cole, R. L.K. Tucker, N. Snavely, C. Liu, and W. T. Freeman (2020) MannequinChallenge: learning the depths of moving people by watching frozen people. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI). Cited by: footnote 2.
  • [28] F. Liu, C. Shen, G. Lin, and I. Reid (2016) Learning depth from single monocular images using deep convolutional neural fields. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI). Cited by: §1, §4.1, §4.2, Table 1.
  • [29] M. Liu, M. Salzmann, and X. He (2014) Discrete-continuous depth estimation from a single image. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §4.2, Table 1.
  • [30] D. G. Lowe (2004) Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision (IJCV). Cited by: §1, §3.1, §6.
  • [31] R. Mahjourian, M. Wicke, and A. Angelova (2018) Unsupervised learning of depth and ego-motion from monocular video using 3d geometric constraints. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §1.
  • [32] R. Mur-Artal, J. M. M. Montiel, and J. D. Tardos (2015) ORB-SLAM: a versatile and accurate monocular slam system. IEEE Transactions on Robotics (TRO). Cited by: §1, §3.1, §3.2, §6, footnote 1.
  • [33] R. A. Newcombe, S. J. Lovegrove, and A. J. Davison (2011) DTAM: dense tracking and mapping in real-time. In IEEE International Conference on Computer Vision (ICCV), Cited by: §1.
  • [34] D. Nistér (2004) An efficient solution to the five-point relative pose problem. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI). Cited by: §3.1.
  • [35] A. Parra Bustos, T. Chin, A. Eriksson, and I. Reid (2019) Visual slam: why bundle adjust?. In IEEE International Conference on Robotics and Automation (ICRA), Cited by: §3.1.
  • [36] M. Pollefeys, R. Koch, and L. Van Gool (1999) A simple and efficient rectification method for general motion. In IEEE International Conference on Computer Vision (ICCV), Cited by: §3.2.
  • [37] A. Ranjan, V. Jampani, K. Kim, D. Sun, J. Wulff, and M. J. Black (2019) Competitive Collaboration: joint unsupervised learning of depth, camera motion, optical flow and motion segmentation. IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Cited by: §1, §3.2, §3.2, footnote 1.
  • [38] A. Roy and S. Todorovic (2016) Monocular depth estimation using neural regression forest. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §4.2, Table 1.
  • [39] A. Saxena, S. H. Chung, and A. Y. Ng (2006) Learning depth from single monocular images. In Neural Information Processing Systems (NeurIPS), Cited by: §4.2, Table 1.
  • [40] J. L. Schonberger and J. Frahm (2016) Structure-from-motion revisited. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §1, §3.1.
  • [41] J. Shotton, B. Glocker, C. Zach, S. Izadi, A. Criminisi, and A. Fitzgibbon (2013) Scene coordinate regression forests for camera relocalization in rgb-d images. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §4.1, §4.2, Table 3, Figure 5, Figure 6, §7, §7.
  • [42] N. Silberman, D. Hoiem, P. Kohli, and R. Fergus (2012) Indoor segmentation and support inference from rgbd images. In European Conference on Computer Vision (ECCV), Cited by: §1, §1, Figure 2, §2.3, §3.1, Figure 3, §4.1, §4.2, Table 1, Table 2, Table 3, §6, Figure 4, Figure 5, §7, §7, §7.
  • [43] E. Trucco and A. Verri (1998) Introductory techniques for 3-d computer vision. Vol. 201, Prentice Hall Englewood Cliffs. Cited by: §3.2.
  • [44] C. Wang, J. Miguel Buenaposada, R. Zhu, and S. Lucey (2018) Learning depth from monocular videos using direct methods. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §1.
  • [45] P. Wang, X. Shen, Z. Lin, S. Cohen, B. Price, and A. L. Yuille (2015) Towards unified depth and semantic prediction from a single image. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §4.2, Table 1.
  • [46] J. Watson, M. Firman, G. J. Brostow, and D. Turmukhambetov (2019) Self-supervised monocular depth hints. In IEEE International Conference on Computer Vision (ICCV), Cited by: §1.
  • [47] W. Yin, Y. Liu, C. Shen, and Y. Yan (2019) Enforcing geometric constraints of virtual normal for depth prediction. In IEEE International Conference on Computer Vision (ICCV), Cited by: §1, §4.1, §4.2, Table 1.
  • [48] Z. Yin and J. Shi (2018) GeoNet: unsupervised learning of dense depth, optical flow and camera pose. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §1, §1, §3.2, §3.2, §4.2, Table 1, footnote 1.
  • [49] H. Zhan, R. Garg, C. Saroj Weerasekera, K. Li, H. Agarwal, and I. Reid (2018) Unsupervised learning of monocular depth estimation and visual odometry with deep feature reconstruction. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §1, §3.2.
  • [50] H. Zhan, C. S. Weerasekera, J. Bian, and I. Reid (2020) Visual odometry revisited: what should be learnt?. In IEEE International Conference on Robotics and Automation (ICRA), Cited by: footnote 1.
  • [51] H. Zhan, C. S. Weerasekera, R. Garg, and I. Reid (2019) Self-supervised learning for single view depth and surface normal estimation. In IEEE International Conference on Robotics and Automation (ICRA), Cited by: §1.
  • [52] Z. Zhang (1998) Determining the epipolar geometry and its uncertainty: a review. International Journal of Computer Vision (IJCV). External Links: ISSN 1573-1405, Document Cited by: §1, §1.
  • [53] W. Zhao, S. Liu, Y. Shu, and Y. Liu (2020) Towards better generalization: joint depth-pose learning without posenet. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §1, §1, §4.2, Table 1.
  • [54] J. Zhou, Y. Wang, K. Qin, and W. Zeng (2019) Moving indoor: unsupervised video depth learning in challenging environments. In IEEE International Conference on Computer Vision (ICCV), Cited by: §1, §4.2, Table 1.
  • [55] J. Zhou, Y. Wang, K. Qin, and W. Zeng (2019) Unsupervised high-resolution depth learning from videos with dual networks. In IEEE International Conference on Computer Vision (ICCV), Cited by: §1.
  • [56] T. Zhou, M. Brown, N. Snavely, and D. G. Lowe (2017) Unsupervised learning of depth and ego-motion from video. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §1, §2.1, §2.3, §3.2, §3.2, §4.1, §6, footnote 1.
  • [57] Y. Zou, Z. Luo, and J. Huang (2018) DF-Net: unsupervised joint learning of depth and flow using cross-task consistency. In European Conference on Computer Vision (ECCV), Cited by: §1.

6 Additional details

Experimental details in Fig. 2.

First, we follow [56, 1] to pre-process the KITTI [15] dataset: static frames, manually labelled by Eigen et al. [10], are removed from the raw videos, and the images are resized to . The accurate ground-truth depth and camera poses are provided by a Velodyne laser scanner and a GPS localization system, respectively. Second, as the NYUv2 [42] dataset does not provide ground-truth camera poses, we use ORB-SLAM2 [32] (in RGB-D mode with the ground-truth depth) to compute the camera trajectory. The image resolution is . We down-sample the raw videos by keeping the first image of every 10 frames. Third, we randomly select one long sequence from the dataset for analysis. Given a sequence, we randomly sample valid points (i.e., points within a valid depth range) per image and compute their projection magnitudes (Fig. 2(a)) and projection errors (Fig. 2(b)) using the ground truth. For the box plots, we randomly sample points collected over the entire sequence.
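To make the projection-magnitude computation concrete, the following NumPy sketch (our own illustration, not the paper's code; the intrinsics and motion values are made up) warps a pixel from one view to the other using the ground-truth depth and relative pose. It also illustrates the paper's central observation: a pure translation displaces a pixel in proportion to its inverse depth, which is what supervises depth learning, whereas a pure rotation moves pixels independently of depth.

```python
import numpy as np

def projection_magnitude(p1, depth, K, R, t):
    """Displacement (in pixels) of a point warped from view 1 to view 2.

    p1    : (2,) pixel coordinates in view 1
    depth : ground-truth depth of the point in view 1
    K     : (3, 3) camera intrinsics
    R, t  : relative rotation (3, 3) and translation (3,) from view 1 to 2
    """
    # Back-project the pixel to a 3D point in camera-1 coordinates.
    p1_h = np.array([p1[0], p1[1], 1.0])
    X1 = depth * np.linalg.solve(K, p1_h)
    # Transform into camera-2 coordinates and project.
    X2 = R @ X1 + t
    p2_h = K @ X2
    p2 = p2_h[:2] / p2_h[2]
    return np.linalg.norm(p2 - p1)

# Toy intrinsics (focal length 500, principal point at image centre).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])

# Pure translation along x: the principal point moves by f * tx / depth.
mag = projection_magnitude(np.array([320.0, 240.0]), 2.0, K,
                           np.eye(3), np.array([0.1, 0.0, 0.0]))
print(round(mag, 2))  # 500 * 0.1 / 2.0 = 25.0 pixels, depth-dependent
```

Halving the depth in the example doubles the displacement, while a rotation-only motion would displace this pixel by the same amount regardless of its depth.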

Implementation details in Sec. 3.

First, to compute feature correspondences, we use the SIFT [30] implementation from the VLFeat library with its default parameters. Second, we use the Matlab built-in functions to compute the essential matrix and the relative camera pose. The maximum number of RANSAC [11] iterations is , and the inlier threshold is . We use the following Matlab-style pseudo code to compute the weakly rectified images.

function [ImRect1, ImRect2] = WeakRectify(Im1, Im2, K, R)
    % Function takes two images and their camera parameters as input,
    % and it returns the rectified images.
    % Im1, Im2: two images
    % K: camera intrinsic
    % R: relative rotation matrix
    %
    % ImRect1, ImRect2: two rectified images

    % Make the two image planes coplanar, by rotating each half way
    [R1, R2] = computeHalfRotations(R);

    H1 = projective2d(K * R1 / K);
    H2 = projective2d(K * R2 / K);

    % Compute the common rectangular area of the transformed images
    imageSize = size(Im1);
    [xBounds, yBounds] = computeOutputBounds(imageSize, H1, H2);

    % Rectify images
    ImRect1 = transformImage(Im1, H1, xBounds, yBounds);
    ImRect2 = transformImage(Im2, H2, xBounds, yBounds);
end

function [Rl, Rr] = computeHalfRotations(R)
    % Convert the rotation matrix to its axis-angle vector representation
    r = rotationMatrixToVector(R);

    % Compute the right half-rotation
    Rr = rotationVectorToMatrix(r / -2);

    % Compute the left half-rotation (transpose, i.e., the inverse rotation)
    Rl = Rr';
end
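The half-rotation split can be checked numerically. Below is a hedged NumPy sketch (the Rodrigues helpers are our own, mirroring Matlab's rotationMatrixToVector and rotationVectorToMatrix) verifying that rotating view 1 by Rl and view 2 by Rr cancels the relative rotation, so only the translation (baseline) remains between the rectified views.

```python
import numpy as np

def rotvec_to_matrix(r):
    """Rodrigues' formula: axis-angle vector -> rotation matrix."""
    theta = np.linalg.norm(r)
    if theta < 1e-12:
        return np.eye(3)
    k = r / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def matrix_to_rotvec(R):
    """Inverse of Rodrigues' formula (valid away from theta = 0 and pi)."""
    theta = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    if theta < 1e-12:
        return np.zeros(3)
    axis = np.array([R[2, 1] - R[1, 2],
                     R[0, 2] - R[2, 0],
                     R[1, 0] - R[0, 1]]) / (2 * np.sin(theta))
    return theta * axis

def compute_half_rotations(R):
    """Split the relative rotation R into half rotations Rl (view 1) and Rr (view 2)."""
    r = matrix_to_rotvec(R)
    Rr = rotvec_to_matrix(-r / 2)  # rotate view 2 backwards by half
    Rl = Rr.T                      # rotate view 1 forwards by half
    return Rl, Rr

# After rotating view 1 by Rl and view 2 by Rr, the residual relative
# rotation Rr @ R @ Rl.T is (numerically) the identity: both half
# rotations share the axis of R, so the angles -theta/2 + theta - theta/2 sum to 0.
R = rotvec_to_matrix(np.array([0.05, -0.02, 0.1]))
Rl, Rr = compute_half_rotations(R)
residual = Rr @ R @ Rl.T
print(np.allclose(residual, np.eye(3)))  # True
```

Each half rotation is then lifted to an image-plane homography K * R * inv(K), exactly as in WeakRectify above.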

7 Additional results

Learning curves.

The following figure shows the validation loss when training on NYUv2 [42]. "Rectified" denotes the proposed pre-processing, and "pt" denotes pre-training on ImageNet [8]. The curves show that training on our rectified data leads to better results and faster convergence than training on the original dataset.

Visualization of single-view depth estimation.


Figure 4: More qualitative comparison of single-view depth estimation on NYUv2 [42].

Fig. 4 shows more results on NYUv2 [42].

Visualization of rectification and fine-tuning effects.

Fig. 5 shows the results. On NYUv2 [42], we train models on both the original data and our rectified data. On 7 Scenes [41], we fine-tune the model pre-trained on NYUv2. The qualitative results demonstrate the efficacy and generality of the proposed pre-processing, as well as the generalization ability of the pre-trained depth CNN to previously unseen scenes.


Figure 5: Qualitative results for ablation studies. In NYUv2 [42], we train models on both original data and our rectified data. In 7 Scenes [41], we fine-tune the model that is pre-trained on NYUv2.

Visualization of depth and converted point cloud.

Fig. 6 shows a screenshot of the video. We predict depth using our trained model on one sequence (i.e., office) from 7 Scenes [41]. The top shows the textured point cloud generated from the predicted depth map (bottom right) and the source image (bottom left). The full video is attached along with this manuscript.


Figure 6: Depth and point cloud visualization on 7 Scenes [41]. The top shows the textured point cloud generated by the predicted depth map (bottom right) with the source image (bottom left). The full video is also attached.