Unsupervised Depth Learning in Challenging Indoor Video: Weak Rectification to Rescue
Single-view depth estimation using CNNs trained from unlabelled videos has shown significant promise. However, the excellent results have mostly been obtained in street-scene driving scenarios, and such methods often fail in other settings, particularly in indoor videos taken by handheld devices, where the ego-motion is often degenerate, i.e., the rotation dominates the translation. In this work, we establish that the degenerate camera motions exhibited in handheld settings are a critical obstacle for unsupervised depth learning. A main contribution of our work is a fundamental analysis showing that the rotation behaves as noise during training, as opposed to the translation (baseline), which provides supervision signals. To capitalise on our findings, we propose a novel data pre-processing method for effective training, i.e., we search for image pairs with modest translation and remove their rotation via the proposed weak image rectification. With our pre-processing, existing unsupervised models can be trained well in challenging scenarios (e.g., the NYUv2 dataset), and the results outperform the unsupervised SOTA by a large margin (0.147 vs. 0.189 in the AbsRel error).
Inferring 3D geometry from 2D images is a long-standing problem in robotics and computer vision. Depending on the specific use case, it is usually solved by Structure-from-Motion Schonberger and Frahm (2016) or Visual SLAM Davison et al. (2007); Newcombe et al. (2011); Mur-Artal et al. (2015). Underpinning these traditional pipelines is the search for correspondences Lowe (2004); Bian et al. (2020) across multiple images, which are triangulated via epipolar geometry Zhang (1998); Hartley and Zisserman (2003); Bian et al. (2019b) to obtain 3D points. Following the growth of deep learning-based approaches, Eigen et al. Eigen et al. (2014) showed that the depth map can be inferred from a single color image by a CNN trained with ground-truth depth supervision captured by range sensors. Subsequently, a series of supervised methods Liu et al. (2016); Eigen and Fergus (2015); Chakrabarti et al. (2016); Laina et al. (2016); Li et al. (2017); Fu et al. (2018); Yin et al. (2019) have been proposed, progressively improving the accuracy of estimated depth.
Based on epipolar geometry Zhang (1998); Hartley and Zisserman (2003); Bian et al. (2019b), learning depth without ground-truth supervision has also been explored. Garg et al. Garg et al. (2016) showed that a single-view depth CNN can be trained from stereo image pairs with a known baseline via a photometric loss. Zhou et al. Zhou et al. (2017) further explored the unsupervised framework and proposed to train the depth CNN from unlabelled videos; they additionally introduced a Pose CNN to estimate the relative camera pose between consecutive frames, still using the photometric loss for supervision. Following that, a number of unsupervised methods have been proposed, which can be categorised into stereo-based Godard et al. (2017); Zhan et al. (2018, 2019); Watson et al. (2019) and video-based Wang et al. (2018); Mahjourian et al. (2018); Yin and Shi (2018); Zou et al. (2018); Ranjan et al. (2019); Godard et al. (2019); Gordon et al. (2019); Chen et al. (2019); Zhou et al. (2019b); Bian et al. (2019a), according to the type of training data. Our work follows the latter paradigm, since unlabelled videos are easier to obtain in real-world scenarios.
Unsupervised methods have shown promising results in driving scenes, e.g., KITTI Geiger et al. (2013) and Cityscapes Cordts et al. (2016). However, as reported in Zhou et al. (2019a), they usually fail in generic scenarios such as the indoor scenes of the NYUv2 dataset Silberman et al. (2012). For example, GeoNet Yin and Shi (2018), which achieves state-of-the-art performance on KITTI, is unable to obtain reasonable results on NYUv2. To address this, Zhou et al. (2019a) proposes to use optical flow as the supervision signal to train the depth CNN, and the more recent Zhao et al. (2020) replaces the Pose CNN with optical-flow-based ego-motion estimation. However, the reported depth accuracy Zhao et al. (2020) is still limited, i.e., 0.189 in terms of AbsRel—see also the qualitative results in Fig. 3.
Our work investigates the fundamental reasons behind the poor results of unsupervised depth learning in indoor scenes. In addition to the usual challenges such as non-Lambertian surfaces and low-texture scenes, we identify the camera motion profile of the training videos as a critical factor affecting the training process. To develop this insight, we conduct an in-depth analysis of the effects of camera pose on the current unsupervised depth learning framework. Our analysis shows that (i) fundamentally, the camera rotation behaves as noise during training, while the translation contributes effective gradients; and (ii) the rotation component dominates the translation component in indoor videos captured with handheld cameras, while the opposite is true in autonomous driving scenarios.
To capitalise on our findings, we propose a novel data pre-processing method for unsupervised depth learning. Our analysis (described in Sec. 2.3) indicates that image pairs with small relative camera rotation and moderate translation should be favoured. Therefore, we search for image pairs that fall into our defined translation range, and we weakly rectify the selected pairs to remove their relative rotation. Note that the processing requires no ground truth depth and camera pose. With our proposed data pre-processing, we demonstrate that existing state-of-the-art (SOTA) unsupervised methods can be trained well in the challenging indoor NYUv2 dataset Silberman et al. (2012). The results outperform the unsupervised SOTA Zhao et al. (2020) by a large margin (0.147 vs. 0.189 in the AbsRel error).
To summarize, our main contributions are three-fold:
We theoretically analyze the effects of camera motion on current unsupervised frameworks for depth learning, and reveal that the camera rotation behaves as noise for training depth CNNs, while the translation contributes effective supervision.
We calculate the distribution of camera motions in different scenarios, which, along with the analysis above, helps answer why it is challenging to train unsupervised depth CNNs on indoor videos captured with handheld cameras.
We propose a novel method to select and weakly rectify image pairs for better training. It enables existing unsupervised methods to achieve results competitive with many supervised methods on the challenging NYUv2 dataset.
We first overview the unsupervised framework for depth learning. Then, we revisit the depth- and pose-based image warping and demonstrate the relationship between camera motion and depth network training. Finally, we compare the statistics of camera motions in different datasets to verify the impact of camera motion on depth learning.
Following SfMLearner Zhou et al. (2017), many video-based unsupervised frameworks for depth estimation have been proposed. SC-SfMLearner Bian et al. (2019a), the current SOTA framework, additionally enforces geometry consistency on top of Zhou et al. (2017), leading to more accurate and scale-consistent results. In this paper, we use SC-SfMLearner as our framework, and overview its pipeline in Fig. 1.
A training image pair $(I_a, I_b)$ is first passed into a weight-shared depth CNN to obtain the depth maps $(D_a, D_b)$, respectively. Then, the pose CNN takes the concatenation of the two images as input and predicts their 6D relative camera pose $P_{ab}$. With the predicted depth and pose, the warping flow between the two images is generated according to Sec. 2.2.
First, the main supervision signal is the photometric loss $L_p$. It computes, for each pixel of $I_a$, the color difference with its warped position on $I_b$, sampled by differentiable bilinear interpolation Jaderberg et al. (2015). Second, depth maps are regularized by the geometric inconsistency loss $L_G$, which enforces the consistency of predicted depths between different frames. Besides, a weighting mask $M$ is derived from the depth inconsistency to handle dynamics and occlusions; it is applied to $L_p$ to obtain the weighted $L_p^M$. Third, depth maps are also regularized by a smoothness loss $L_s$, which allows the depth smoothness to be guided by image edges. Overall, the objective function is:

$$L = \alpha L_p^M + \beta L_s + \gamma L_G,$$

where $\alpha$, $\beta$, and $\gamma$ are hyper-parameters to balance the different losses.
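To make the warping-based supervision concrete, the following is a minimal numpy sketch of a photometric loss with bilinear sampling. This is an illustration only, not the paper's PyTorch implementation: the 8x8 gradient images, the helper names, and the boundary handling are our own assumptions.

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Sample img (H x W) at continuous coordinates (x, y) by bilinear interpolation."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = x0 + 1, y0 + 1
    wx, wy = x - x0, y - y0
    return ((1 - wx) * (1 - wy) * img[y0, x0] + wx * (1 - wy) * img[y0, x1]
            + (1 - wx) * wy * img[y1, x0] + wx * wy * img[y1, x1])

def photometric_loss(i1, i2, flow):
    """Mean |I1(p) - I2(p + flow(p))| over pixels whose warp stays in bounds."""
    h, w = i1.shape
    diffs = []
    for y in range(h):
        for x in range(w):
            xw, yw = x + flow[y, x, 0], y + flow[y, x, 1]
            if 0 <= xw <= w - 1.001 and 0 <= yw <= h - 1.001:
                diffs.append(abs(i1[y, x] - bilinear_sample(i2, xw, yw)))
    return float(np.mean(diffs))

# Toy check: warping with the true flow gives zero photometric loss.
i2 = np.tile(np.arange(8.0), (8, 1))   # horizontal gradient image
i1 = i2 + 1.0                          # i1(x, y) equals i2(x + 1, y)
flow = np.zeros((8, 8, 2))
flow[..., 0] = 1.0                     # the true horizontal flow
print(photometric_loss(i1, i2, flow))  # 0.0
```

In the real framework the same computation is carried out with differentiable tensor operations so that gradients flow through the sampling into the depth and pose CNNs.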
The image warping links the networks and the losses during training, i.e., the warping flow is generated by the network predictions (depth and camera motion) in the forward pass, and the gradients are back-propagated from the losses to the networks via the warping flow. We therefore analyze the effects of camera pose on depth learning through the warping itself, which avoids image-content factors such as illumination changes and low-texture scenes.
The camera pose is composed of rotation and translation components. For a point $(x_1, y_1)$ in the first image that is warped to $(x_2, y_2)$ in the second image, the warping satisfies:

$$d_2 \, [x_2, y_2, 1]^T = K \big( R \, d_1 K^{-1} [x_1, y_1, 1]^T + t \big), \quad (2)$$

where $d_1$ and $d_2$ are the depths of this point in the two views, $K$ is the 3x3 camera intrinsic matrix, $R$ is a 3x3 rotation matrix, and $t$ is a 3x1 translation vector. We decompose the full warping flow and discuss each component below.
If two images are related by a pure-rotation transformation (i.e., $t = \mathbf{0}$), based on Eqn. 2, the warping satisfies:

$$s \, [x_2, y_2, 1]^T = H \, [x_1, y_1, 1]^T, \quad H = K R K^{-1},$$

where $H$ is known as the homography matrix Hartley and Zisserman (2003), and $s = d_2 / d_1$, standing for the depth relation between the two views, is determined by the third row of the above equation, i.e., $s = h_{31} x_1 + h_{32} y_1 + h_{33}$. It indicates that we can obtain $(x_2, y_2)$ without the depth. Specifically, solving the above equation, we have

$$x_2 = \frac{h_{11} x_1 + h_{12} y_1 + h_{13}}{h_{31} x_1 + h_{32} y_1 + h_{33}}, \quad y_2 = \frac{h_{21} x_1 + h_{22} y_1 + h_{23}}{h_{31} x_1 + h_{32} y_1 + h_{33}}. \quad (5)$$
This demonstrates that the rotational flow in image warping is independent of the depth; it is determined only by $R$ and $K$. Consequently, the rotational motion in image pairs cannot contribute effective gradients to supervise the depth CNN during training, even when it is correctly estimated. More importantly, if the estimated rotation is inaccurate¹, noisy gradients will arise and harm the depth CNN in backpropagation. Therefore, we conclude that the rotational motion behaves as noise to unsupervised depth learning.

¹Related work Zhou et al. (2017); Yin and Shi (2018); Ranjan et al. (2019); Godard et al. (2019); Bian et al. (2019a) shows that the Pose CNN enables more accurate translation estimation than ORB-SLAM Mur-Artal et al. (2015), but its predicted rotation is much worse than the latter, as demonstrated in Bian et al. (2019a); Zhan et al. (2020).
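This depth-independence is easy to verify numerically. The sketch below uses hypothetical intrinsics and a small made-up rotation to warp the same pixel via Eqn. 2 with two very different depths under a pure rotation; the warped positions coincide, so no depth signal survives.

```python
import numpy as np

# Hypothetical pinhole intrinsics (fx = fy = 500, principal point (320, 240)).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])

def rot_x(angle):
    """Rotation about the x-axis by angle (radians)."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def warp(px, py, depth, R, t):
    """Warp pixel (px, py) with the given depth via Eqn. 2; returns (x2, y2)."""
    p1 = np.array([px, py, 1.0])
    q = K @ (R @ (depth * np.linalg.inv(K) @ p1) + t)
    return q[:2] / q[2]

R = rot_x(0.05)              # small pure rotation
t0 = np.zeros(3)             # no translation
a = warp(100.0, 80.0, 1.0, R, t0)    # depth = 1 m
b = warp(100.0, 80.0, 10.0, R, t0)   # depth = 10 m
assert np.allclose(a, b)     # identical: rotational flow carries no depth signal
```

Note that the pixel does move (the rotational flow is large), but changing the depth by an order of magnitude leaves the warp unchanged, which is exactly why rotation cannot supervise the depth CNN.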
A pure-translation transformation means that $R$ is an identity matrix in Eqn. 2. Then we have

$$d_2 \, [x_2, y_2, 1]^T = d_1 \, [x_1, y_1, 1]^T + [\,f_x t_x + c_x t_z,\; f_y t_y + c_y t_z,\; t_z\,]^T,$$

where $(f_x, f_y)$ are the camera focal lengths and $(c_x, c_y)$ are the principal point offsets. Solving the above equation, we have

$$x_2 = \frac{d_1 x_1 + f_x t_x + c_x t_z}{d_1 + t_z}, \quad y_2 = \frac{d_1 y_1 + f_y t_y + c_y t_z}{d_1 + t_z}. \quad (7)$$
It shows that the translation vector $t$ is coupled with the depth $d_1$ in the warping from $(x_1, y_1)$ to $(x_2, y_2)$. This builds the link between the depth CNN and the warping, so that gradients from the photometric loss can flow to the depth CNN via the warping. Therefore, we conclude that the translational motion provides effective supervision signals for depth learning.
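A quick numerical illustration of Eqn. 7 (with hypothetical intrinsics and baseline, not values from the paper): under pure translation the parallax scales inversely with the depth, which is exactly the signal the depth CNN learns from.

```python
import numpy as np

# Hypothetical pinhole intrinsics.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])

def warp_translation(px, py, depth, t):
    """Warp (px, py) at a given depth under a pure translation t (Eqn. 7)."""
    p1 = np.array([px, py, 1.0])
    q = depth * p1 + K @ t        # d2 * p2 = d1 * p1 + K t  (R = I)
    return q[:2] / q[2]

t = np.array([0.1, 0.0, 0.0])     # a 10 cm sideways baseline
near = warp_translation(100.0, 80.0, 1.0, t)    # point at 1 m
far = warp_translation(100.0, 80.0, 10.0, t)    # point at 10 m
print(near[0] - 100.0)  # 50 px parallax at 1 m
print(far[0] - 100.0)   # 5 px parallax at 10 m
```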
Fig. 2(a) shows the camera motion statistics on the KITTI Geiger et al. (2013) and NYUv2 Silberman et al. (2012) datasets. KITTI is pre-processed by removing static images, as done in Zhou et al. (2017); Bian et al. (2019a). We pick one image of every 10 frames in NYUv2, which is denoted as Original NYUv2. Then we apply the proposed pre-processing (Sec. 3) to obtain Rectified NYUv2. For all datasets, we compare the decomposed camera poses of the training image pairs w.r.t. the absolute magnitude and the inter-frame warping flow². Specifically, we compute the averaged warping flow of randomly sampled points in the first image using the ground-truth depth and pose. For each point $(x_1, y_1)$ that is warped to $(x_2, y_2)$, the flow magnitude is $\sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}$. Fig. 2(a) shows that the rotational flow dominates the translational flow in Original NYUv2, while the opposite holds in KITTI. Along with the conclusion in Sec. 2.2 that the depth is supervised by the translation while the rotation behaves as noise, this answers why unsupervised depth learning methods that obtain state-of-the-art results in driving scenes often fail on indoor videos. Besides, the results on Rectified NYUv2 demonstrate that our proposed data pre-processing can address this issue.

²We first compute the rotational flow using Eqn. 5, and then obtain the translational flow by subtracting the rotational flow from the overall warping flow. The translational flow is also called residual parallax in Li et al. (2020), where it is used to compute depth from correspondences and relative camera poses.
Besides the above statistics, we investigate the relation between the warping error and the depth error. As the network is supervised via the warping, we expect the warping error (in pixels) to be sensitive to depth errors. For this investigation, we manually corrupt the depths of randomly sampled points and then analyze their warping errors in all datasets. Fig. 2(b) shows that the warping error in Original NYUv2 is several times smaller than that in KITTI when the sampled points have the same relative depth error. This indicates another challenge of indoor videos compared with driving scenes. Indeed, the issue stems from the fact that the sensitivity decreases significantly when the camera translation is small. Formally, when $t$ is close to $\mathbf{0}$, based on Eqn. 7, we have:

$$(x_2, y_2) \approx (x_1, y_1),$$

regardless of the depth $d_1$. This makes it hard for the warping error to separate accurate from inaccurate depth estimates, confusing the depth CNN. We address this issue by translation-based image pairing (see Sec. 3.1). The results on Rectified NYUv2 demonstrate the efficacy of our proposed method.
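The loss of sensitivity can also be checked numerically. In the sketch below (hypothetical intrinsics, depths, and baselines), the warping error induced by the same 20% depth error shrinks roughly in proportion to the translation magnitude.

```python
import numpy as np

# Hypothetical pinhole intrinsics.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])

def warp_translation(px, py, depth, t):
    """Pure-translation warp (Eqn. 7)."""
    q = depth * np.array([px, py, 1.0]) + K @ t
    return q[:2] / q[2]

def warp_error(depth_true, depth_wrong, t):
    """Pixel distance between warps using the true and the wrong depth."""
    a = warp_translation(100.0, 80.0, depth_true, t)
    b = warp_translation(100.0, 80.0, depth_wrong, t)
    return float(np.linalg.norm(a - b))

# Same 20% depth error, two different baselines.
big = warp_error(2.0, 2.4, np.array([0.10, 0.0, 0.0]))    # 10 cm baseline
small = warp_error(2.0, 2.4, np.array([0.01, 0.0, 0.0]))  # 1 cm baseline
assert small < big / 5  # error roughly proportional to the baseline
```

With a tiny baseline, a badly wrong depth produces almost the same warp as the correct one, so the photometric loss barely penalizes it; this is the motivation for favouring pairs with sufficient translation.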
[Fig. 2: (a) inter-frame camera motion (R, T) and warping flows on KITTI Geiger et al. (2013), Original NYUv2 Silberman et al. (2012), and Rectified NYUv2; (b) warping error as a function of depth error.]
The above analysis suggests that unsupervised depth learning frameworks favour training image pairs that have small rotational and sufficient translational motion. However, unlike driving sequences, videos captured by handheld cameras tend to exhibit more rotational and less translational motion, as shown in Fig. 2. In this section, we describe the proposed method: we select image pairs with appropriate translation in Sec. 3.1, and reduce the rotation of the selected pairs in Sec. 3.2.
For high frame-rate videos such as NYUv2 Silberman et al. (2012), we first downsample the raw videos temporally to remove redundant images, i.e., we extract one key frame from every 10 frames; the resulting data is denoted as Original NYUv2 in all experiments. Then, instead of only considering adjacent frames as a pair, we pair up each image with several of its subsequent key frames. For each image pair candidate, we compute the relative camera pose by searching for feature correspondences and applying epipolar geometry Hartley and Zisserman (2003); Bian et al. (2019b). As the estimated pose is up-to-scale Hartley and Zisserman (2003), we use the translational flow (computed as in Fig. 2(a)) instead of the absolute translation distance for pairing. No ground-truth data is required by the proposed method.
First, we generate correspondences using SIFT Lowe (2004) features, and apply the ratio test Lowe (2004) and GMS Bian et al. (2019a) to keep good matches. Second, with the selected correspondences, we estimate the essential matrix using the five-point algorithm Nistér (2004) within a RANSAC Fischler and Bolles (1981) framework, and recover the relative camera pose from it. Third, for each image pair candidate, we compute the averaged magnitude of the translational flow over all inlier correspondences, in the same way as in Fig. 2(a). Based on the distribution of warping flows on KITTI, which serves as a good reference, we empirically set an expected range in pixels; out-of-range pairs are removed.
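The selection criterion can be sketched as follows. The intrinsics, the point sets, and the [lo, hi] flow range below are hypothetical placeholders (the paper sets its range empirically from the KITTI statistics); note that removing the rotational flow with the homography of Eqn. 5 needs no depth at all.

```python
import numpy as np

# Hypothetical pinhole intrinsics.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])

def mean_translational_flow(pts1, pts2, R):
    """Average residual-parallax magnitude over N correspondences (N x 2 arrays).

    The rotational part of the flow is removed by warping pts1 with the
    homography H = K R K^{-1}; the residual against pts2 is the translational flow.
    """
    H = K @ R @ np.linalg.inv(K)
    p1h = np.hstack([pts1, np.ones((len(pts1), 1))]) @ H.T
    p1r = p1h[:, :2] / p1h[:, 2:3]           # rotational warp of pts1
    return float(np.mean(np.linalg.norm(pts2 - p1r, axis=1)))

def keep_pair(pts1, pts2, R, lo=20.0, hi=80.0):
    """Keep the pair iff the average translational flow falls in [lo, hi] pixels."""
    return lo <= mean_translational_flow(pts1, pts2, R) <= hi

# Toy usage: identity rotation and a uniform 50 px horizontal parallax.
pts1 = np.array([[100.0, 80.0], [200.0, 150.0], [300.0, 220.0]])
pts2 = pts1 + np.array([50.0, 0.0])
assert keep_pair(pts1, pts2, np.eye(3))
```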
Although running Structure-from-Motion (e.g., COLMAP Schonberger and Frahm (2016)) or VSLAM (e.g., ORB-SLAM Mur-Artal et al. (2015)) to compute relative camera poses is also possible, we argue that it is overkill for our problem. More importantly, these pipelines are often brittle, especially when processing videos with purely rotational motions and low-texture content Parra Bustos et al. (2019). Compared with them, our method does not require a 3D map, and hence avoids issues such as incomplete reconstruction and tracking loss.
To remove the rotational motion of the selected pairs, we propose a weak rectification method that warps the two images to a common plane using the pre-computed rotation matrix $R$. Specifically, (i) we first convert $R$ to a rotation vector $r$ using the Rodrigues formula Trucco and Verri (1998), split it into half-rotation vectors for the two images (i.e., $r/2$ and $-r/2$), and convert these back to rotation matrices $R_1$ and $R_2$; (ii) given $R_1$, $R_2$, and the camera intrinsics $K$, we warp both images to a new common image plane according to Eqn. 5. In the common plane, we crop the overlapping rectangular regions to obtain the weakly rectified pairs. See the Matlab pseudo code in the supplementary material.
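The half-rotation split can be mirrored in Python; the sketch below uses our own Rodrigues helpers (not the released code). The key property is that rotating each view by half of the relative rotation, in opposite directions, cancels the relative rotation exactly, since rotations about the same axis commute.

```python
import numpy as np

def rotvec_to_matrix(r):
    """Rodrigues formula: rotation vector -> rotation matrix."""
    theta = np.linalg.norm(r)
    if theta < 1e-12:
        return np.eye(3)
    k = r / theta
    Kx = np.array([[0.0, -k[2], k[1]],
                   [k[2], 0.0, -k[0]],
                   [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * Kx + (1 - np.cos(theta)) * (Kx @ Kx)

def matrix_to_rotvec(R):
    """Inverse Rodrigues (valid away from theta = 0 and pi)."""
    theta = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    axis = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta / (2 * np.sin(theta)) * axis

def half_rotations(R):
    """Split the relative rotation R into the half-rotations for the two images."""
    r = matrix_to_rotvec(R)
    Rr = rotvec_to_matrix(-r / 2)   # half-rotation for one image
    Rl = Rr.T                       # opposite half-rotation for the other
    return Rl, Rr

R = rotvec_to_matrix(np.array([0.05, -0.02, 0.03]))   # a made-up small rotation
Rl, Rr = half_rotations(R)
# After warping with H1 = K Rl K^{-1} and H2 = K Rr K^{-1}, the residual
# relative rotation between the two rectified views is the identity:
assert np.allclose(Rr @ R @ Rl.T, np.eye(3))
```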
Compared with the standard rectification Fusiello et al. (2000), our method uses only the rotation $R$ for image warping and deliberately ignores the translation $t$, so our weakly rectified pairs have 3-DoF translational motions, while rigorously rectified pairs have 1-DoF translational motions, i.e., corresponding points share identical vertical coordinates. The reason is that we have different input settings (temporal frames from arbitrary-motion videos vs. left and right images from two horizontally displaced cameras) and a different purpose (depth learning vs. stereo matching) from the latter.
On the one hand, due to the rigorous 1-DoF requirement of stereo matching, the standard rectification Fusiello et al. (2000) suffers on forward-motion pairs, where the epipoles lie inside the image and cause heavy deformation, e.g., extremely large rectified images Fusiello et al. (2000). Although polar rectification Pollefeys et al. (1999) can mitigate the issue to some extent, the results are still deformed. This issue is avoided in our 3-DoF weak rectification, as we do not constrain the translational motion. On the other hand, the rigorous 1-DoF rectification is unnecessary for depth learning. For example, related methods Zhou et al. (2017); Yin and Shi (2018); Ranjan et al. (2019); Godard et al. (2019); Bian et al. (2019a) work well on KITTI videos, where image pairs have 3-DoF translational motions, and the results are comparable to methods trained on KITTI stereo pairs Garg et al. (2016); Godard et al. (2017); Zhan et al. (2018). Moreover, these methods show that the 3-DoF translation predicted by the Pose CNN is quite accurate, even outperforming ORB-SLAM Mur-Artal et al. (2015) on short sequences (i.e., 5-frame segments).
For these reasons, we propose the 3-DoF weak rectification, which relaxes the rectification requirement and better suits the unsupervised depth learning problem. In practice, we still let the Pose CNN predict 6-DoF motions, as in related works Zhou et al. (2017); Yin and Shi (2018); Ranjan et al. (2019); Godard et al. (2019); Bian et al. (2019a): the predicted 3-DoF rotational motion compensates for the rotation residuals (see Fig. 2) caused by imperfect rectification, and the predicted 3-DoF translational motion helps train the depth CNN.
We use the updated SC-SfMLearner Bian et al. (2019a), publicly available on GitHub, as our unsupervised learning framework. Compared with the original version, it replaces the encoder of the depth and pose CNNs with a ResNet-18 He et al. (2016) backbone to enable training from an ImageNet Deng et al. (2009) pre-trained model. Besides, to demonstrate that our proposed pre-processing generalizes across methods, we also experiment with Monodepth2 Godard et al. (2019) (ResNet-18 backbone) in the ablation studies. For all methods, we use the default hyper-parameters and train the models for the same number of epochs.
The NYUv2 depth dataset Silberman et al. (2012) is composed of indoor video sequences recorded by a handheld Kinect RGB-D camera at 640×480 resolution. The dataset contains 464 scenes taken from three cities. We use the officially provided 654 densely labeled images for testing, and use the remaining sequences (with no overlap with the testing scenes) for training and validation. The raw training sequences are first downsampled by a factor of 10 to remove redundant frames, and then processed with our proposed method to produce the rectified image pairs. The images are resized to a lower resolution for training.
| Method | Supervised | AbsRel | Log10 | RMS | δ<1.25 | δ<1.25² | δ<1.25³ |
|---|---|---|---|---|---|---|---|
| Make3D Saxena et al. (2006) | ✓ | 0.349 | - | 1.214 | 0.447 | 0.745 | 0.897 |
| Depth Transfer Karsch et al. (2014) | ✓ | 0.349 | 0.131 | 1.21 | - | - | - |
| Liu et al. Liu et al. (2014) | ✓ | 0.335 | 0.127 | 1.06 | - | - | - |
| Ladicky et al. Ladicky et al. (2014) | ✓ | - | - | - | 0.542 | 0.829 | 0.941 |
| Li et al. Li et al. (2015) | ✓ | 0.232 | 0.094 | 0.821 | 0.621 | 0.886 | 0.968 |
| Roy et al. Roy and Todorovic (2016) | ✓ | 0.187 | 0.078 | 0.744 | - | - | - |
| Liu et al. Liu et al. (2016) | ✓ | 0.213 | 0.087 | 0.759 | 0.650 | 0.906 | 0.976 |
| Wang et al. Wang et al. (2015) | ✓ | 0.220 | 0.094 | 0.745 | 0.605 | 0.890 | 0.970 |
| Eigen et al. Eigen and Fergus (2015) | ✓ | 0.158 | - | 0.641 | 0.769 | 0.950 | 0.988 |
| Chakrabarti et al. Chakrabarti et al. (2016) | ✓ | 0.149 | - | 0.620 | 0.806 | 0.958 | 0.987 |
| Laina et al. Laina et al. (2016) | ✓ | 0.127 | 0.055 | 0.573 | 0.811 | 0.953 | 0.988 |
| Li et al. Li et al. (2017) | ✓ | 0.143 | 0.063 | 0.635 | 0.788 | 0.958 | 0.991 |
| DORN Fu et al. (2018) | ✓ | 0.115 | 0.051 | 0.509 | 0.828 | 0.965 | 0.992 |
| VNL Yin et al. (2019) | ✓ | 0.108 | 0.048 | 0.416 | 0.875 | 0.976 | 0.994 |
| Zhou et al. Zhou et al. (2019a) | ✗ | 0.208 | 0.086 | 0.712 | 0.674 | 0.900 | 0.968 |
| Zhao et al. Zhao et al. (2020) | ✗ | 0.189 | 0.079 | 0.686 | 0.701 | 0.912 | 0.978 |
| Method | Training data | Supervised | AbsRel | Log10 | RMS | δ<1.25 | δ<1.25² | δ<1.25³ |
|---|---|---|---|---|---|---|---|---|
| SC-SfMLearner Bian et al. (2019a) | Original | ✗ | 0.188 | 0.079 | 0.666 | 0.712 | 0.918 | 0.973 |
| Monodepth2 Godard et al. (2019) | Original | ✗ | 0.213 | 0.088 | 0.713 | 0.662 | 0.902 | 0.972 |
| Scenes | Training pairs | AbsRel (before fine-tuning) | Acc δ<1.25 (before) | AbsRel (after fine-tuning) | Acc δ<1.25 (after) |
|---|---|---|---|---|---|
The 7 Scenes dataset Shotton et al. (2013) contains 7 indoor scenes, and each scene contains several video sequences (500–1,000 frames per sequence) captured by a Kinect camera at 640×480 resolution. We follow the official train/test split for each scene. For training, we apply the proposed pre-processing; for testing, we simply extract images at a fixed frame interval. We first pre-train the model on the NYUv2 dataset and then fine-tune it on this dataset to demonstrate the universality of the proposed method.
We follow previous methods Liu et al. (2016); Laina et al. (2016); Fu et al. (2018); Yin et al. (2019) to evaluate depth estimators. Specifically, we use the mean absolute relative error (AbsRel), mean log10 error (Log10), root mean squared error (RMS), and the accuracy under threshold ($\delta < 1.25^t$, $t = 1, 2, 3$). As unsupervised methods cannot recover the absolute scale, we multiply the predicted depth maps by a scalar that matches their median with that of the ground truth, as done in Zhou et al. (2017); Bian et al. (2019a); Godard et al. (2019).
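The evaluation protocol can be summarized in a few lines of numpy: a sketch with the standard $1.25^t$ thresholds and median scaling as described above (the toy inputs are our own illustration).

```python
import numpy as np

def evaluate_depth(pred, gt):
    """Evaluate predicted vs. ground-truth depths (1-D arrays of valid values, metres)."""
    pred = pred * np.median(gt) / np.median(pred)   # align the unknown scale
    abs_rel = np.mean(np.abs(pred - gt) / gt)       # mean absolute relative error
    log10 = np.mean(np.abs(np.log10(pred) - np.log10(gt)))
    rms = np.sqrt(np.mean((pred - gt) ** 2))
    ratio = np.maximum(pred / gt, gt / pred)
    acc = [np.mean(ratio < 1.25 ** t) for t in (1, 2, 3)]  # threshold accuracies
    return abs_rel, log10, rms, acc

# Toy check: a prediction that is correct up to a global scale scores perfectly.
gt = np.array([1.0, 2.0, 3.0, 4.0])
pred = 2.0 * gt
abs_rel, log10, rms, acc = evaluate_depth(pred, gt)
assert abs_rel == 0.0 and rms == 0.0 and acc == [1.0, 1.0, 1.0]
```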
Tab. 1 shows the results on NYUv2 Silberman et al. (2012): our method outperforms the previous unsupervised SOTA method Zhao et al. (2020) by a large margin. Fig. 3 shows qualitative depth results. Note that the NYUv2 dataset is so challenging that previous unsupervised methods such as GeoNet Yin and Shi (2018) are unable to obtain reasonable results, as reported in Zhou et al. (2019a). Besides, our method also outperforms a series of fully supervised methods Liu et al. (2016); Saxena et al. (2006); Karsch et al. (2014); Liu et al. (2014); Ladicky et al. (2014); Li et al. (2015); Roy and Todorovic (2016); Wang et al. (2015); Eigen and Fergus (2015); Chakrabarti et al. (2016). However, a gap remains to the SOTA supervised approach Yin et al. (2019).
Tab. 2 summarizes the ablation results. First, for both SC-SfMLearner Bian et al. (2019a) and Monodepth2 Godard et al. (2019), training on our rectified data leads to significantly better results than training on the original data, which also demonstrates that the proposed pre-processing is independent of the chosen method. Besides, note that training readily collapses on the original data, especially when starting from scratch; we report the results of the successful runs.
Tab. 3 shows the depth estimation results on the 7 Scenes dataset Shotton et al. (2013). Our model generalizes to previously unseen data, and fine-tuning on a small amount of new data boosts the performance significantly. This has great potential for real-world applications, e.g., quickly adapting our pre-trained model to a new scene.
We train SC-SfMLearner Bian et al. (2019a) models on the rectified and original data on a single 16GB NVIDIA V100 GPU. Learning curves are provided in the supplementary material and show that our pre-processing enables faster convergence. The trained models infer depth in real time on an NVIDIA RTX 2080 GPU.
In this paper, we investigate the degenerate motion in indoor videos and theoretically analyze its impact on unsupervised monocular depth learning. We conclude that (i) rotational motion dominates translational motion in videos taken by handheld devices, and (ii) rotation behaves as noise while translation contributes effective supervision signals to learning. Moreover, we propose a novel data pre-processing method, which searches for modestly translational pairs and removes their relative rotation for effective training. Comprehensive results on different datasets and learning frameworks demonstrate the efficacy of the proposed method, and we establish a new unsupervised SOTA performance on the challenging indoor NYUv2 dataset.
Chen et al. (2019) Self-supervised learning with geometric constraints in monocular video: connecting flow, depth, and camera. In IEEE International Conference on Computer Vision (ICCV).
Cordts et al. (2016) The Cityscapes dataset for semantic urban scene understanding. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Li et al. (2015) Depth and surface normal estimation from monocular images using regression on deep features and hierarchical CRFs. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
First, we follow [56, 1] to pre-process the KITTI dataset, where static frames manually labelled by Eigen et al. are removed from the raw videos, and the images are resized for training. The accurate ground-truth depths and camera poses are provided by a Velodyne laser scanner and a GPS localization system. Second, as the NYUv2 dataset does not provide ground-truth camera poses, we use ORB-SLAM2 (in RGB-D mode with the ground-truth depth) to compute the camera trajectory. The image resolution is 640×480, and we down-sample the raw videos by picking the first image of every 10 frames. Third, we randomly select one long sequence from each dataset for analysis. Given a sequence, we randomly sample valid points (within a good depth range) per image and compute their projection magnitudes (Fig. 2(a)) and projection errors (Fig. 2(b)) using the ground truth. For the box visualization, we randomly sample points collected from the entire sequence.
First, for computing feature correspondences, we use the SIFT implementation from the VLFeat library with default parameters. Second, we use the built-in Matlab functions to compute the essential matrix and the relative camera pose, with empirically chosen values for the maximum number of RANSAC iterations and the inlier threshold. Finally, we use the following Matlab pseudo code to compute the weakly rectified images.
function [ImRect1, ImRect2] = WeakRectify(Im1, Im2, K, R)
% Takes two images and their camera parameters as input,
% and returns the rectified images.
%   Im1, Im2: two images
%   K: camera intrinsic matrix
%   R: relative rotation matrix
%   ImRect1, ImRect2: two rectified images

% Make the two image planes coplanar by rotating each half way
[R1, R2] = computeHalfRotations(R);
H1 = projective2d(K * R1 / K);
H2 = projective2d(K * R2 / K);

% Compute the common rectangular area of the transformed images
imageSize = size(Im1);
[xBounds, yBounds] = computeOutputBounds(imageSize, H1, H2);

% Rectify the images
ImRect1 = transformImage(Im1, H1, xBounds, yBounds);
ImRect2 = transformImage(Im2, H2, xBounds, yBounds);
end

function [Rl, Rr] = computeHalfRotations(R)
% Convert the rotation matrix to vector representation
r = rotationMatrixToVector(R);
% Compute the right half-rotation
Rr = rotationVectorToMatrix(r / -2);
% Compute the left half-rotation
Rl = Rr';
end
The following figure shows the validation loss when training on NYUv2. "Rectified" stands for the proposed pre-processing, and "pt" stands for pre-training on ImageNet. It demonstrates that training on our rectified data leads to better results and faster convergence compared with training on the original dataset.
Fig. 5 shows qualitative results. On NYUv2, we train models on both the original data and our rectified data. On 7 Scenes, we fine-tune the model pre-trained on NYUv2. The qualitative results demonstrate the efficacy and universality of our proposed pre-processing, as well as the generalization ability of the pre-trained depth CNN in previously unseen scenes.
Fig. 6 shows a video screenshot. We predict depth using our trained model on one sequence (office) from 7 Scenes. The top shows the textured point cloud generated from the predicted depth map (bottom right) and the source image (bottom left). The full video is attached with this manuscript.