1 Introduction
With the recent proliferation of inexpensive RGBD sensors, it is now becoming practical for people to scan 3D models of large indoor environments with handheld cameras, enabling applications in cultural heritage, real estate, virtual reality, and many other fields. Most state-of-the-art RGBD reconstruction algorithms either perform frame-to-model alignment [1]
or match keypoints for global pose estimation
[2]. Despite recent progress in these algorithms, registration of handheld RGBD scans remains challenging when local surface features are not discriminating and/or when scanning loops have little or no overlap. An alternative is to detect planar features and associate them across frames with coplanarity, parallelism, and perpendicularity constraints [3, 4, 5, 6, 7, 8, 9]. Recent work has shown compelling evidence that planar patches can be detected and tracked robustly, especially in indoor environments where flat surfaces are ubiquitous. In cases where traditional features such as keypoints are missing (e.g., on walls), there is tremendous potential to support existing 3D reconstruction pipelines.
Even though coplanarity matching is a promising direction, current approaches lack strong per-plane feature descriptors for establishing putative matches between disparate observations. As a consequence, coplanarity priors have only been used in the context of frame-to-frame tracking [3] or in post-processing steps that refine a global optimization [4]. We see this as analogous to the relationship between ICP and keypoint matching: just as ICP only converges with a good initial guess for the pose, current methods for exploiting coplanarity are unable to initialize a reconstruction process from scratch due to the lack of discriminative coplanarity features.
This paper aims to enable global, ab initio coplanarity matching by introducing a discriminative feature descriptor for planar patches of RGBD images. Our descriptor is learned from data to produce features whose L2 difference is predictive of whether or not two RGBD patches from different frames are coplanar. It can detect pairs of coplanar patches in RGBD scans without an initial alignment; these detections can then be used to find loop closures or to provide coplanarity constraints for global alignment (see Figure 1).
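As a concrete illustration of how such a descriptor can be used, coplanar candidates between two frames can be proposed by thresholding the L2 distance between patch descriptors. The sketch below is illustrative only; the brute-force comparison and the threshold value are our assumptions, not the procedure used in the paper.

```python
import numpy as np

def candidate_coplanar_pairs(desc_a, desc_b, threshold):
    """Propose coplanar patch pairs between two frames by thresholding
    the L2 distance of their descriptors.
    desc_a: (N, D) descriptors of frame A's patches,
    desc_b: (M, D) descriptors of frame B's patches.
    Returns (i, j) index pairs; `threshold` is a tuning parameter."""
    # pairwise L2 distances via broadcasting: shape (N, M)
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=-1)
    return [(i, j) for i, j in zip(*np.nonzero(d < threshold))]
```

Pairs proposed this way serve only as putative matches; they are still subject to the selection and pruning performed during global optimization.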
A key novel aspect of this approach is that it focuses on the detection of coplanarity rather than overlap. As a result, our planar patch features can be used to discover long-range alignment constraints (like "loop closures") between distant, non-overlapping parts of the same large surface (e.g., by recognizing carpets on floors, tiles on ceilings, paneling on walls, etc.). In Figure 1, the two patch pairs shown on the right helped produce a reconstruction with globally flat walls.
To learn our planar patch descriptor, we design a deep network that takes in color, depth, normals, and multi-scale context for pairs of planar patches extracted from RGBD images, and predicts whether they are coplanar or not. The network is trained in a self-supervised fashion, with training examples automatically extracted from coplanar and non-coplanar patches in ScanNet [10].
To evaluate our descriptor, we introduce a new coplanarity matching benchmark, on which a series of thorough experiments shows that our new descriptor outperforms existing baseline alternatives by significant margins. Furthermore, we demonstrate that our new descriptor yields strong coplanarity constraints that improve the performance of current global RGBD registration algorithms. In particular, we show that by combining coplanarity and point-based correspondences, reconstruction algorithms are able to handle difficult cases, such as scenes with few features or limited loop closures. We outperform other state-of-the-art algorithms on the standard TUM RGBD reconstruction benchmark [11]. Overall, the research contributions of this paper are:

A new task: predicting coplanarity of image patches for the purpose of RGBD image registration.

A self-supervised process for training a deep network to produce features for predicting whether two image patches are coplanar or not.

An extension of the robust optimization algorithm [12] to solve camera poses with coplanarity constraints.

A new training and test benchmark for coplanarity prediction.

Reconstruction results demonstrating that coplanarity can be used to align scans where keypoint-based methods fail to find loop closures.
2 Related Work
RGBD Reconstruction: Many SLAM systems have been described for reconstructing 3D scenes from RGBD video. Examples include KinectFusion [13, 1], VoxelHashing [14], ScalableFusion [15], Point-based Fusion [16], Octrees on CPU [17], ElasticFusion [18], Stereo DSO [19], Colored Registration [20], and BundleFusion [2]. These systems generally perform well for scans with many loop closures and/or when robust IMU measurements are available. However, they often exhibit drift in long scans when few constraints can be established between disparate viewpoints. In this work, we detect and enforce coplanarity constraints between planar patches, providing an alternative feature channel for global matching that addresses this issue.
Feature Descriptors: Traditionally, SLAM systems have utilized keypoint detectors and descriptors to establish correspondence constraints for camera pose estimation. Example keypoint descriptors include SIFT [21], SURF [22], ORB [23], etc. More recently, researchers have learned keypoint descriptors from data, e.g., MatchNet [24], LIFT [25], SE3-Nets [26], 3DMatch [27], and Schmidt et al. [28]. These methods rely upon repeatable extraction of keypoint positions, which is difficult for widely disparate views. In contrast, we explore the more robust approach of extracting planar patches, without concern for precisely positioning the patch center.
Planar Features: Many previous papers have leveraged planar surfaces for RGBD reconstruction. The most common approach is to detect planes in RGBD scans, establish correspondences between matching features, and solve for the camera poses that align the corresponding features [29, 30, 31, 32, 33, 34, 35, 36]. More recent approaches build models comprising planar patches, possibly with geometric constraints [4, 37], and match planar features found in scans to planar patches in the models [4, 5, 6, 7, 8]. The search for correspondences is often aided by hand-tuned descriptors designed to detect overlapping surface regions. In contrast, our approach finds correspondences between coplanar patches (which may not overlap), and we learn descriptors for this task with a deep network.
Global Optimization: For large-scale surface reconstruction, it is common to use offline or asynchronously executed global registration procedures. A common formulation is to compute a pose graph with edges representing pairwise transformations between frames and then optimize an objective function penalizing deviations from these pairwise alignments [38, 39, 40]. Recent methods [12, 41] use indicator variables to identify loop closures or matching points during global optimization with a least-squares formulation. We extend this formulation by introducing indicator variables for individual coplanarity constraints.
3 Method
Our method consists of two components: 1) a deep neural network trained to generate a descriptor that can be used to discover coplanar pairs of RGBD patches without an initial registration, and 2) a global SLAM reconstruction algorithm that takes special advantage of detected pairs of coplanar patches.
3.1 Coplanarity Network
Coplanarity of two planar patches is by definition geometrically measurable. However, for two patches that are observed from different, yet unknown views, whether they are coplanar is not determinable based on geometry alone. Furthermore, it is not clear that coplanarity can be deduced solely from the local appearance of the imaged objects. We argue that the prediction of coplanarity across different views is a structural, or even semantic, visual reasoning task, for which neither geometry nor local appearance alone is reliable.
Humans infer coplanarity by perceiving and understanding the structure and semantics of objects and scenes, and contextual information plays a critical role in this reasoning task. For example, humans are able to differentiate the facets of an object, from virtually any view, by reasoning about the structure of the facets and/or by relating them to surrounding objects. Both involve inference with context around the patches being considered, possibly at multiple scales. This motivates us to learn to predict cross-view coplanarity from appearance and geometry using multi-scale contextual information. We approach this task by learning an embedding network that maps coplanar patches from different views to nearby points in feature space.
Network Design: Our coplanarity network (Figures 2 and 3) is trained with triplets of planar patches, each comprising an anchor, a patch coplanar with the anchor (positive), and a patch non-coplanar with the anchor (negative), similar to [42]. Each patch of a triplet is fed into a convolutional network based on ResNet50 [43] for feature extraction, and a triplet loss is estimated based on the relative proximities of the three features. To learn coplanarity from both appearance and geometry, our network takes multiple channels as input: an RGB image, a depth image, and a normal map.
We encode the contextual information of a patch at two scales, local and global. This is achieved by cropping the input images (in all channels) to rectangles that are fixed multiples of the size of the patch's bounding box, one multiple per scale. All cropped images are clamped at image boundaries, padded to a square, and then resized to a fixed resolution. The padding uses 50% gray for RGB images and a constant value for depth and normal maps; see Figure 3.

To make the network aware of the region of interest (as opposed to context) in each input image, we add, for each of the two scales, an extra mask channel. The local mask is binary, with the patch of interest in white and the rest of the image in black. The global mask, in contrast, is continuous, with the patch of interest in white and a smooth decay to black outside the patch boundary. Intuitively, the local mask helps the network distinguish the patch of interest from its immediate neighborhood, e.g., other sides of the same object. The global mask, on the other hand, directs the network to learn global structure by attending to a larger context, with importance smoothly decreasing with distance from the patch region. Meanwhile, it also weakens the effect of the specific patch shape, which is unimportant when considering global structure.
In summary, each scale consists of RGB, depth, normal, and mask channels. These inputs are first encoded independently; their feature maps are concatenated after an intermediate convolutional layer and then pass through the remaining layers. The local and global scales share weights for the corresponding channels, and their outputs are finally combined with a fully connected layer (Figure 3).
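The two mask channels can be sketched as follows. This is a minimal illustration: the exponential decay profile and its width in the global mask are our assumptions, since the text only states that the decay is smooth.

```python
import numpy as np

def local_mask(h, w, box):
    """Binary local mask: patch region white (1), rest black (0).
    box = (r0, c0, r1, c1) with exclusive upper bounds."""
    m = np.zeros((h, w), dtype=np.float32)
    r0, c0, r1, c1 = box
    m[r0:r1, c0:c1] = 1.0
    return m

def global_mask(h, w, box, decay=20.0):
    """Continuous global mask: white inside the patch, smoothly
    decaying to black with distance outside.  The exponential
    profile and `decay` width are assumed parameters."""
    inside = local_mask(h, w, box).astype(bool)
    rr, cc = np.mgrid[0:h, 0:w]
    r0, c0, r1, c1 = box
    # per-pixel distance to the patch's bounding box
    dr = np.maximum(np.maximum(r0 - rr, rr - (r1 - 1)), 0)
    dc = np.maximum(np.maximum(c0 - cc, cc - (c1 - 1)), 0)
    dist = np.sqrt(dr ** 2 + dc ** 2)
    m = np.exp(-dist / decay)
    m[inside] = 1.0
    return m
```

Both masks would be stacked with the RGB, depth, and normal channels of the corresponding crop before being fed to the network.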
Network Training: The training data for our network is generated from datasets of RGBD scans of 3D indoor scenes, with high-quality camera poses provided with the datasets. Each RGBD frame is segmented into planar patches using agglomerative clustering on the depth channel, and each patch's normal is estimated from the depth information. The extracted patches are projected to image space to generate all the necessary input channels for our network. Very small patches, whose local mask image contains too few pixels with valid depth, are discarded.
Triplet Focal Loss: When preparing triplets to train our network, we encounter the well-known problem of a severely imbalanced number of negative and positive patch pairs. Given a training sequence, there are many more negative pairs, and most of them are too trivial to help the network learn efficiently. With randomly sampled triplets, the easy negatives would overwhelm the training loss.
We resolve this imbalance by dynamically and discriminatively scaling the losses for hard and easy triplets, inspired by the recent work on focal loss for object detection [44]. Specifically, we propose the triplet focal loss:
$L_{\mathrm{tf}} = \sum_{(a,p,n)} \big( \max(0,\ \alpha - d_\Delta) \,/\, \alpha \big)^{\gamma}$    (1)
where f_a, f_p, and f_n are the features extracted for the anchor, positive, and negative patches, respectively; d_Δ = d(f_a, f_n) − d(f_a, f_p), with d being the L2 distance between two patch features. Minimizing this loss encourages the anchor to be closer to the positive patch than to the negative in descriptor space, but with less weight for larger distances.
See Figure 4, left, for a visualization of the loss function. When γ = 1, this loss reduces to the usual margin loss, which gives non-negligible loss to easy examples near the margin α. When γ > 1, however, we obtain a focal loss that down-weights easy-to-learn triplets while keeping high loss for hard ones. Moreover, it smoothly adjusts the rate at which easy triplets are down-weighted. We chose the value of γ that achieved the best training efficiency (Figure 4, right). Figure 5 shows a t-SNE visualization of coplanarity-based patch features.
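A minimal sketch of a triplet focal loss with the behavior described above, under the assumption that the loss takes the normalized hinge form (max(0, α − d_Δ)/α)^γ; the default α and γ here are illustrative values, not the ones used in the paper.

```python
import numpy as np

def triplet_focal_loss(f_a, f_p, f_n, alpha=1.0, gamma=2.0):
    """Assumed form: (max(0, alpha - d_delta) / alpha) ** gamma, with
    d_delta = ||f_a - f_n|| - ||f_a - f_p||.  gamma = 1 recovers a
    scaled margin loss; gamma > 1 down-weights easy triplets while
    keeping high loss for hard ones."""
    d_p = np.linalg.norm(f_a - f_p, axis=-1)  # anchor-positive distance
    d_n = np.linalg.norm(f_a - f_n, axis=-1)  # anchor-negative distance
    d_delta = d_n - d_p
    return np.maximum(0.0, alpha - d_delta) ** gamma / alpha ** gamma
```

An easy triplet (negative already farther than the margin) incurs zero loss, while a triplet with the negative as close as the positive incurs the full loss.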
3.2 CoplanarityBased Robust Registration
To investigate the utility of this planar patch descriptor and coplanarity detection approach for 3D reconstruction, we have developed a global registration algorithm that estimates camera poses for an RGBD video using pairwise constraints derived from coplanar patch matches in addition to keypoint matches.
Our formulation is inspired by the work of Choi et al. [12], where the key feature is the robust penalty term used for automatically selecting the correct matches from a large pool of hypotheses, thus avoiding iterative rematching as in ICP. Note that this formulation does not require an initial alignment of camera poses, which would be required for other SLAM systems that leverage coplanarity constraints.
Given an RGBD video sequence, our goal is to compute for each frame f_i a camera pose T_i in the global reference frame that brings the frames into alignment. This is achieved by jointly aligning each pair of frames that were predicted to share a set of coplanar patch pairs Π. For each pair m = (π_i, π_j) ∈ Π, we assume w.l.o.g. that patch π_i is from frame f_i and π_j from f_j. Meanwhile, let K be the set of keypoint pairs detected and matched between the frames; similarly, for each pair k = (p, q) ∈ K, keypoint p is from frame f_i and q from f_j.
Objective Function: The objective of our coplanaritybased registration contains four terms, responsible for coplanar alignment, coplanar patch pair selection, keypoint alignment, and keypoint pair selection:
$E(T, s, t) = E_{\text{data-cop}}(T, s) + E_{\text{reg-cop}}(s) + E_{\text{data-kp}}(T, t) + E_{\text{reg-kp}}(t)$    (2)
Given a pair of coplanar patches predicted by the network, the coplanarity data term enforces coplanarity by minimizing the point-to-plane distance from sample points on one patch to the plane defined by the other patch:
$E_{\text{data-cop}}(T, s) = \sum_{m = (\pi_i, \pi_j) \in \Pi} w_m\, s_m\, d_{\text{cop}}^2(\pi_i, \pi_j)$    (3)
where d_cop(π_i, π_j) is the coplanarity distance of a patch pair m = (π_i, π_j). It is computed as the root-mean-square point-to-plane distance over both sets of sample points:

$d_{\text{cop}}^2(\pi_i, \pi_j) = \frac{1}{|V_i| + |V_j|} \Big( \sum_{v \in V_i} d_{\text{pl}}^2(T_i\, v, P_j) + \sum_{v \in V_j} d_{\text{pl}}^2(T_j\, v, P_i) \Big)$

where V_i is the set of sample points on patch π_i and d_pl(v, P) is the point-to-plane distance from a point v to a plane P. P_i is the plane defined by patch π_i, which is estimated in the global reference frame using the corresponding transformation T_i and is updated in every iteration. s_m is a control variable (in [0, 1]) for the selection of patch pair m, with 1 standing for selected and 0 for discarded. w_m is a weight that measures the confidence in pair m's being coplanar. This weight is another connection between the optimization and the network, besides the predicted patch pairs themselves. It is computed from the feature distance of the two patches extracted by the network: w_m decreases from 1 to 0 as the feature distance grows toward the maximum feature distance.
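The root-mean-square point-to-plane coplanarity distance can be sketched as follows; representing each plane as a point and a unit normal, and assuming the sample points are already transformed into the global frame, are simplifications for illustration.

```python
import numpy as np

def point_to_plane(points, plane_point, plane_normal):
    """Signed distances from points (N, 3) to the plane through
    plane_point with unit normal plane_normal."""
    return (points - plane_point) @ plane_normal

def coplanarity_distance(samples_i, samples_j, plane_i, plane_j):
    """RMS point-to-plane distance over both sets of sample points.
    Each plane is given as (point_on_plane, unit_normal); all inputs
    are assumed to be expressed in the global reference frame."""
    d_ij = point_to_plane(samples_i, *plane_j)  # patch i vs. plane of j
    d_ji = point_to_plane(samples_j, *plane_i)  # patch j vs. plane of i
    sq = np.concatenate([d_ij, d_ji]) ** 2
    return float(np.sqrt(sq.mean()))
```

Two truly coplanar patches yield a distance of zero; the distance grows with the separation between the patches' supporting planes.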
The coplanarity regularization term is defined as:
$E_{\text{reg-cop}}(s) = \mu \sum_{m \in \Pi} w_m\, \Psi(s_m)$    (4)
where the penalty function is defined as Ψ(s) = (√s − 1)². Intuitively, minimizing this term together with the data term encourages the selection of pairs incurring a small data-term value, while immediately pruning pairs whose data-term value is too large and deemed too hard to minimize. The confidence weight w_m is defined as in the data term, and μ is a weighting variable that controls the emphasis on pair selection.
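For intuition about the interplay of the data and regularization terms, note that for a fixed residual the optimal selection variable has a simple closed form under the penalty Ψ(s) = (√s − 1)² of [12]. The sketch below assumes that penalty form; the closed form follows from minimizing s·r² + μ·Ψ(s) with the substitution u = √s.

```python
def penalty(s):
    """Robust penalty Psi(s) = (sqrt(s) - 1)**2 from [12]."""
    return (s ** 0.5 - 1.0) ** 2

def optimal_selection(residual_sq, mu):
    """Closed-form minimizer of  s * r^2 + mu * Psi(s)  over s in [0, 1]:
        s* = (mu / (mu + r^2)) ** 2.
    Pairs with small residuals keep s near 1 (selected); large
    residuals drive s toward 0 (discarded)."""
    return (mu / (mu + residual_sq)) ** 2
```

This is why decreasing μ over the course of the optimization progressively tightens the selection: as μ shrinks, a given residual pushes s closer to zero.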
The keypoint data term is defined as:
$E_{\text{data-kp}}(T, t) = \sum_{k = (p, q) \in K} t_k\, \| T_i\, p - T_j\, q \|^2$    (5)
Similar to coplanarity, a control variable t_k is used to determine the selection of point pair k, subject to the keypoint regularization term:
$E_{\text{reg-kp}}(t) = \mu \sum_{k \in K} \Psi(t_k)$    (6)
where the term shares the weighting variable μ with Equation (4).
Optimization: The optimization of Equation (2) is conducted iteratively, with each iteration interleaving the optimization of the transformations and the selection variables. Ideally, the optimization could take every pair of frames in a sequence as input for global optimization. However, this is prohibitively expensive, since for each frame pair the system scales with the number of patch pairs and keypoint pairs. To alleviate this problem, we split the sequence into a list of overlapping fragments, optimize frame poses within each fragment, and then perform a final global registration of the fragments, as in [12].
For each fragment, the optimization takes all frame pairs within that fragment and registers them into a rigid point cloud. After that, we take the matching pairs that were selected by the intra-fragment optimization and solve the inter-fragment registration based on those pairs. Inter-fragment registration benefits more from long-range coplanarity predictions.
The putative matches found in this manner are then pruned further with a rapid, approximate RANSAC algorithm applied to each pair of fragments. Given a pair of fragments, we randomly select a set of three matching feature pairs, which can be either planar-patch or keypoint pairs. We compute the transformation aligning the selected triplet, and then estimate the "support" for the transformation by counting the number of putative match pairs that are aligned by it. For patch pairs, alignment error is measured by the root-mean-square closest-point distance between sample points on the two patches. For keypoint pairs, we simply use the Euclidean distance. Both use the same distance threshold. If a transformation is found to be sufficiently supported by the matching pairs (i.e., the consensus exceeds a threshold), we include all the supporting pairs in the global optimization. Otherwise, we simply discard all the putative matches.
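The support-counting step can be sketched for keypoint pairs as follows; the 4x4 matrix representation of the candidate transform is an assumption, and the patch-pair case (which uses sample-point distances) is omitted for brevity.

```python
import numpy as np

def count_support(transform, pairs, threshold):
    """Count putative keypoint matches (p, q) that are aligned by
    `transform` (a 4x4 rigid transform mapping points of one fragment
    into the other) to within `threshold` Euclidean distance."""
    n = 0
    for p, q in pairs:
        p_h = np.append(p, 1.0)                      # homogeneous point
        if np.linalg.norm((transform @ p_h)[:3] - q) < threshold:
            n += 1
    return n
```

A candidate transformation sampled from a triplet of matches is kept only if its support exceeds the consensus threshold.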
Once a set of pairwise constraints has been established in this manner, the frame transformations and pair selection variables are alternately optimized in an iterative process, using Ceres [45] to minimize the objective function at each iteration. The iterative optimization converges when the relative value change of each unknown falls below a fixed tolerance. At convergence, the weighting variable μ is halved and the iterative optimization continues; the whole process is repeated until μ drops below a preset floor, which usually takes only a few rounds. The supplementary material provides a study of the optimization behavior, including convergence and robustness to incorrect pairs.
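The outer continuation loop over μ can be sketched as follows. The starting value, floor, and tolerance are placeholders (the specific numbers are not given here), and the two solver callbacks stand in for the actual Ceres solves.

```python
def alternating_registration(solve_poses, solve_selection,
                             mu_start=1.0, mu_floor=0.01, tol=1e-6):
    """Sketch of the alternating optimization.  The inner loop
    alternates selection and pose updates until the relative change
    of the unknowns falls below `tol`; then mu is halved.  The whole
    process stops once mu drops below `mu_floor`.  `solve_poses` and
    `solve_selection` are stand-ins for the actual solver calls;
    mu_start, mu_floor, and tol are placeholder values."""
    mu = mu_start
    state = None
    while mu >= mu_floor:
        while True:
            # one alternation: update selection variables, then poses
            state, rel_change = solve_poses(solve_selection(state, mu), mu)
            if rel_change < tol:
                break
        mu *= 0.5  # tighten pair selection and continue
    return state
```

Halving μ at each level corresponds to a graduated robustness schedule: early levels tolerate large residuals, later levels prune them aggressively.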
4 Results and Evaluations
4.1 Training Set, Parameters, and Timings
Our training data is generated from the ScanNet [10] dataset, which contains scanned sequences of indoor scenes reconstructed by BundleFusion [2]. We adopt the training/testing split provided with ScanNet, and the scenes in the training set are used to generate our training triplets, with each training scene contributing an equal number of triplets. For evaluating our network, we build a coplanarity benchmark using scenes from the testing set. For hierarchical optimization, we use fixed-size fragments with a fixed frame overlap between adjacent fragments. The network training takes on the order of hours to converge. At run time, coplanarity prediction and the intra- and inter-fragment optimization each take on the order of minutes for a typical sequence.
4.2 Coplanarity Benchmark
We create a benchmark, COP, for evaluating RGBD-based coplanarity matching of planar patches. The benchmark dataset contains patch pairs with ground-truth coplanarity, organized according to the physical size/area of the patches (COP-S) and the centroid distance between the patches of a pair (COP-D). COP-S splits its patch pairs uniformly into three subsets with decreasing average patch size, with the patch pairs sampled at random distances. COP-D comprises three subsets of equal size with increasing average pair distance but uniformly distributed patch size. For all subsets, the numbers of positive and negative pairs are equal. Details of the benchmark are provided in the supplementary material.
4.3 Network Evaluation
Our network is, to our knowledge, the first trained for coplanarity prediction. We therefore compare against baseline methods and perform ablation studies. Visual results of coplanarity matching are shown in the supplementary material.
Comparison to Baseline Methods: We first compare to two hand-crafted descriptors, namely the color histogram within the patch region and the SIFT feature at the patch centroid. For the task of keypoint matching, a commonly practiced method (e.g., in [46]) is to train a neural network that takes image patches centered around the keypoints as input. We extend this network to the task of coplanarity prediction as a non-trivial baseline. For a fair comparison, we train a triplet network with ResNet50 with only one tower per patch, taking three channels (RGB, depth, and normal) as input. For each channel, the image is cropped around the patch centroid, with the same padding and resizing scheme as before. Thus, no mask is needed, since the target is always at the image center. We train two such networks with different triplets, for the tasks of 1) exact center point matching and 2) coplanar patch matching, respectively.
The comparison is conducted over COP-S, and the precision-recall results are plotted in Figure 6. The hand-crafted descriptors fail on all tests, which shows the difficulty of our benchmark. Compared to the two alternative center-point-based networks (point matching and coplanarity matching), our method performs significantly better, especially on larger patches.
Ablation Studies: To investigate the need for the various input channels, we compare our full method against variants with the RGB, depth, normal, or mask input disabled, over the COP benchmark. To evaluate the effect of multi-scale context, our method is also compared against variants without the local or the global channels. The PR plots in Figure 7 show that our full method works best on all tests.
From the experiments, several interesting phenomena can be observed. First, the order of overall importance of the different channels is: mask > normal > RGB > depth. This clearly shows that coplanarity prediction across different views can rely on neither appearance nor geometry alone. The important role of masking in concentrating the network's attention is quite evident. We provide a further comparison to justify our specific masking scheme in the supplementary material. Second, the global scale is more effective for bigger patches and more distant pairs, for which the larger scale is required to encode more context. The opposite holds for the local scale, due to the higher resolution of its input channels. This verifies the complementary effect of the local and global channels in capturing contextual information at different scales.
4.4 Reconstruction Evaluation
Quantitative Results: We perform a quantitative evaluation of reconstruction using the TUM RGBD dataset of [11], for which ground-truth camera trajectories are available. Reconstruction error is measured by the absolute trajectory error (ATE), i.e., the root-mean-square error (RMSE) of camera positions along a trajectory. We compare our method with six state-of-the-art reconstruction methods: RGBD SLAM [47], VoxelHashing [14], ElasticFusion [18], Redwood [12], BundleFusion [2], and Fine-to-Coarse [4]. Note that unlike the other methods, Redwood does not use color information. Fine-to-Coarse is the most closely related to our method, since it uses planar surfaces for structurally-constrained registration. This method, however, relies on a good initialization of the camera trajectory to bootstrap, while our method does not. Our method uses SIFT features for keypoint detection and matching. We also implement an enhanced version of our method in which the keypoint matches are pre-filtered by BundleFusion (named 'BundleFusion+Ours').
As an ablation study, we implement five baseline variants of our method. 1) 'Coplanarity' is our optimization with only coplanarity constraints. Without keypoint matching constraints, our optimization can sometimes be under-determined and needs reformulation to achieve robust registration when not all degrees of freedom (DoFs) can be fixed by coplanarity; details of this formulation can be found in the supplementary material. 2) 'Keypoint' is our optimization with only SIFT keypoint matching constraints. 3) 'No D. in RANSAC' stands for our method without the learned patch descriptor during the voting in frame-to-frame RANSAC; in this case, any two patch pairs can cast a vote if they are geometrically aligned by the candidate transformation. 4) 'No D. in Opt.' means that the optimization objective for coplanarity is not weighted by the matching confidence predicted by our network (w_m in Equations (3) and (4)). 5) 'No D. in Both' is the combination of 3) and 4).


Table 1 reports the ATE RMSE comparison. Our method achieves state-of-the-art results for the first three TUM sequences (the fourth is a flat wall). This is achieved by exploiting our long-range coplanarity matching for robust large-scale loop closure, while utilizing keypoint-based matching to pin down the free DoFs that are not determinable by coplanarity. When combined with BundleFusion keypoints, our method achieves the best results on all sequences. Therefore, our method complements current state-of-the-art methods by providing a means to handle limited frame-to-frame overlap.
The ablation study demonstrates the importance of our learned patch descriptor in our optimization, i.e., our method performs better than all variants that do not include it. It also shows that coplanarity constraints alone are superior to keypoint constraints alone for all sequences except the flat wall (fr3/nst). Using coplanar and keypoint matches together yields the best method overall.
Qualitative Results: Figure 8 shows visual comparisons of reconstructions of sequences from ScanNet [10] and new ones scanned by ourselves. We compare reconstruction results of our method with a state-of-the-art keypoint-based method (BundleFusion) and a planar-structure-based method (Fine-to-Coarse). The low frame overlap makes keypoint-based loop-closure detection fail in BundleFusion. Lost tracking of successive frames provides a poor initial alignment for Fine-to-Coarse, causing it to fail. In contrast, our method successfully detects non-overlapping loop closures through coplanar patch pairs and achieves good-quality reconstructions for these examples without an initial registration. More visual results are shown in the supplementary material.
Effect of Long-Range Coplanarity: To evaluate the effect of long-range coplanarity matching on reconstruction quality, we show in Figure 9 the reconstruction results computed with all, half, and none of the long-range coplanar pairs predicted by our network. We also show a histogram of the coplanar pairs that survived the optimization. From the visual reconstruction results, the benefit of long-range coplanar pairs is apparent. In particular, the larger scene (bottom) benefits more from long-range coplanarity than the smaller one (top). In Figure 8, we also give the number of non-overlapping coplanar pairs after optimization, showing that long-range coplanarity helped in all examples.
5 Conclusion
We have proposed a new planar patch descriptor designed for finding coplanar patches without a priori global alignment. At its heart, the method uses a deep network to map planar patch inputs comprising RGB, depth, and normals to a descriptor space where proximity predicts coplanarity. We expect that deep patch coplanarity prediction provides a useful complement to existing features for SLAM applications, especially in scans with large planar surfaces and little inter-frame overlap.
5.0.1 Acknowledgement
We are grateful to Min Liu, Zhan Shi, Lintao Zheng, and Maciej Halber for their help on data preprocessing. We also thank Yizhong Zhang for the early discussions. This work was supported in part by the NSF (VEC 1539014/1539099, IIS 1421435, CHS 1617236), NSFC (61532003, 61572507, 61622212), Google, Intel, Pixar, Amazon, and Facebook. Yifei Shi was supported by the China Scholarship Council.
References
 [1] Izadi, S., Kim, D., Hilliges, O., Molyneaux, D., Newcombe, R., Kohli, P., Shotton, J., Hodges, S., Freeman, D., Davison, A., Fitzgibbon, A.: KinectFusion: Real-time 3D reconstruction and interaction using a moving depth camera. In: Proc. UIST. (2011) 559–568
 [2] Dai, A., Nießner, M., Zollhöfer, M., Izadi, S., Theobalt, C.: BundleFusion: Real-time globally consistent 3D reconstruction using on-the-fly surface re-integration. ACM Trans. on Graph. 36(3) (2017) 24
 [3] Zhang, Y., Xu, W., Tong, Y., Zhou, K.: Online structure analysis for real-time indoor scene reconstruction. ACM Transactions on Graphics (TOG) 34(5) (2015) 159
 [4] Halber, M., Funkhouser, T.: Fine-to-coarse global registration of RGB-D scans. arXiv preprint arXiv:1607.08539 (2016)
 [5] Lee, J.K., Yea, J.W., Park, M.G., Yoon, K.J.: Joint layout estimation and global multi-view registration for indoor reconstruction. arXiv preprint arXiv:1704.07632 (2017)
 [6] Ma, L., Kerl, C., Stückler, J., Cremers, D.: CPA-SLAM: Consistent plane-model alignment for direct RGB-D SLAM. In: Robotics and Automation (ICRA), 2016 IEEE International Conference on, IEEE (2016) 1285–1291
 [7] Trevor, A.J., Rogers, J.G., Christensen, H.I.: Planar surface SLAM with 3D and 2D sensors. In: Robotics and Automation (ICRA), 2012 IEEE International Conference on, IEEE (2012) 3041–3048
 [8] Zhang, E., Cohen, M.F., Curless, B.: Emptying, refurnishing, and relighting indoor spaces. ACM Transactions on Graphics (TOG) 35(6) (2016) 174
 [9] Huang, J., Dai, A., Guibas, L., Nießner, M.: 3DLite: Towards commodity 3D scanning for content creation. ACM Transactions on Graphics (TOG) (2017)
 [10] Dai, A., Chang, A.X., Savva, M., Halber, M., Funkhouser, T., Nießner, M.: ScanNet: Richly-annotated 3D reconstructions of indoor scenes. In: CVPR. (2017)
 [11] Sturm, J., Engelhard, N., Endres, F., Burgard, W., Cremers, D.: A benchmark for the evaluation of RGB-D SLAM systems. In: Proc. IROS. (Oct. 2012)
 [12] Choi, S., Zhou, Q.Y., Koltun, V.: Robust reconstruction of indoor scenes. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. (2015) 5556–5565
 [13] Newcombe, R.A., Izadi, S., Hilliges, O., Molyneaux, D., Kim, D., Davison, A.J., Kohli, P., Shotton, J., Hodges, S., Fitzgibbon, A.: KinectFusion: Real-time dense surface mapping and tracking. In: Proc. ISMAR. (2011) 127–136
 [14] Nießner, M., Zollhöfer, M., Izadi, S., Stamminger, M.: Real-time 3D reconstruction at scale using voxel hashing. ACM TOG (2013)
 [15] Chen, J., Bautembach, D., Izadi, S.: Scalable real-time volumetric surface reconstruction. ACM TOG 32(4) (2013) 113
 [16] Keller, M., Lefloch, D., Lambers, M., Izadi, S., Weyrich, T., Kolb, A.: Real-time 3D reconstruction in dynamic scenes using point-based fusion. In: Proc. 3DV, IEEE (2013) 1–8
 [17] Steinbruecker, F., Sturm, J., Cremers, D.: Volumetric 3D mapping in real-time on a CPU. In: 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China (2014)
 [18] Whelan, T., Leutenegger, S., Salas-Moreno, R.F., Glocker, B., Davison, A.J.: ElasticFusion: Dense SLAM without a pose graph. In: Proc. RSS, Rome, Italy (July 2015)
 [19] Wang, R., Schwörer, M., Cremers, D.: Stereo DSO: Large-scale direct sparse visual odometry with stereo cameras. arXiv preprint arXiv:1708.07878 (2017)
 [20] Park, J., Zhou, Q.Y., Koltun, V.: Colored point cloud registration revisited. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. (2017) 143–152
 [21] Lowe, D.G.: Object recognition from local scale-invariant features. In: Computer Vision, 1999. The Proceedings of the Seventh IEEE International Conference on. Volume 2., IEEE (1999) 1150–1157
 [22] Bay, H., Tuytelaars, T., Van Gool, L.: SURF: Speeded up robust features. Computer Vision–ECCV 2006 (2006) 404–417
 [23] Rublee, E., Rabaud, V., Konolige, K., Bradski, G.: ORB: An efficient alternative to SIFT or SURF. In: Computer Vision (ICCV), 2011 IEEE International Conference on, IEEE (2011) 2564–2571
 [24] Han, X., Leung, T., Jia, Y., Sukthankar, R., Berg, A.C.: MatchNet: Unifying feature and metric learning for patch-based matching. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. (2015) 3279–3286
 [25] Yi, K.M., Trulls, E., Lepetit, V., Fua, P.: LIFT: Learned invariant feature transform. In: European Conference on Computer Vision, Springer (2016) 467–483
 [26] Byravan, A., Fox, D.: SE3-Nets: Learning rigid body motion using deep neural networks. In: Robotics and Automation (ICRA), 2017 IEEE International Conference on, IEEE (2017) 173–180
 [27] Zeng, A., Song, S., Niessner, M., Fisher, M., Xiao, J., Funkhouser, T.: 3DMatch: Learning local geometric descriptors from RGB-D reconstructions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. (2017) 1802–1811
 [28] Schmidt, T., Newcombe, R., Fox, D.: Self-supervised visual descriptor learning for dense correspondence. IEEE Robotics and Automation Letters 2(2) (2017) 420–427
 [29] Concha, A., Civera, J.: DPPTAM: Dense piecewise planar tracking and mapping from a monocular sequence. In: Intelligent Robots and Systems (IROS), 2015 IEEE/RSJ International Conference on, IEEE (2015) 5686–5693
 [30] Dou, M., Guan, L., Frahm, J.M., Fuchs, H.: Exploring high-level plane primitives for indoor 3D reconstruction with a handheld RGB-D camera. In: Asian Conference on Computer Vision, Springer (2012) 94–108
 [31] Hsiao, M., Westman, E., Zhang, G., Kaess, M.: Keyframe-based dense planar SLAM. In: Proc. International Conference on Robotics and Automation (ICRA), IEEE. (2017)

 [32] Pietzsch, T.: Planar features for visual SLAM. KI 2008: Advances in Artificial Intelligence (2008) 119–126
 [33] Proença, P.F., Gao, Y.: Probabilistic combination of noisy points and planes for RGB-D odometry. arXiv preprint arXiv:1705.06516 (2017)
 [34] Salas-Moreno, R.F., Glocker, B., Kelly, P.H., Davison, A.J.: Dense planar SLAM. In: Mixed and Augmented Reality (ISMAR), 2014 IEEE International Symposium on, IEEE (2014) 157–164
 [35] Taguchi, Y., Jian, Y.D., Ramalingam, S., Feng, C.: Point-plane SLAM for handheld 3D sensors. In: Robotics and Automation (ICRA), 2013 IEEE International Conference on, IEEE (2013) 5182–5189
 [36] Weingarten, J., Siegwart, R.: 3D SLAM using planar segments. In: Intelligent Robots and Systems, 2006 IEEE/RSJ International Conference on, IEEE (2006) 3062–3067
 [37] Stuckler, J., Behnke, S.: Orthogonal wall correction for visual motion estimation. In: Robotics and Automation, 2008. ICRA 2008. IEEE International Conference on, IEEE (2008) 1–6
 [38] Grisetti, G., Kummerle, R., Stachniss, C., Burgard, W.: A tutorial on graph-based SLAM. IEEE Intelligent Transportation Systems Magazine 2(4) (2010) 31–43
 [39] Henry, P., Krainin, M., Herbst, E., Ren, X., Fox, D.: RGB-D mapping: Using depth cameras for dense 3D modeling of indoor environments. In: The 12th International Symposium on Experimental Robotics (ISER). (2010)
 [40] Zhou, Q.Y., Koltun, V.: Dense scene reconstruction with points of interest. ACM Transactions on Graphics (TOG) 32(4) (2013) 112
 [41] Zhou, Q.Y., Park, J., Koltun, V.: Fast global registration. In: European Conference on Computer Vision, Springer (2016) 766–782

 [42] Schroff, F., Kalenichenko, D., Philbin, J.: FaceNet: A unified embedding for face recognition and clustering. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. (2015) 815–823
 [43] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. (2016) 770–778
 [44] Lin, T.Y., Goyal, P., Girshick, R., He, K., Dollár, P.: Focal loss for dense object detection. arXiv preprint arXiv:1708.02002 (2017)
 [45] Agarwal, S., Mierle, K.: Ceres solver: Tutorial & reference. Google Inc 2 (2012) 72
 [46] Chang, A., Dai, A., Funkhouser, T., Nießner, M., Savva, M., Song, S., Zeng, A., Zhang, Y.: Matterport3D: Learning from RGB-D data in indoor environments. In: Proceedings of the International Conference on 3D Vision (3DV). (2017)
 [47] Endres, F., Hess, J., Engelhard, N., Sturm, J., Cremers, D., Burgard, W.: An evaluation of the RGB-D SLAM system. In: Robotics and Automation (ICRA), 2012 IEEE International Conference on, IEEE (2012) 1691–1696
 [48] Xiao, J., Owens, A., Torralba, A.: SUN3D: A database of big spaces reconstructed using SfM and object labels. In: Proc. ICCV, IEEE (2013) 1625–1632
 [49] Gelfand, N., Ikemoto, L., Rusinkiewicz, S., Levoy, M.: Geometrically stable sampling for the ICP algorithm. In: Proceedings of the International Conference on 3D Digital Imaging and Modeling (3DIM). (2003) 260–267
6 Outline
In this supplemental material, we provide the following additional information and results:

Section 7 provides an overview of the dataset of our coplanarity benchmark (COP).

Section 8 provides further studies of our coplanarity prediction network: a comparison of alternative masking schemes (Section 8.1), an evaluation on patch pairs proposed during real reconstructions (Section 8.2), and more visual results of coplanarity matching (Section 8.3).

Section 9 provides more evaluations of the reconstruction algorithm. Specifically, we first evaluate the robustness of the registration against the initial ratio of incorrect pairs (Section 9.1). We then compare the reconstruction performance of two alternative optimization strategies (Section 9.2). Finally, we show more visual results of reconstructions for scenes from various datasets (Section 9.3).

Section 10 discusses the limitations of our method.

Section 11 describes the coplanarity-only robust registration used for the "Coplanarity only" ablation in the main paper.
7 COP Benchmark Dataset
Figures 10 and 11 provide an overview of our coplanarity benchmark datasets, COP-S (organized by decreasing patch size) and COP-D (organized by increasing patch distance), respectively. For each subset, we show two positive and two negative pairs. Note how non-trivial the negative pairs in our dataset are; see, for example, the negative pairs of S3 and D1.
8 Network Evaluations
This section provides further studies and evaluations of the performance of our coplanarity prediction network.
8.1 Different Masking Schemes
We first investigate several alternative masking schemes for the local and global inputs of our coplanarity network. The proposed masking scheme is summarized as follows (see Figure 12 (right)). The local mask is binary, with the patch of interest in white and the rest of the image in black. The global mask, in contrast, is continuous, with the patch of interest in white and then a smooth decay to black outside the patch boundary.
In Figure 12 we compare our masking scheme (global decay) with several alternatives: 1) distance-based decay at both the local and global scales (both decay); 2) distance-based decay only at the local scale (local decay); 3) no decay at either scale (no decay); and 4) no mask at all (no mask). We test all of these methods over the entire COP-D benchmark dataset and plot the resulting precision-recall (PR) curves. The results demonstrate the advantage of our specific design choice: decay for the global mask, but not for the local one.
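As an illustration, the two masks can be constructed as follows. This is a minimal sketch assuming a boolean patch mask as input; the Gaussian fall-off and its width parameter (decay_sigma) are our own assumptions, not values from the paper:

```python
import numpy as np

def make_masks(patch_mask, decay_sigma=20.0):
    """Build the local (binary) and global (smoothly decaying) masks.

    patch_mask:  boolean H x W array, True inside the patch of interest.
    decay_sigma: hypothetical width (in pixels) of the smooth decay
                 outside the patch boundary.
    Returns (local_mask, global_mask), both float arrays in [0, 1].
    """
    # Local mask: hard binary -- patch in white, rest of the image in black.
    local_mask = patch_mask.astype(np.float32)

    # Global mask: white inside the patch, smooth decay to black outside.
    # Brute-force distance from every pixel to the nearest patch pixel.
    h, w = patch_mask.shape
    ys, xs = np.nonzero(patch_mask)
    patch_px = np.stack([ys, xs], axis=1).astype(np.float32)      # (K, 2)
    yy, xx = np.mgrid[0:h, 0:w]
    grid = np.stack([yy, xx], axis=-1).reshape(-1, 1, 2).astype(np.float32)
    dist = np.sqrt(((grid - patch_px[None]) ** 2).sum(-1)).min(axis=1)

    global_mask = np.exp(-(dist.reshape(h, w) / decay_sigma) ** 2)
    return local_mask, global_mask.astype(np.float32)
```

Inside the patch the distance is zero, so the global mask is exactly 1 there and decays smoothly with distance outside, matching the scheme described above.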
8.2 Performance on Patches Proposed during Reconstructions
Our second study investigates the network's performance under a realistic balance of positive and negative patch pairs. So far, the performance of our coplanarity network has been evaluated on the COP benchmark dataset, which contains comparable numbers of positive and negative examples. To evaluate its performance in a reconstruction setting, we test on patch pairs proposed during the reconstruction of two scenes (the full sequences of 'fr1/desk' and 'fr2/xyz' from the TUM dataset). Ground-truth coplanarity matches are determined from the ground-truth alignment provided with the TUM dataset.
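The ground-truth labeling step can be sketched as follows. The plane representation (a centroid plus unit normal per patch), the 4x4 camera-to-world pose convention, and the angle/distance thresholds are all our assumptions for illustration:

```python
import numpy as np

def is_coplanar_gt(p1, n1, T1, p2, n2, T2,
                   angle_thresh_deg=10.0, dist_thresh=0.02):
    """Label a patch pair coplanar using ground-truth camera poses.

    (p, n): patch centroid and unit normal in the camera frame.
    T:      4x4 ground-truth camera-to-world pose of the patch's frame.
    The angle/distance thresholds are illustrative assumptions.
    """
    def to_world(p, n, T):
        pw = T[:3, :3] @ p + T[:3, 3]
        nw = T[:3, :3] @ n
        return pw, nw / np.linalg.norm(nw)

    p1w, n1w = to_world(p1, n1, T1)
    p2w, n2w = to_world(p2, n2, T2)

    # Normals must agree (up to sign) ...
    if abs(n1w @ n2w) < np.cos(np.deg2rad(angle_thresh_deg)):
        return False
    # ... and each centroid must lie on the other patch's plane.
    d12 = abs((p1w - p2w) @ n2w)
    d21 = abs((p2w - p1w) @ n1w)
    return max(d12, d21) < dist_thresh
```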
Figure 13 plots the PR curves for both intra- and inter-fragment reconstructions. The values for intra-fragment are averaged over all fragments. For patches from a real reconstruction, our network achieves a precision of , when the recall rate is . This accuracy is sufficient for our robust frame-registration optimization, as can be seen from the evaluation in Figure 15; see Section 9.1.
8.3 More Visual Results of Coplanarity Matching
Figure 14 shows some visual results of coplanarity matching. Given a query patch in one frame, we show all patches in another frame, color-coded by the dissimilarity predicted by our coplanarity network (blue: small; red: large). The results show that our network produces a correct coplanarity embedding, even for patches observed across many views.
9 Reconstruction Evaluations
9.1 Robustness to Initial Coplanarity Accuracy
To evaluate the robustness of our optimization for coplanarity-based alignment, we inspect how tolerant the optimization is to the initial accuracy of the coplanarity prediction. In Figure 15, we plot the reconstruction error of our method on two full sequences from the TUM dataset, with a varying ratio of incorrect input pairs. In our method, a pair of patches is input to the optimization as a hypothetical coplanar pair if their feature distance in the embedding space is smaller than . The varying incorrect ratios are thus obtained by gradually introducing more incorrect predictions through adjustment of this feature-distance threshold.
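The pair-proposal step described above amounts to thresholding pairwise L2 distances between learned descriptors. A sketch (the array shapes and the threshold name tau are our own):

```python
import numpy as np

def propose_coplanar_pairs(feat_a, feat_b, tau):
    """Propose hypothetical coplanar patch pairs between two frames.

    feat_a: (Na, D) descriptor array for patches in frame A.
    feat_b: (Nb, D) descriptor array for patches in frame B.
    tau:    feature-distance threshold; raising it admits more (and more
            incorrect) pairs, which is how the varying incorrect ratios
            in this experiment are produced.
    Returns a list of (i, j) index pairs with ||f_i - f_j||_2 < tau.
    """
    # Pairwise L2 distances between the two descriptor sets.
    diff = feat_a[:, None, :] - feat_b[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    ii, jj = np.nonzero(dist < tau)
    return list(zip(ii.tolist(), jj.tolist()))
```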
Reconstruction error is measured by the absolute trajectory error (ATE), i.e., the root-mean-square error (RMSE) of camera positions along a trajectory. The results demonstrate that our method is quite robust to the initial precision of coplanarity matching, for both intra- and inter-fragment reconstructions. In particular, the experiments show that our method is robust down to a precision of  (an incorrect ratio of ), while keeping the recall rate no lower than .
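The ATE metric used here can be computed as below. This sketch assumes the estimated trajectory is already expressed in the ground-truth coordinate frame; a full evaluation would first align the two trajectories with a rigid transform:

```python
import numpy as np

def absolute_trajectory_error(gt_positions, est_positions):
    """ATE: RMSE of corresponding camera positions along a trajectory.

    Both arguments are (N, 3) arrays of camera positions, with row i of
    one array corresponding to row i of the other.
    """
    diff = np.asarray(gt_positions) - np.asarray(est_positions)
    return float(np.sqrt(np.mean(np.sum(diff ** 2, axis=1))))
```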
9.2 Performance of Different Optimization Strategies
An alternative strategy for solving Equation (2) is to optimize transformations and selection variables jointly. In Figure 16, we report the reconstruction error over iterations on two sequences ('fr1/desk' and 'fr2/xyz') from the TUM dataset, for both alternating and joint optimization. The reconstruction error is calculated by averaging the ATE of the intra-fragment and inter-fragment reconstructions. The results show that alternating optimization converges better. Similar behavior can be observed on other sequences from the TUM dataset.
9.3 More Visual Results of Reconstruction
Figure 17 shows more visual results of reconstruction, including sequences from the ScanNet dataset [10] and new ones scanned by ourselves. The sequences scanned by ourselves have very sparse loop closures due to missing parts. Our method works well for all of these examples. Figure 18 shows the reconstruction of sequences from the Sun3D dataset [48]. Since the registration of Sun3D sequences is typically shown without fusion in previous work (e.g., [48, 4]), we only show the point clouds. Figure 19 shows two reconstruction examples of self-scanned scenes containing multiple rooms.
10 Limitations, Failure Cases and Future work
Our work has several limitations, which suggest topics for future research.
First, coplanarity correspondences alone are not always enough to constrain camera poses uniquely in some environments – e.g., the pose of a camera viewing only a single flat wall will be under-constrained. Therefore, coplanarity is not a replacement for traditional features, such as keypoints, lines, etc.; rather, we argue that coplanarity constraints provide additional signal and constraints which are critical in many scanning scenarios, thus helping to improve the reconstruction results. This becomes particularly obvious in scans with a sparse temporal sampling of frames.
Second, in cases where short-range coplanar patches dominate long-range ones (e.g., a bending wall), our method may reconstruct an overly flat surface, due to coplanarity regularization by false-positive coplanar patch pairs between adjacent frames. For example, Figure 20 shows a tea room scanned by ourselves. The top wall is not flat, but the false-positive coplanar pairs detected between adjacent frames over-regularize the registration, mistakenly flattening it. This in turn prevents the loop from being closed at the wall at the bottom.
Third, since the network prediction relies on color, depth, and normal information, the predictions can be wrong when the inputs fail to provide sufficient evidence of coplanarity. For example, two white walls without any context will always be predicted as coplanar, even when they are not.
Last, our optimization is currently a computational bottleneck – it takes approximately  minutes to perform the robust optimization for the typical scans shown in the paper. Besides exploiting the highly parallelizable intra-fragment registrations, a more efficient optimization is a worthy direction for future investigation. We are also interested in testing our method on a broader range of RGB-D datasets (e.g., the dataset in [20]).
11 Coplanarityonly Robust Registration
At lines 526–529 of the main paper and in Table 1b, we provide an ablation study in which our method is compared to a variant (called "Coplanarity only") that uses only predicted matches of coplanar patches to constrain camera poses – i.e., without keypoint matches. In order to produce that one comparative result, we implemented an augmented version of our algorithm that includes a new method for selecting coplanar patch pairs, so as to increase the chances of fully constraining the camera-pose DoFs with the selected patch pairs. The following subsections describe that version of the algorithm. Although it is not normally part of our method, we describe it in full here for the sake of reproducibility of the "Coplanarity only" comparison provided in the paper.
11.1 Formulation
Objective Function:
The objective of coplanarityonly registration contains three terms, including the coplanarity data term (Equation (3) of the main paper), the coplanarity regularization term (Equation (4) of the main paper), and a newly introduced frame regularization term for regularizing the optimization based on the assumption that the transformation between adjacent frames is small:
$E = E_{\mathrm{data}} + E_{\mathrm{reg}} + E_{\mathrm{frame}}$    (7)
The frame regularization term makes sure the system is always solvable, by weakly constraining the transformations of adjacent frames to be as close as possible:
$E_{\mathrm{frame}} = w \sum_{f} \sum_{p \in S_f} \left\| \mathbf{T}_f(p) - \mathbf{T}_{f+1}(p) \right\|^2$    (8)

where $S_f$ is a sparse set of points sampled from frame $f$, $\mathbf{T}_f$ is the transformation of frame $f$, and $w$ is a small default weight.
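As a sketch, the frame regularization term can be evaluated as follows. The pose convention (4x4 camera-to-world matrices), the per-frame sample sets, and the default weight value are our assumptions for illustration, since the original symbols and value are not reproduced here:

```python
import numpy as np

def frame_regularization(poses, sample_points, weight=1e-3):
    """Evaluate the frame regularization term of Equation (8) (sketch).

    poses:         list of 4x4 camera-to-world transformations, one per frame.
    sample_points: list where sample_points[f] is an (M, 3) array of sparse
                   points sampled from frame f.
    weight:        the small regularization weight (1e-3 is an assumption).
    """
    def apply(T, pts):
        return pts @ T[:3, :3].T + T[:3, 3]

    energy = 0.0
    for f in range(len(poses) - 1):
        pts = np.asarray(sample_points[f])
        # Penalize disagreement between adjacent-frame transformations
        # applied to the same sparse point set.
        d = apply(poses[f], pts) - apply(poses[f + 1], pts)
        energy += np.sum(d ** 2)
    return weight * energy
```

Because the weight is small, the term only weakly ties adjacent frames together, keeping the system solvable without dominating the coplanarity terms.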
When using coplanarity constraints only (without keypoints), our coplanarity-based alignment may become under-determined or unstable along some DoF when there are too few coplanar patch pairs that can pin down that DoF. In this case, we must be more willing to keep pairs constraining that DoF, to keep the system stable. To this end, we devise an anisotropic control variable for patch-pair pruning: if some DoF is detected to be unstable and enforcing a pair of patches to be coplanar can constrain it, we set the pruning parameter for that pair to be large. The alignment stability is estimated by analyzing the eigenvalues of the 6-DoF alignment error covariance matrix (the gradient of the point-to-plane distances w.r.t. the six DoFs of the rotation and translation), as in [49] (see Section 11.3 for details). Since the stability changes during the optimization, the pruning parameters should be updated dynamically, and we describe an optimization scheme with dynamic updates below.

11.2 Optimization
The optimization process is given in Algorithm 1. The core part is solving Equation (7) via alternating optimization of the transformations and the selection variables (the inner loop of Algorithm 1). The iterative process converges when the relative value change of each unknown falls below a threshold, which usually takes only a few iterations.
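The alternating scheme with a relative-change stopping test can be sketched generically as below; the two subproblem solvers are placeholders for the actual transformation and selection-variable updates, which are problem-specific:

```python
import numpy as np

def alternating_minimization(solve_transforms, solve_selection,
                             x0, s0, rel_tol=1e-4, max_iters=50):
    """Generic skeleton of the inner alternating loop (a sketch).

    solve_transforms(s) -> x : optimal transforms for fixed selection vars.
    solve_selection(x)  -> s : optimal selection vars for fixed transforms.
    Stops when the relative change of every unknown drops below rel_tol.
    """
    x, s = np.asarray(x0, float), np.asarray(s0, float)
    for _ in range(max_iters):
        x_new = np.asarray(solve_transforms(s), float)
        s_new = np.asarray(solve_selection(x_new), float)
        # Relative change of each block of unknowns.
        rel = max(
            np.max(np.abs(x_new - x) / (np.abs(x) + 1e-12)),
            np.max(np.abs(s_new - s) / (np.abs(s) + 1e-12)),
        )
        x, s = x_new, s_new
        if rel < rel_tol:
            break
    return x, s
```

On a toy two-variable problem this converges to the joint minimizer; in the real system each solver would minimize Equation (7) over its own block of unknowns.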
A key step of the optimization is stability analysis and stability-based anisotropic pair pruning (also in Algorithm 1). Since our coplanarity-based alignment is inherently orientation-based, it suffices to inspect the stability of the three translational DoFs. Given a frame, we estimate its translational stability values along each of the X, Y, and Z axes, based on the alignment of all frame pairs involving that frame (see Section 11.3 for details). The stability of a frame along a given DoF is checked by testing whether the corresponding stability value is greater than a threshold.
Stability-based anisotropic pair pruning is achieved by dynamically setting the pruning parameter of each patch pair in the coplanarity regularization term (Equation (4) of the main paper). To this end, we maintain, for each frame and each translational DoF, an independent pruning parameter. These parameters are uniformly set to a relatively large initial value (in meters) and are decreased in each outer loop, gradually allowing more pairs to be pruned. If the stability value corresponding to some parameter falls below the threshold, however, that parameter stops decreasing, to avoid instability. At any given time, the pruning parameter of a patch pair is taken from the per-frame parameters of its two frames, for the DoF closest to the normal of the patch. The whole process terminates when the stability of all DoFs falls below the threshold.
To demonstrate the ability of our optimization to prune incorrect patch pairs, we plot in Figure 21 the ratio of correct coplanarity matches at each iteration step, measured against a ground-truth set. We treat a pair as kept if its selection variable is above a threshold, and as discarded otherwise. As more and more incorrect pairs are pruned, the ratio increases while the registration error (measured by the absolute camera trajectory error (ATE); see Section 4 of the paper) decreases.
11.3 Stability Analysis
The stability analysis of coplanar alignment is inspired by the work of Gelfand et al. [49] on geometrically stable sampling for point-to-plane ICP. Consider the point-to-plane alignment problem found in the data term of our coplanarity-based registration (see Equation (3) in the main paper). Assume we have a collection of points $p_i$ sampled from one patch, and a plane, with unit normal $n$, defined by the other patch. We want to determine the optimal rotation $R$ and translation $t$ to be applied to the points $p_i$ to bring them into coplanar alignment with the plane. In our formulation, source and target patches are also exchanged to compute the alignment error bilaterally (see Line 436 in the paper); below we use only one patch as the source for simplicity of presentation.
We want to minimize the alignment error

$E = \sum_i \left[ (R\,p_i + t - q_i) \cdot n \right]^2$    (9)

with respect to the rotation $R$ and translation $t$, where $q_i$ is the projection of $p_i$ onto the target plane.
The rotation is nonlinear, but can be linearized by assuming that incremental rotations will be small:

$R\,p_i \approx p_i + r \times p_i$    (10)

for rotations $r = (r_x, r_y, r_z)^\top$ around the X, Y, and Z axes, respectively. This is equivalent to treating the transformation of $p_i$ as a displacement by the vector $r \times p_i + t$. Substituting this into Equation (9), we therefore aim to find the 6-vector $x = (r^\top, t^\top)^\top$ that minimizes:

$E \approx \sum_i \left[ (p_i - q_i) \cdot n + r \cdot (p_i \times n) + t \cdot n \right]^2$    (11)
We solve for the aligning transformation by taking partial derivatives of Equation (11) with respect to the transformation parameters in $r$ and $t$. This results in a linear system $C x = b$, where $x = (r^\top, t^\top)^\top$ and $b$ is the residual vector. $C$ is a "covariance matrix" of the rotational and translational components, accumulated from the sample points:

$C = \sum_i \begin{pmatrix} p_i \times n \\ n \end{pmatrix} \begin{pmatrix} p_i \times n \\ n \end{pmatrix}^\top$
This covariance matrix encodes the increase in the alignment error due to the movement of the transformation parameters away from their optimum $x^*$:

$\Delta E = \delta x^\top C \, \delta x, \quad \delta x = x - x^*$    (12)
The larger this increase, the greater the stability of the alignment, since the error landscape will have a deep, well-defined minimum. Conversely, if there are incremental transformations that cause only a small increase in alignment error, the alignment is relatively unstable along that degree of freedom. The stability analysis can thus be conducted by finding the eigenvalues of the matrix $C$: any small eigenvalues indicate a low-confidence alignment. In our paper, we analyze translational stabilities based on the eigenvalues corresponding to the three translations.
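A sketch of this analysis: accumulate the 6x6 covariance from sample points and plane normals, then inspect its eigenvalues. The function name and the eigenvalue threshold eps are our own illustrative choices:

```python
import numpy as np

def alignment_stability(points, normals, eps=1e-6):
    """Point-to-plane alignment stability in the spirit of [49] (sketch).

    points:  (N, 3) sample points p_i on the source patches.
    normals: (N, 3) unit normals n_i of the corresponding target planes.
    Builds C = sum_i c_i c_i^T with c_i = [p_i x n_i; n_i] and returns its
    eigenvalues (ascending), eigenvectors, and a per-eigenvalue instability
    flag. Small eigenvalues indicate under-constrained DoFs.
    """
    P = np.asarray(points, float)
    N = np.asarray(normals, float)
    c = np.concatenate([np.cross(P, N), N], axis=1)   # (N, 6) rows c_i
    C = c.T @ c                                       # 6x6 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)              # ascending eigenvalues
    unstable = eigvals < eps
    return eigvals, eigvecs, unstable
```

For example, points sampled from a single flat wall leave three eigenvalues at zero (the two in-plane translations and the in-plane rotation), matching the under-constrained single-wall case discussed in Section 10.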