Object-aware Monocular Depth Prediction with Instance Convolutions

With the advent of deep learning, estimating depth from a single RGB image has recently received a lot of attention, being capable of empowering many different applications ranging from path planning for robotics to computational cinematography. Nevertheless, while the depth maps are in their entirety fairly reliable, the estimates around object discontinuities are still far from satisfactory. This can be attributed to the fact that the convolutional operator naturally aggregates features across object discontinuities, resulting in smooth transitions rather than clear boundaries. Therefore, in order to circumvent this issue, we propose a novel convolutional operator which is explicitly tailored to avoid feature aggregation of different object parts. In particular, our method is based on estimating per-part depth values by means of superpixels. The proposed convolutional operator, which we dub "Instance Convolution", then considers each object part individually on the basis of the estimated superpixels. Our evaluation on the NYUv2 and iBims datasets clearly demonstrates the superiority of Instance Convolutions over the classical convolution at estimating depth around occlusion boundaries, while producing comparable results elsewhere. Code will be made publicly available upon acceptance.


I Introduction

Estimating depth from a single RGB image is an important research field due to its wide range of applications in robotics and AR [almagro2019, tateno2017cvpr, Shih3DP20]. Nevertheless, predicting accurate depth from monocular input is an inherently ill-posed problem. For the human perceptual system, depth perception is a simpler task, as we heavily rely on prior knowledge about the environment. Similarly, deep learning has recently proven to be particularly suited for this problem, as a network is likewise capable of leveraging visual priors when making a prediction [saxena2005, hoiem2005].

Fig. 1: The interpolation effect of classical convolutions induces smeared occlusion boundaries in the predicted depth map (here on an image taken from the iBims [koch2019] dataset), as visible in (b), whereas the proposed Instance Convolutions improve on this drawback (c). Note that, while this effect is not as evident when visualizing the 2D depth map, it is clearly revealed once the depth map is back-projected into 3D.

With the rise of deep learning and the increasing availability of appropriate, large datasets [silberman2012, koch2019, dai2017scannet, cityscapes2016, Ranftl2020], depth prediction from single images has recently made a huge leap forward in terms of robustness and accuracy [wang2020, Miangoleh2021Boosting, yin2019virtualnormal, jiao2018deeper, fu2018dorn]. Yet, despite those large improvements, current methods still often fall short of adequate quality for specific robotics applications, such as path planning or robotic interventions, where robots need to operate in hazardous environments with low-albedo surfaces and clutter [almagro2019, koch2019, ramamon2020]. One of the most limiting factors is the poor quality around object edges and surfaces, which directly degrades 3D perception and can result in a robot missing objects altogether. The predicted depth maps are typically blurry around object boundaries due to the nature of 2D convolutions and bilinear upsampling. Since the kernel aggregates features across object boundaries, the estimated depth map commonly ends up being an undesired interpolation between fore- and background. Consequently, the associated 3D point clouds do not reflect the true 3D structure (see Fig. 1). In this work, our motivation is to capture object-based depth values more sharply and completely, while preserving the global consistency with the rest of the scene.
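To make this effect concrete, a depth map can be back-projected into a point cloud with a standard pinhole model; the following minimal NumPy sketch (with placeholder intrinsics, not tied to any specific camera) illustrates how smeared boundary depths turn into points floating between fore- and background.

```python
import numpy as np

def backproject_depth(depth, fx, fy, cx, cy):
    """Back-project a depth map (H, W), in meters, into an (H*W, 3) point cloud
    using a pinhole camera model. Intrinsics are illustrative placeholders."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Depth values interpolated across an occlusion boundary end up as "flying"
# points strung between fore- and background once back-projected into 3D.
```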

To circumvent the smeared boundary problem, i.e. to avoid undesired depth interpolation across different segments, we are interested in an operation that extracts features within a continuous object segment. To achieve this, we employ a novel convolution operation inspired by the sparse convolutions introduced by Uhrig et al. [uhrig2017]. Sparse convolutions are characterized by a mask, which defines the region in which the convolution operates. While sparse convolutions typically rely on a single mask throughout the entire image, in this work the mask depends on the pixel location. Given a filter window, we define the mask as the feature region that belongs to the same segment as the central pixel of that window. In other words, only the pixels that belong to a certain object contribute to its feature extraction. We name our convolution operator Instance Convolution.

Using Instance Convolutions to learn object depth values should make the depth at object edges sharper than with regular convolutions, i.e. prevent the interpolation problem at occlusion boundaries. Despite this advantage in terms of boundary sharpness, Instance Convolutions come with an obvious drawback: an architecture based solely on such an operation would not be able to capture object extent and global context. This inherent scale-distance ambiguity would make it impossible to obtain metric depth. Therefore, we propose an architecture that first extracts global features, so as to utilize scene priors, via a common backbone comprised of regular convolutions. We then append a block composed of Instance Convolutions to rectify the features within an object segment, resulting in sharper depth across occlusion boundaries. Notice that we chose an optimization-based approach [achanta2012slic], producing superpixels, to obtain the corresponding segmentation, as a deep learning-driven method would simply shift the problem of clear boundaries towards the segmenter. In addition, our method does not require any semantic information, but rather only needs to know which pixels belong to the same discontinuity-free object part.

Our contributions can be summarized as follows:

  • We propose a novel end-to-end method for depth estimation from monocular data, which explicitly enforces clear object boundaries by means of superpixels.

  • To this end, we propose a dynamic convolutional operator, Instance Convolution, which aggregates only features appertaining to the same segment as the center pixel with respect to the current kernel location.

  • Further, as we are required to properly propagate the correct segment information throughout the whole network, we additionally introduce the "center-pooling" operator, which keeps track of the center pixel's segment id.

We validate the usefulness of Instance Convolutions for edge-aware depth estimation on two commonly employed benchmarks, namely NYUv2 [silberman2012] and iBims [koch2019]. Thereby, we show that Instance Convolutions can consistently improve object boundaries regardless of the chosen backbone depth estimator.

II Related Work

Supervised monocular depth prediction

The first attempts to tackle monocular depth estimation were proposed by [saxena2005, saxena2009] via hand-engineered features and Markov Random Fields (MRF). Later, the advancements in deep learning established a new era for depth estimation, starting with Eigen et al. [eigen2014]. One of the main problems in learned depth regression occurs in the decoder part: due to the successive convolutional layers of neural networks, fine details of the input images are lost. A number of works approach this problem in different ways. Eigen & Fergus [eigen2015] introduced multi-scale networks to make depth predictions at multiple scales. Laina et al. [laina2016] build upon a ResNet architecture with improved up-sampling blocks to reduce information loss in the decoding phase. Xu et al. [xu2017] proposed an approach that combines deep learning with conditional random fields (CRF), where CRFs are used to fuse the multi-scale deep features.

More recently, a line of works pursued multitask learning approaches that predict semantic or instance labels [jiao2018deeper], depth edges, and normals [ramamonjisoa2019sharpnet, zhang2019pap, lee2019big] to improve depth prediction. Kendall et al. [kendall2017multi] investigated the effect of uncertainty estimation for weighting loss contributions in scene understanding tasks. Yin et al. [yin2019virtualnormal] estimated the 3D point cloud from the predicted depth map and used a local surface normal loss.

Fig. 2: Exemplary depth prediction for the state-of-the-art method SharpNet [ramamonjisoa2019sharpnet]. Notice the smooth change in depth around the chair boundary (pink circle). Although SharpNet particularly focuses on producing sharp boundaries, it still often struggles to do so, which we attribute to the use of standard 2D convolutions.

Occlusion boundaries

All of the above works aim to learn a globally consistent depth map, yet do not focus on fine local details, often resulting in blurred boundaries and deformed planar surfaces. Consistent with our work, Hu et al. [hu2018boundary] focus on accurate object boundaries through gradient- and normal-based losses. Ramamonjisoa et al. [ramamonjisoa2019sharpnet] aim to improve predicted depth boundaries by estimating normals and edges along with depth and establishing consensus between them. Several works apply bilateral filters to increase occlusion gaps [shih2020] or learn energy-based, image-driven refinement focusing on edges [niklaus2019, ramamon2020].

Sparse convolutions

Sparsity in convolutions has been investigated in several works [liu2015sparseconv, wen2016nips] aimed at improving the efficiency of neural networks by reducing the number of parameters, i.e. increasing sparsity. The Minkowski Engine was proposed as an efficient 3D spatio-temporal convolution built on sparse convolutions [choy2019minkowski]. In contrast to such works, Uhrig et al. used sparsity masks [uhrig2017] to improve structural understanding in the case of sparse inputs, e.g. for depth map completion [zhao2021adaptive, Lee_2021_CVPR]. Some works used a similar convolution operator, called partial or gated convolution, for image editing and inpainting tasks [liu2018partialinpainting, yu2019gatedconv] to discard content-free regions. In our work, we are also interested in computing the convolution only on a subset of input pixels. Differently from these works, our masks do not define a random or normally distributed sparse set of pixels; instead, they change dynamically for each pixel position to extract features within the same object segment.

Over-segmentation methods

The proposed Instance Convolution operation relies on the detection of meaningful object segments in a given scene. One alternative would be to identify all objects in a scene, either via annotated instance labels or via learned segmentation models, e.g. Mask R-CNN [he2017maskrcnn]. The former requires a heavy amount of annotation work for large datasets. The latter requires the objects in the corresponding dataset to match the pre-trained models and can additionally lead to inaccurate edges. To detect objects without pre-trained models and labeled data, in this work we leverage over-segmentation methods. Among the available methods for over-segmentation [achanta2012slic, uziel2019bayesian, levinshtein2009turbopixels], in our experiments we mainly focus on superpixel segmentation (SLIC) [achanta2012slic] and Bayesian adaptive superpixel segmentation (BASS) [uziel2019bayesian] (see Fig. 3).
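As an illustration, such superpixels can be obtained with off-the-shelf implementations; the snippet below is a minimal sketch using scikit-image's SLIC, with a hypothetical input path and the segment count used later in our experiments.

```python
import numpy as np
from skimage import io
from skimage.segmentation import slic

# Over-segment an RGB image into superpixels with SLIC; 64 segments and
# sigma=1 mirror the setting reported in the experiments section.
image = io.imread("example_rgb.png")                 # hypothetical input image
segments = slic(image, n_segments=64, sigma=1, start_label=0)
print(segments.shape, np.unique(segments).size)      # (H, W) map of segment ids
```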

III Methodology

In this section, the problem statement and the individual components of the proposed method for boundary-aware monocular depth prediction (MDP) are presented.

III-A Depth Estimation Using Deep Learning

Monocular depth estimation has recently received a lot of attention in the literature and several different methods have been proposed [wang2020, Miangoleh2021Boosting, Ranftl2020]. Interestingly, even very early methods noticed the performance degradation around occlusion boundaries, and various measures, such as skip-connections [eigen2015, laina2016] or Conditional Random Fields [xu2017, liu2015], have been put in place to counteract the smearing effect. Nevertheless, despite those measures, the proposed methods still cannot capture the high frequencies of object discontinuities due to the inherent nature of 2D convolutions.

Fig. 3: Different segmentation methods. The top left shows the input RGB image, while the top right illustrates the ground truth segmentation. Notice how the ground truth segmentation does not consider intra-object discontinuities. Therefore, we leverage superpixels to account for any discontinuities based on the raw RGB input. As examples, we show the superpixels obtained with SLIC [achanta2012slic] using 64 segments (bottom left) and with BASS [uziel2019bayesian] (bottom right).
Fig. 4: a) Schematic overview of the proposed architecture. The input image is fed into a state-of-the-art depth prediction model (e.g. SharpNet [ramamonjisoa2019sharpnet] or BTS [lee2019big]) to obtain global image features. The extracted depth features, along with the object masks, are then passed into the Instance Convolution block to predict a sharp depth map. b) Masking mechanism of Instance Convolutions. The abstract image on the left contains a chair represented in green. The feature values inside the masked region of the kernel are 0.3 and 0.6, and the mask for the kernel marks the object pixels. If a normal convolution with a binary mask multiplication were used in this example, the result would be 0.9, whereas the Instance Convolution, which normalizes by the number of valid pixels, yields 0.45. The successive mask is obtained by taking the center value of the current kernel mask.

The classical convolution kernel simultaneously operates on all inputs within the kernel region, performing a weighted summation. Consequently, features originating from different object parts are simply fused, which in turn causes a blurring of the object boundaries, i.e. the edges that separate the object from the background in 3D. A corresponding example is shown in Fig. 2. Notice that this effect is more visible when viewing the associated 3D point cloud.

III-B Instance Convolutions for Boundary-Aware Depth Estimation

To avoid aggregation of features appertaining to different image layers, we thus propose to leverage superpixels in an effort to guide the convolution operator. In particular, inspired by Sparsity Invariant Convolutions [uhrig2017], we propose Instance Convolutions, which apply the weighted summation only to pixels belonging to the same segment as the central pixel. Formally, this can be written as follows:

$$f_{u,v} = \frac{\sum_{i,j=-k}^{k} \mathbb{1}\left[s_{u+i,v+j} = s_{u,v}\right] w_{i,j}\, x_{u+i,v+j}}{\sum_{i,j=-k}^{k} \mathbb{1}\left[s_{u+i,v+j} = s_{u,v}\right] + \epsilon} + b \qquad (1)$$

where $x_{u,v}$ denotes the observed feature at pixel $(u,v)$, $s_{u,v}$ its segment id, and $\mathbb{1}[\cdot]$ is the indicator function which returns $1$ if the segment id equals the segment id of the central pixel. The learnable convolution weights are denoted by $w$ with a kernel size of $(2k+1)\times(2k+1)$, and $b$ is the bias. Finally, $\epsilon$ denotes a small constant, added to avoid division by zero. Notice that if all pixels within the window belong to the same segment, this operation turns into a regular convolution.
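For illustration, the following PyTorch sketch shows one possible implementation of Eq. (1) via unfolding. It is a minimal illustration rather than our exact implementation; the tensor layout, weight initialization, and the padding sentinel are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InstanceConv2d(nn.Module):
    """Sketch of Instance Convolution (Eq. 1): features are aggregated only over
    kernel positions whose segment id matches that of the central pixel."""
    def __init__(self, in_ch, out_ch, kernel_size=3, eps=1e-6):
        super().__init__()
        self.k = kernel_size
        self.eps = eps
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, kernel_size, kernel_size) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_ch))

    def forward(self, x, seg):
        # x: (B, C, H, W) features, seg: (B, 1, H, W) integer segment ids
        B, C, H, W = x.shape
        pad = self.k // 2
        x_u = F.unfold(x, self.k, padding=pad)                   # (B, C*k*k, H*W)
        # Pad the segment map with a sentinel so border padding never matches a real id.
        seg_p = F.pad(seg.float(), (pad, pad, pad, pad), value=-1.0)
        s_u = F.unfold(seg_p, self.k)                            # (B, k*k, H*W)
        center = seg.float().reshape(B, 1, H * W)                # segment id of the central pixel
        valid = (s_u == center).float()                          # (B, k*k, H*W) same-segment mask
        x_u = x_u * valid.repeat(1, C, 1)                        # zero out other-segment features
        w = self.weight.reshape(self.weight.shape[0], -1)        # (out_ch, C*k*k)
        num = torch.einsum('oc,bcp->bop', w, x_u)                # masked weighted sum
        den = valid.sum(dim=1, keepdim=True) + self.eps          # number of same-segment pixels
        out = num / den + self.bias.view(1, -1, 1)
        return out.reshape(B, -1, H, W)
```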

Since our architecture follows the standard encoder-decoder methodology (Section III-C), we have to adequately propagate the segment information through the network. However, as MaxPooling can lead to a loss of spatial information, we introduce the center-pooling operator, which simply forwards the segment id of the central pixel at each downsampling operation to preserve the object boundaries. During upsampling, the segmentation map at the corresponding resolution (either the original one or one downsampled in previous layers) is used directly, as it is already available. For a detailed explanation of Instance Convolutions, see Fig. 4 (b).
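A minimal sketch of the center-pooling idea is given below; approximating the window center by strided sampling of the segment map is an assumption made for illustration, not a description of our exact implementation.

```python
import torch

def center_pool(seg, stride=2):
    """Center pooling sketch: when the feature map is downsampled, forward the
    segment id located at the pixel aligned with each output position instead of
    max/avg-pooling ids (which could mix segments)."""
    # seg: (B, 1, H, W) integer segment ids
    return seg[:, :, ::stride, ::stride]

# Usage: keep a pyramid of segment maps matching every feature resolution.
seg = torch.randint(0, 64, (1, 1, 480, 640))
seg_pyramid = [seg]
for _ in range(3):
    seg_pyramid.append(center_pool(seg_pyramid[-1]))
print([tuple(s.shape[-2:]) for s in seg_pyramid])  # [(480, 640), (240, 320), (120, 160), (60, 80)]
```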

A deep learning-based segmentation approach would simply transfer the problem of clear boundaries towards the segmenter; such methods can also never cope with all object classes in the wild. Moreover, our method does not require any semantic information, but rather only needs to know which pixels belong to the same discontinuity-free object part. Hence, in this work, we instead rely on optimization-based approaches, i.e. SLIC [achanta2012slic] and BASS [uziel2019bayesian], to obtain the needed superpixels. These methods capture not only object boundaries but also self-occlusions within objects (see Fig. 3). Noteworthy, while SLIC requires the number of output segments to be defined in advance, BASS can find the optimal number of segments by itself, however at a higher computational cost.

III-C End-to-end Architecture

To summarize, we model our Instance Convolution such that the method is particularly suited for estimating depth at object boundaries. Nevertheless, as an output pixel has never observed any feature outside of the segment it resides on, it is impossible for the model to predict metric depth due to the scale-distance ambiguity (i.e. a large object far away can have the same projection onto the image plane as a small object close by). Therefore, we harness a state-of-the-art MDP backbone to extract global information about the scene. We then feed the extracted features, together with the obtained superpixels, to our Instance Convolution-driven network to estimate the final edge-aware depth. Since the backbone as well as our Instance Convolution block are fully differentiable, we can train the whole model end-to-end. The proposed method can be plugged into different depth predictors; in this paper, we use SharpNet [ramamonjisoa2019sharpnet], BTS [lee2019big], and VNL [yin2019virtualnormal] for feature extraction to show the generalizability of Instance Convolutions.
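To illustrate how the pieces fit together, the sketch below (reusing the InstanceConv2d sketch from Section III-B) shows one possible way to wire a backbone with a three-layer Instance Convolution head; the channel widths, activations, and the 1x1 prediction layer are illustrative assumptions rather than the exact architecture.

```python
import torch
import torch.nn as nn

class ICRefinementHead(nn.Module):
    """Sketch of the refinement block: three Instance Convolution layers with
    decreasing channel counts, followed by a 1x1 depth prediction layer."""
    def __init__(self, in_ch=64):  # in_ch is an assumed backbone feature width
        super().__init__()
        self.ic1 = InstanceConv2d(in_ch, 32)
        self.ic2 = InstanceConv2d(32, 16)
        self.ic3 = InstanceConv2d(16, 8)
        self.pred = nn.Conv2d(8, 1, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, feats, seg):
        x = self.act(self.ic1(feats, seg))
        x = self.act(self.ic2(x, seg))
        x = self.act(self.ic3(x, seg))
        return self.pred(x)

class ObjectAwareDepth(nn.Module):
    """A backbone (e.g. a pre-trained MDP network) provides global features; the
    Instance Convolution head rectifies them per segment to sharpen boundaries."""
    def __init__(self, backbone, head):
        super().__init__()
        self.backbone = backbone   # any feature extractor returning (B, C, H, W)
        self.head = head

    def forward(self, rgb, seg):
        feats = self.backbone(rgb)
        return self.head(feats, seg)
```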

III-D Training Loss

Our training objective is composed of three terms, namely the gradient loss $\mathcal{L}_{grad}$, the normal loss $\mathcal{L}_{normal}$, and the $\ell_1$ loss $\mathcal{L}_{1}$. The depth gradient loss is given as

$$\mathcal{L}_{grad} = \frac{1}{N}\sum_{i=1}^{N}\left(\left|\nabla_x \hat{d}_i - \nabla_x d_i\right| + \left|\nabla_y \hat{d}_i - \nabla_y d_i\right|\right) \qquad (2)$$

and calculates the horizontal ($\nabla_x$) and vertical ($\nabla_y$) gradients using the Sobel operator. Here $\hat{d}$ denotes the predicted depth, while $d$ refers to the ground truth depth.

A surface normal of a pixel can be computed directly from the horizontal and vertical gradients of the depth map as

$$n_i = \left[-\nabla_x d_i,\; -\nabla_y d_i,\; 1\right]^{\top}. \qquad (3)$$

We then define the normal loss as

$$\mathcal{L}_{normal} = \frac{1}{N}\sum_{i=1}^{N}\left(1 - \frac{\langle \hat{n}_i, n_i\rangle}{\lVert\hat{n}_i\rVert\,\lVert n_i\rVert}\right), \qquad (4)$$

which computes the angular distance between the per-pixel normals extracted from the ground truth and the predicted depth maps.

The last loss is an $\ell_1$-term between the predicted depth map and the ground truth depth map

$$\mathcal{L}_{1} = \frac{1}{N}\sum_{i=1}^{N}\left|\hat{d}_i - d_i\right|. \qquad (5)$$

The total training loss is the sum of the three losses:

$$\mathcal{L} = \mathcal{L}_{grad} + \mathcal{L}_{normal} + \mathcal{L}_{1}. \qquad (6)$$
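For reference, the three loss terms can be computed as in the following sketch; the Sobel-based gradients and the cosine formulation of the normal term follow Eqs. (2)-(6), while implementation details such as padding are assumptions.

```python
import torch
import torch.nn.functional as F

# Sobel kernels for horizontal / vertical depth gradients.
SOBEL_X = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
SOBEL_Y = SOBEL_X.transpose(2, 3)

def depth_gradients(d):
    gx = F.conv2d(d, SOBEL_X.to(d.device), padding=1)
    gy = F.conv2d(d, SOBEL_Y.to(d.device), padding=1)
    return gx, gy

def total_loss(d_pred, d_gt):
    """Sketch of Eqs. (2)-(6): gradient, normal and l1 terms summed with equal
    weights. d_pred, d_gt: (B, 1, H, W) depth maps."""
    gx_p, gy_p = depth_gradients(d_pred)
    gx_g, gy_g = depth_gradients(d_gt)
    l_grad = (gx_p - gx_g).abs().mean() + (gy_p - gy_g).abs().mean()

    # Surface normals from depth gradients, Eq. (3): n = (-dx, -dy, 1).
    ones = torch.ones_like(d_pred)
    n_p = torch.cat([-gx_p, -gy_p, ones], dim=1)
    n_g = torch.cat([-gx_g, -gy_g, ones], dim=1)
    l_normal = (1.0 - F.cosine_similarity(n_p, n_g, dim=1)).mean()

    l_1 = (d_pred - d_gt).abs().mean()
    return l_grad + l_normal + l_1
```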

IV Experiments

In this section, we introduce the experimental setup along with a proof-of-concept experiment.

IV-A Overfitting Experiment

To test the capacity and prove the effectiveness of our method, we first conduct overfitting experiments, comparing classical convolution-based MDP methods with their Instance Convolution counterparts. In general, as the classical convolution is limited by the averaging over the kernel window, it cannot learn the sharp occlusion boundaries of an input image, which results in a structured noise effect in the point clouds (see Fig. 1). In contrast, Instance Convolutions can prevent this issue by considering only the pixels relevant to each object.

IV-B Evaluation Criteria

Standard metrics. We follow the standard MDP metrics as introduced in [eigen2014] and report results with respect to the mean absolute relative error (absrel), the root mean squared error (rmse), and the accuracy under threshold ($\delta < 1.25^i$ for $i \in \{1, 2, 3\}$).
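These metrics can be computed as in the short sketch below (the valid-pixel masking is an implementation assumption).

```python
import numpy as np

def standard_metrics(d_pred, d_gt, eps=1e-8):
    """Sketch of the standard MDP metrics of [eigen2014] over valid pixels:
    absolute relative error, RMSE, and threshold accuracies delta < 1.25**i."""
    valid = d_gt > eps
    p, g = d_pred[valid], d_gt[valid]
    absrel = np.mean(np.abs(p - g) / g)
    rmse = np.sqrt(np.mean((p - g) ** 2))
    ratio = np.maximum(p / g, g / p)
    deltas = [np.mean(ratio < 1.25 ** i) for i in (1, 2, 3)]
    return absrel, rmse, deltas
```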

Occlusion boundaries. In addition, Koch et al. [koch2019] proposed another set of metrics focusing on occlusion boundaries (DBE, DDE) and planarity (PE). The former calculates the accuracy ($\varepsilon_{acc}$) and completeness ($\varepsilon_{comp}$) of occlusion boundaries by comparing predicted depth map edges with an annotated map of occlusion boundaries, calculating the Truncated Chamfer Distance (TCD) according to

$$\varepsilon_{acc} = \frac{1}{\sum_i y_i}\sum_i e_i\, y_i, \qquad (7)$$

where $e_i$ is the distance of a predicted edge pixel $y_i$ to the nearest pixel of the ground truth edge. If this distance is greater than 10 pixels, $e_i$ is set to zero to neglect irrelevant pixels. Furthermore, while PE denotes the surface normal error on planar region maps (provided with iBims), computed as a 3D distance and an angular difference, DDE is the directed depth error assessing depth behind and in front of planar regions. In this work, we focus on occlusion boundary quality and therefore mainly consider DBE along with the standard metrics. We also state the results for PE and DDE for completeness.
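A possible implementation of the DBE accuracy is sketched below; here, distances beyond the truncation threshold are simply discarded rather than set to zero, which should be read as an assumption and not as the reference implementation of [koch2019].

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def dbe_accuracy(pred_edges, gt_edges, theta=10):
    """Truncated-chamfer-style DBE accuracy sketch: average distance from
    predicted edge pixels to the nearest ground-truth edge pixel, ignoring
    predicted edges farther than theta pixels. Inputs are binary (H, W) maps."""
    # Distance of every pixel to the nearest ground-truth edge pixel.
    dist_to_gt = distance_transform_edt(~gt_edges.astype(bool))
    e = dist_to_gt[pred_edges.astype(bool)]
    e = e[e <= theta]   # neglect predicted edges too far from any GT edge
    return e.mean() if e.size else 0.0
```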

IV-C Datasets

NYU v2 Dataset. NYU v2 consists of images collected in real indoor environments [silberman2012]. Depth values are captured with a Microsoft Kinect camera. The raw dataset of RGB-depth pairs (approximately 120K images) has no semantic labels. The authors created a smaller split with semantic labels, instance labels, refined depth maps, and normals. In our experiments, we use this smaller split, which contains 1449 images in total, divided into 795 for training and 654 for testing.

NYU v2 - OB Dataset. Occlusion boundary annotations on the NYU v2 test data were released for evaluation purposes by Ramamonjisoa et al. [ramamon2020]. In this work, we use this dataset to further evaluate the occlusion boundaries of our depth predictions.

iBims Dataset. This dataset is presented as an evaluation split along with novel metrics on occlusion boundaries and planarity. It provides rich annotations of dense depth maps from different scenes, including occlusion boundaries and planar regions. iBims contains around 100 images and is used for evaluation only [koch2019].

Method | absrel | rmse | δ1 | δ2 | δ3 | ε_acc | ε_comp
SharpNet [ramamonjisoa2019sharpnet] | 0.116 | 0.448 | 0.853 | 0.970 | 0.993 | 3.041 | 8.692
with Instance Conv. | 0.124 | 0.456 | 0.847 | 0.971 | 0.993 | 1.961 | 6.489
VNL [yin2019virtualnormal] | 0.112 | 0.417 | 0.880 | 0.975 | 0.994 | 1.854 | 7.188
with Instance Conv. | 0.117 | 0.425 | 0.863 | 0.970 | 0.991 | 1.780 | 6.059
BTS [lee2019big] | 0.110 | 0.392 | 0.885 | 0.978 | 0.994 | 2.090 | 5.820
with Instance Conv. | 0.121 | 0.467 | 0.848 | 0.964 | 0.993 | 1.817 | 6.197
TABLE I: NYU v2 test split: state-of-the-art comparison with and without Instance Convolutions. Error metrics: absrel, rmse; accuracy metrics: δ1-δ3; occlusion boundary metrics (DBE): ε_acc, ε_comp. The three models are selected as backbones and integrated within the proposed architecture of Fig. 4.

Method | rel | log10 | rmse | δ1 | δ2 | δ3 | ε_plan | ε_orie | ε_acc | ε_comp | ε_0 | ε_- | ε_+
Eigen [eigen2014] | 0.32 | 0.17 | 1.55 | 0.36 | 0.65 | 0.84 | 7.70 | 24.91 | 9.97 | 9.99 | 70.37 | 27.42 | 2.22
Eigen (AlexNet) [eigen2015] | 0.30 | 0.15 | 1.38 | 0.40 | 0.73 | 0.88 | 7.52 | 21.50 | 4.66 | 8.68 | 77.48 | 18.93 | 3.59
Eigen (VGG) [eigen2015] | 0.25 | 0.13 | 1.26 | 0.47 | 0.78 | 0.93 | 5.97 | 17.65 | 4.05 | 8.01 | 79.88 | 18.72 | 1.41
Laina [laina2016] | 0.26 | 0.13 | 1.20 | 0.50 | 0.78 | 0.91 | 6.46 | 19.13 | 6.19 | 9.17 | 81.02 | 17.01 | 1.97
Liu [liu2015] | 0.30 | 0.13 | 1.26 | 0.48 | 0.78 | 0.91 | 8.45 | 28.69 | 2.42 | 7.11 | 79.70 | 14.16 | 6.14
Li [li2017] | 0.22 | 0.11 | 1.09 | 0.58 | 0.85 | 0.94 | 7.82 | 22.20 | 3.90 | 8.17 | 83.71 | 13.20 | 3.09
Liu [liu2018planenet] | 0.29 | 0.17 | 1.45 | 0.41 | 0.70 | 0.86 | 7.26 | 17.24 | 4.84 | 8.86 | 71.24 | 28.36 | 0.40
SharpNet [ramamonjisoa2019sharpnet] | 0.26 | 0.11 | 1.07 | 0.59 | 0.84 | 0.94 | 9.95 | 25.67 | 3.52 | 7.61 | 84.03 | 9.48 | 6.49
with Instance Conv. | 0.29 | 0.12 | 1.14 | 0.55 | 0.82 | 0.92 | 9.83 | 25.88 | 3.11 | 7.83 | 81.84 | 8.27 | 9.88
BTS [lee2019big] | 0.24 | 0.12 | 1.08 | 0.53 | 0.84 | 0.94 | 7.24 | 20.51 | 2.50 | 5.81 | 82.24 | 15.50 | 2.27
with Instance Conv. | 0.22 | 0.11 | 1.11 | 0.57 | 0.86 | 0.94 | 6.76 | 19.39 | 3.71 | 8.01 | 84.04 | 13.3 | 2.67
VNL [yin2019virtualnormal] | 0.24 | 0.11 | 1.06 | 0.54 | 0.84 | 0.93 | 5.73 | 16.91 | 3.64 | 7.06 | 82.72 | 13.91 | 3.36
with Instance Conv. | 0.23 | 0.10 | 1.06 | 0.58 | 0.85 | 0.93 | 5.62 | 16.53 | 3.03 | 7.68 | 83.85 | 13.26 | 2.87

TABLE II: iBims dataset: quantitative results for the standard metrics (rel, log10, rmse, δ1-δ3), the planarity error PE (ε_plan, ε_orie), the depth boundary error DBE (ε_acc, ε_comp), and the directed depth error DDE (ε_0, ε_-, ε_+) for different MDP methods.

IV-D Comparison to State-of-the-Art

In Table I, we compare our results on NYU v2 with three state-of-the-art approaches, namely SharpNet [ramamonjisoa2019sharpnet], VNL [yin2019virtualnormal], and BTS [lee2019big]. Thereby, the proposed architecture (Fig. 4) uses these pre-trained models for latent depth feature extraction and applies Instance Convolution-based blocks on top.

All models are trained using PyTorch on an NVidia RTX 2080Ti for 35 epochs with Adam, decreasing the learning rate every 10 epochs. The loss terms in Eq. 6 have equal weights of 1. We use SLIC [achanta2012slic] to obtain superpixels with 64 segments and set sigma to 1. For SharpNet [ramamonjisoa2019sharpnet] we train each model with a batch size of 4, while for BTS [lee2019big] we set the batch size to 3. Our proposed model employs 3 layers of Instance Convolutions with a gradually decreasing number of feature channels; the feature map resolution remains constant, and a final prediction kernel produces the depth output.
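The corresponding optimization setup can be sketched as follows; the initial learning rate and the decay factor are not specified above, so the values below are placeholders rather than the actual training configuration, and the model is a stand-in.

```python
import torch
from skimage.segmentation import slic

# Hypothetical setup mirroring the reported recipe: Adam for 35 epochs with the
# learning rate decreased every 10 epochs, and SLIC superpixels (64 segments,
# sigma=1) precomputed per image. LR and decay factor below are placeholders.
model = torch.nn.Conv2d(3, 1, 3, padding=1)   # stand-in for the full network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)                          # placeholder LR
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)    # placeholder decay

def precompute_segments(rgb_image):
    """Per-image superpixels fed to the Instance Convolution block."""
    return slic(rgb_image, n_segments=64, sigma=1, start_label=0)
```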

We clearly outperform the original methods with respect to the occlusion boundary metrics ε_acc and ε_comp, while for the classical metrics (absrel and rmse) we report results comparable to the baselines. The qualitative results in Fig. 5 further support these findings: the Instance Convolution-based predictions of each model exhibit sharper occlusion boundaries in the resulting depth maps.

Fig. 5: Qualitative results on the NYU v2 test split. The rows respectively show the RGB input image, the SLIC output for the input image, the ground truth depth, the depth maps from the original SharpNet [ramamonjisoa2019sharpnet], BTS [lee2019big] and VNL [yin2019virtualnormal] models, and the improved depth maps from the proposed Instance Convolution models.

In Table II, we further report our results on the iBims evaluation dataset in order to assess the generalizability of our method. Note that this dataset is used only for inference (i.e. no training), to measure whether the models generalize to unseen scenes. Here, our models again improve upon their counterpart depth models in terms of DBE. The VNL-backbone model obtains state-of-the-art results for the DBE accuracy and completeness metrics. As for SharpNet, our model also obtains better DBE results than the original model.

Notably, on iBims the model with the best absrel value does not have the best DBE score (Li et al. [li2017]: 0.22 absrel, 3.90 DBE vs. Liu et al. [liu2015]: 0.30 absrel, 2.42 DBE). This conceptually agrees with the fact that absrel averages out per-pixel errors, while DBE measures distances between points lying on occlusion boundaries. Further, BTS with Instance Convolutions achieves an absrel error of 0.22, sharing the state-of-the-art on iBims with Li et al. [li2017], while outperforming them on DBE. The reason for the DBE degradation on BTS could reside in the atrous convolutions used in its backbone, which lose edge information and thus generalization ability.

IV-E Ablation Studies

Table III contains the results for different parameters and configurations. For all experiments, the SharpNet model is used as backbone, with Instance Convolutions (IC) or Standard Convolutions (SC).

Superpixel information. We trained a model with SC, but provided the superpixel segmentation map as an extra input to each convolutional layer. The DBE scores improved (SC 64 in Table III), but the results are still worse than with IC.

Number of segments in SLIC. We ablate different numbers of superpixel segments. As expected, increasing the number of segments improves the DBE accuracy, yet induces a small loss in absrel. Notice that this is also the case for most state-of-the-art works: the methods performing best on edges are often worse on absrel, which could be caused by imperfect annotations.

Over-segmentation with BASS. We also qualitatively evaluated our method using BASS [uziel2019bayesian] to extract superpixels. As shown in Fig. 3, BASS is able to retrieve more detailed segments from the image; however, it also produces overly noisy edges due to a redundant number of segments (400-500), which increases the model complexity and makes learning more difficult.

Instance masks. To compare instance mask prediction to unsupervised over-segmentation, we ablated our method with the state-of-the-art instance mask prediction method PointRend [kirillov2020pointrend]. As both the absrel and the DBE results were poorer than those obtained with superpixels, this confirms the effectiveness of the over-segmentation approach, most likely because it also captures self-occlusions within objects.

Runtime analysis. Full inference times are given in Table III as frames per second (FPS). Each runtime is an average over 1000 inferences. It can be seen that IC does not excessively alter the FPS (compared to both the original SharpNet and SC). As PointRend and BASS rely on external neural networks, we do not consider them in this comparison.

Method | absrel | rmse | ε_acc | ε_comp | FPS
Baseline | 0.12 | 0.45 | 3.04 | 8.69 | 16.7
PointRend [kirillov2020pointrend] | 0.13 | 0.45 | 2.21 | 6.76 | -
BASS [uziel2019bayesian] | 0.12 | 0.46 | 2.96 | 7.23 | -
IC 16 | 0.14 | 0.47 | 2.07 | 6.59 | 13.5
IC 32 | 0.14 | 0.47 | 2.09 | 6.66 | 13.6
IC 64 | 0.12 | 0.46 | 1.96 | 6.48 | 13.4
SC 64 | 0.12 | 0.45 | 2.18 | 6.63 | 15.2
IC 128 | 0.13 | 0.46 | 1.92 | 6.57 | 13.3
TABLE III: NYU v2 test split comparison for the ablation study (SharpNet backbone). Error metrics: absrel, rmse; DBE: ε_acc, ε_comp; runtime in frames per second (FPS). IC/SC denote Instance/Standard Convolutions; the number indicates the SLIC segment count.

V Conclusions

In this work, we introduce a novel depth estimation method which is particularly tailored towards tackling the problem of depth smoothing at object discontinuities. To this end, we propose a new convolutional operator which avoids feature aggregation across discontinuities by means of superpixels. Our exhaustive evaluation on NYU v2 as well as iBims demonstrates that the proposed method is indeed capable of enhancing depth prediction around edges, while almost completely maintaining the quality in the remaining regions. In the future, we want to explore how Instance Convolutions can be incorporated into other domains such as semantic segmentation to similarly improve sharpness.

References