SIGNet: Semantic Instance Aided Unsupervised 3D Geometry Perception

12/13/2018 ∙ by Yue Meng, et al. ∙ Toyota InfoTechnology Center Co., Ltd. ∙ University of California, San Diego

Unsupervised learning for visual perception of 3D geometry is of great interest to autonomous systems. Recent works on unsupervised learning have made considerable progress on geometry perception; however, they perform poorly on dynamic objects and in dark, noisy environments. In contrast, supervised learning algorithms, which are robust, require large labeled geometric datasets. This paper introduces SIGNet, a novel framework that provides robust geometry perception without requiring geometrically informative labels. Specifically, SIGNet integrates semantic information to make robust, unsupervised geometric predictions for dynamic objects in low-lighting and noisy environments. SIGNet is shown to improve upon the state of the art in unsupervised learning for geometry perception by 30% (in squared relative error for depth prediction). In particular, SIGNet improves performance on dynamic object classes by 39% in depth prediction.


1 Introduction

Visual perception of 3D scene geometry using a monocular camera is a fundamental problem with numerous applications, including autonomous driving. We focus on the ability to infer accurate geometry (depth and motion) of static and moving objects in a 3D scene. Supervised deep learning models have been proposed for geometry prediction, yielding "robust" and favorable results compared with traditional structure-from-motion (SfM) approaches [38, 39, 10, 2, 1, 26]. However, supervised models require datasets labeled with geometrically informative annotations, which is extremely challenging because collecting geometrically annotated ground truth (e.g., depth, motion) requires expensive equipment (e.g., LIDAR) and careful calibration procedures.

Recent works combine geometric SfM methods with end-to-end unsupervised trainable deep models to exploit abundantly available unlabeled monocular camera data. In [54, 41, 51, 9], deep models simultaneously predict per-pixel depth and motion from a short sequence of images, typically using the photometric reconstruction loss of a target scene from neighboring scenes as the surrogate task. However, these solutions often fail when dealing with dynamic objects (Section 5 presents empirical results that explicitly illustrate this shortcoming of state-of-the-art unsupervised approaches). Furthermore, prediction quality is negatively affected by real-world imperfections such as dynamic objects, non-Lambertian reflectance, and varying intensity. In short, no robust solution is known.

Figure 1: On the right, the state-of-the-art unsupervised learning approach relies on pixel-wise information only; SIGNet, on the left, utilizes semantic information to encode spatial constraints and hence further enhances the geometry prediction.

In Fig 1, we highlight the challenges that unsupervised learning faces for dynamic objects. The unsupervised model learns the movement of every pixel in a sequence of images and, taking ego-motion into consideration, receives a single feedback signal (the photometric reconstruction loss) from which it must learn both depth and motion for every pixel. SIGNet relies on the key observation that inherent spatial constraints exist in the visual perception problem, as shown in Fig 1. Specifically, we exploit the fact that pixels belonging to the same object obey additional inherent constraints on their depth and motion profiles.

SIGNet encodes these spatial constraints to improve geometric perception without requiring geometrically informative labels. How can the spatial constraints of the pixels be encoded? We leverage semantic information, as seen in Fig 1, within the unsupervised framework. Intuitively, semantic information can be interpreted as defining boundaries around groups of pixels whose geometry is closely related. Knowledge of the semantic relations between different segments of a scene allows us to learn which pixels are correlated, while object edges imply sharp depth transitions. Furthermore, this learning paradigm is practical, as annotations for semantic prediction tasks such as semantic segmentation are relatively cheap and easy to acquire: semantic labels can be curated on demand for unlabeled data, whereas geometrically informative labels such as motion and depth require additional sensors and careful annotation at the data collection stage. To the best of our knowledge, our work is the first to utilize semantic information in the context of unsupervised learning for geometry (depth and motion) prediction.

A natural question is how to combine semantic information with unsupervised geometric prediction. Our approach to combining semantic information with the RGB input is two-fold: first, we propose a novel way to augment RGB images with semantic information; second, we propose new loss functions, an architecture, and a training method. Together, these account for spatial constraints in making geometric predictions:

Feature Augmentation: We concatenate the RGB input data with both per-pixel class predictions and instance-level predictions. The per-pixel class predictions define a semantic mask that serves as a guidance signal and eases unsupervised geometric prediction. Moreover, we split the instance-level predictions into two inputs, instance edges and object masks, which enable the network to learn object edges and sharp depth transitions.

Loss Function Augmentation: Second, we augment the loss function with various semantic losses, which reduces the reliance on semantic features at evaluation time. This is crucial when the environment contains less common contextual elements (as in desert navigation or mining exploration). We design and experiment with several semantic losses, such as a semantic warp loss, a masked reconstruction loss, and a semantic-aware edge smoothness loss. However, manually designing a loss term that improves performance beyond the feature augmentation technique turns out to be very difficult: we lack a good understanding of the error distributions, and we are generally biased towards simple, interpretable loss functions that can be sub-optimal for unsupervised learning. Hence, we propose an alternative approach that incorporates a transfer network, which learns to predict the semantic mask via a semantic reconstruction loss and provides feedback to improve the depth and pose estimates; this yields considerable improvements in depth and flow prediction.

We empirically evaluate the feature and loss function augmentations on the KITTI dataset [14] and compare them with the state-of-the-art unsupervised learning framework [51]. In our experiments we use class-level predictions from DeepLabv3+ [4] trained on Cityscapes [6] and Mask R-CNN [18] trained on MS COCO [27]. Our key findings are:

  • By using semantic segmentation for both feature and loss augmentation, our proposed algorithm improves the squared relative error of depth estimation over the strong baseline set by the state-of-the-art unsupervised GeoNet [51].

  • Feature augmentation alone, combining semantic with instance-level information, leads to larger gains. With both class-level and instance-level features, the squared relative error of the depth predictions improves by 30% compared to the baseline (Table 1).

  • Finally, for common dynamic object classes (e.g., vehicles), SIGNet shows a 39% improvement in squared relative error for depth prediction, together with a substantial improvement in flow prediction, showing that semantic information is particularly useful for dynamic object categories. Furthermore, SIGNet is more robust to noise in image intensity than the baseline.

2 Related Work

Deep Models for Understanding Geometry: Deep models have been widely used in supervised depth estimation [8, 29, 36, 53, 5, 49, 50, 11, 46], tracking and pose estimation [43, 47, 2, 17], as well as optical flow prediction [7, 20, 25, 40]. These models have demonstrated superior accuracy and typically faster inference on modern hardware platforms (especially for optical flow estimation) compared to traditional methods. However, achieving good performance with supervised learning requires a large number of geometry-related labels. The current work addresses this challenge by adopting an unsupervised learning framework for depth, pose, and optical flow estimation.

Deep Models for Semantic Predictions: Deep models are widely applied in semantic prediction tasks, such as image classification [24], semantic segmentation [4], and instance segmentation [18]. In this work, we exploit the effectiveness of the semantic predictions provided by DeepLab v3+ [4] and Mask R-CNN [18] to encode spatial constraints for accurately predicting geometric attributes such as depth and flow. We note that while we specifically use DeepLab v3+ [4] and Mask R-CNN [18], similar gains can be obtained with any state-of-the-art semantic prediction system.

Unsupervised Deep Models for Understanding Geometry: Several recent methods propose to use unsupervised learning for geometry understanding. In particular, Garg et al. [13] use a warping method based on a Taylor expansion. In the context of unsupervised flow prediction, Yu et al. [21] and Ren et al. [37] introduce an image reconstruction loss with spatial smoothness constraints. Similar methods are used by Zhou et al. [54] for learning depth and camera ego-motion while ignoring object motion. This limitation is partially addressed by Vijayanarasimhan et al. [41], although we note that modeling object motion is difficult without introducing semantic information. This framework has been further improved with better modeling of the geometry: geometric consistency losses are introduced to handle occluded regions in binocular depth learning [16], flow prediction [32], and joint depth, ego-motion, and optical flow learning [51]. Mahjourian et al. [31] focus on improved geometric constraints, Godard et al. [15] propose several architectural and loss innovations, and Zhan et al. [52] use reconstruction in the feature space rather than the image space. In contrast, the current work explores using semantic information to resolve ambiguities that are difficult for purely geometric modeling. The proposed methods are complementary to these recent methods, but we choose to validate our approach on the state-of-the-art GeoNet framework [51].

Multi-Task Learning for Semantic and Depth: Multi-task learning [3] achieves better generalization by allowing the system to learn features that are robust across different tasks. Recent methods focus on designing efficient architectures that can predict related tasks using shared features while avoiding negative transfer [35, 19, 30, 34, 23, 12]. In this context, several prior works report promising results combining scene geometry with semantics. For instance, similar to our method, Liu et al. [28] use semantic predictions to guide depth estimation; however, that work is fully supervised and relies on sub-optimal traditional methods. Wang et al. [44], Cross-Stitching [35], UberNet [23], and NDDR-CNN [12] all report improved performance over single-task baselines, but they do not address outdoor scenes or unsupervised geometry understanding. Our work is also related to PAD-Net [48], which reports improvements from feeding intermediate tasks as inputs to the final depth and segmentation tasks. Our use of semantic input similarly introduces an intermediate prediction task as input to the depth and pose predictions, but we tackle the setting where depth labels are not provided.

3 State-of-the-art Unsupervised Geometry Prediction

Figure 2: Our unsupervised architecture contains DepthNet, PoseNet and ResFlowNet to predict depth, poses and motion using semantic-level and instance-level segmentation concatenated along the input channel dimension.

Prior to presenting our technical approach, we provide a brief overview of the state-of-the-art unsupervised depth and motion estimation framework, which is based on image reconstruction from geometric predictions [54, 51]. It trains the geometric prediction models through the reconstruction of a target image from source images, where the target and source images are neighboring frames in a video sequence. Such a reconstruction is possible only when certain elements of the 3D geometry of the scene are understood: (1) the relative 3D location (and thus the distance) between the camera and each pixel, (2) the camera ego-motion, and (3) the motion of pixels. Thus this framework can be used to train a depth estimator and an ego-motion estimator, as well as an optical flow predictor.

Technically, each training sample consists of contiguous video frames, where the center frame is the "target frame" and the other frames serve as "source frames". During training, a differentiable warping function is constructed from the geometry predictions and used to reconstruct the target frame from the source frames via bilinear sampling. The level of success of this reconstruction provides training signals, through backpropagation, to the various ConvNets in the system. A standard loss function to measure reconstruction success is:

\mathcal{L}_{rec} = \alpha \, \frac{1 - \mathrm{SSIM}(I_t, \tilde{I}_t)}{2} + (1 - \alpha) \, \lVert I_t - \tilde{I}_t \rVert_1 \qquad (1)

where $I_t$ is the target frame, $\tilde{I}_t$ is its reconstruction warped from a source frame, SSIM denotes the structural similarity index [45], and $\alpha$ is typically set to 0.85.
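To make Eq. (1) concrete, the following is a minimal NumPy sketch of the reconstruction loss, assuming float images in [0, 1]. The SSIM term is computed here from global image statistics for brevity; practical implementations compute it over small local windows and sum the loss over a multi-scale pyramid.

```python
import numpy as np

def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified SSIM from global image statistics (real implementations
    average SSIM over local windows, e.g. 3x3 patches)."""
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    num = (2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return num / den

def reconstruction_loss(target, warped, alpha=0.85):
    """Photometric reconstruction loss of Eq. (1): weighted SSIM + L1."""
    ssim_term = (1.0 - ssim_global(target, warped)) / 2.0
    l1_term = np.abs(target - warped).mean()
    return alpha * ssim_term + (1.0 - alpha) * l1_term

# Example: a target frame and a slightly perturbed "reconstruction".
target = np.random.rand(128, 416, 3).astype(np.float32)
warped = target + 0.05 * np.random.randn(128, 416, 3).astype(np.float32)
print(reconstruction_loss(target, warped))
```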

To filter out erroneous predictions while preserving sharp details, the standard practice is to include an edge-aware depth smoothness loss weighted by image gradients:

\mathcal{L}_{smooth} = \sum_{p} \lvert \nabla D(p) \rvert^{T} \cdot e^{-\lvert \nabla I(p) \rvert} \qquad (2)

where $\lvert \cdot \rvert$ denotes the element-wise absolute value, $\nabla$ is the vector differential operator, and $T$ denotes transposition of the gradient vector. These losses are computed over a pyramid of multi-scale predictions, and their sum is used as the training objective.
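A minimal NumPy sketch of this edge-aware smoothness term is shown below, assuming a single-scale (H, W) depth map and an (H, W, 3) image; real implementations apply it at every level of the prediction pyramid.

```python
import numpy as np

def edge_aware_smoothness(depth, image):
    """Edge-aware depth smoothness of Eq. (2): depth gradients are penalised,
    but the penalty is down-weighted where the RGB image itself has strong
    gradients (likely object edges)."""
    # First-order differences of the (H, W) depth map.
    d_dx = np.abs(depth[:, 1:] - depth[:, :-1])
    d_dy = np.abs(depth[1:, :] - depth[:-1, :])
    # Image gradients averaged over the colour channels.
    i_dx = np.abs(image[:, 1:] - image[:, :-1]).mean(axis=2)
    i_dy = np.abs(image[1:, :] - image[:-1, :]).mean(axis=2)
    # Exponential weighting: small weight where the image gradient is large.
    return (d_dx * np.exp(-i_dx)).mean() + (d_dy * np.exp(-i_dy)).mean()

depth = np.random.rand(128, 416).astype(np.float32)
image = np.random.rand(128, 416, 3).astype(np.float32)
print(edge_aware_smoothness(depth, image))
```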

While the reconstruction of RGB images is an effective surrogate task for unsupervised learning, it is limited by the lack of semantic information in the supervision signal. For example, the system cannot learn to separate a car from the road, or two neighboring cars, when they have similar colors. When object motion is included in the model, learning can mistakenly assign motion to non-moving objects because the geometric constraints are ill-posed. We augment and improve this system by leveraging semantic information.

4 Methods

In this section, we present solutions that enhance geometry prediction with semantic information. Semantic labels can provide rich information about 3D scene geometry: important details such as the 3D location of pixels and their movements can be inferred from a dense representation of the scene semantics. The proposed methods are applicable to a wide variety of recently proposed unsupervised geometry learning frameworks based on photometric reconstruction [54, 16, 51], represented by the baseline framework introduced in Section 3. Our complete pipeline at test time is illustrated in Fig 2.

Figure 3: Top to bottom: RGB image, semantic segmentation, instance class segmentation, and instance edge map, as used in the full prediction architecture. The semantic segmentation provides accurate segments grouped by class but fails to differentiate neighboring cars.

4.1 Semantic Input Augmentation

Semantic predictions can improve geometry prediction models when used as input features. Unlike RGB images, semantic predictions mark objects and contiguous structures with consistent blobs, which provide important information for the learning problem. However, it is not obvious that using semantic labels as input improves depth and motion predictions, since no geometric training labels are available: the semantic information could be lost or distorted and end up acting as a noisy training signal. An important finding of our work is that using semantic predictions as inputs significantly improves the accuracy of geometry prediction despite this noise. The input representation and the type of semantic labels have a large impact on the performance of the system. Fig 3 illustrates the semantic labels (semantic segmentation, instance segmentation, and instance edges) that we use to augment the input. These impose additional constraints, such as consistency of depth among pixels belonging to the same object (e.g., a vehicle), which helps the learning process. Furthermore, sharp changes in the depth predictions can be inferred from vehicle boundaries, and the semantic labels of pixels provide important information for associating pixels across frames.

Encoding Pixel-wise Class Labels: We explored two input encoding techniques for class labels: dense encoding and one-hot encoding. In dense encoding, the dense class labels are concatenated along the input channel dimension, and the added semantic features are centralized to the same value range as the RGB inputs. In one-hot encoding, the class-level semantic predictions are first expanded to one-hot sparse vectors and then concatenated along the input channel dimension; in this variant, the semantic features are not normalized since they already have a value range similar to the RGB inputs.

Encoding Instance-level Semantic Information: Both dense and one-hot encoding are natural for class-level semantic predictions, where each pixel is assigned only a class label rather than an instance label. Our conjecture is that instance-level semantic information is particularly well-suited to improving unsupervised geometric predictions, as it provides accurate information about the boundaries between individual objects of the same type. Unlike a class-level label, an instance label does not in itself have a well-defined meaning: across different frames, the same label can refer to different object instances. To efficiently represent the instance-level information, we compute the gradient map of the dense instance map and use it as an additional feature channel, alongside the class-label input (dense or one-hot).
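As an illustration, the sketch below builds such an augmented input tensor. It assumes 19 Cityscapes-style classes and approximates the gradient of the instance map with a binary neighbor-difference edge map; the exact channel layout and class counts used by SIGNet may differ.

```python
import numpy as np

def one_hot(labels, num_classes):
    """Expand an (H, W) integer class map into an (H, W, C) one-hot volume."""
    return np.eye(num_classes, dtype=np.float32)[labels]

def instance_edge_map(instance_ids):
    """Binary edge map from an (H, W) instance-id map: a pixel is marked as an
    edge if its id differs from the pixel to its right or below."""
    edges = np.zeros(instance_ids.shape, dtype=np.float32)
    edges[:, :-1] = np.maximum(edges[:, :-1],
                               (instance_ids[:, 1:] != instance_ids[:, :-1]).astype(np.float32))
    edges[:-1, :] = np.maximum(edges[:-1, :],
                               (instance_ids[1:, :] != instance_ids[:-1, :]).astype(np.float32))
    return edges

def augment_input(rgb, class_map, instance_class_map, instance_ids, num_classes=19):
    """Concatenate RGB, one-hot class labels, one-hot instance-class labels,
    and the instance edge map along the channel dimension."""
    channels = [
        rgb,                                         # (H, W, 3)
        one_hot(class_map, num_classes),             # (H, W, num_classes)
        one_hot(instance_class_map, num_classes),    # (H, W, num_classes)
        instance_edge_map(instance_ids)[..., None],  # (H, W, 1)
    ]
    return np.concatenate(channels, axis=2)

rgb = np.random.rand(128, 416, 3).astype(np.float32)
class_map = np.random.randint(0, 19, size=(128, 416))
inst_class = np.random.randint(0, 19, size=(128, 416))
inst_ids = np.random.randint(0, 10, size=(128, 416))
print(augment_input(rgb, class_map, inst_class, inst_ids).shape)  # (128, 416, 42)
```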

Direct Input versus Residual Correction: Complementary to the choice of encoding, we also experiment with different ways of feeding semantic information into the geometry prediction models. In particular, we make a residual prediction using a separate branch that takes only the semantic inputs. Notably, residual depth prediction leads to further improvement on top of the gains from the direct input method.
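A toy PyTorch sketch of this residual arrangement is shown below; the two small convolutional branches are placeholders for the DepthNet-style encoder-decoders actually used, and the channel counts are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class ResidualDepth(nn.Module):
    """Sketch of the residual-correction variant: a main branch predicts depth
    from RGB, and a separate branch that sees only the semantic channels
    predicts a residual that is added to that depth."""
    def __init__(self, num_semantic_channels=20):
        super().__init__()
        self.rgb_branch = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1), nn.Softplus(),   # positive base depth
        )
        self.sem_branch = nn.Sequential(
            nn.Conv2d(num_semantic_channels, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1),                   # signed residual
        )

    def forward(self, rgb, semantics):
        return self.rgb_branch(rgb) + self.sem_branch(semantics)

model = ResidualDepth()
rgb = torch.rand(2, 3, 128, 416)
sem = torch.rand(2, 20, 128, 416)   # e.g. one-hot class channels + edge map
print(model(rgb, sem).shape)        # (2, 1, 128, 416)
```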

4.2 Semantic Guided Loss Functions

The information from semantic predictions could be diminished by noisy semantic labels and very deep architectures. Hence, we also design training loss functions that are guided by semantic information; in such a design, the semantic predictions provide additional loss constraints on the network. In this subsection, we introduce a set of semantic-guided loss functions designed to improve the learning of depth and motion prediction.

Semantic Warp Loss: Semantic predictions can help in scenarios where the reconstruction of the RGB image is correct in terms of pixel values but violates obvious semantic correspondences, e.g., matching pixels to incorrect semantic classes and/or instances. In light of this, we propose to reconstruct the semantic predictions in addition to the RGB images. We call this the "semantic warp loss", as it is based on warping the semantic predictions from the source frames to the target frame. Let $S_s$ be the source-frame semantic prediction and $\tilde{S}_t$ the warped semantic image obtained by warping $S_s$ to the target view; writing $S_t$ for the target-frame semantic prediction, we define the semantic warp loss as:

\mathcal{L}_{sw} = \lVert S_t - \tilde{S}_t \rVert_1 \qquad (3)

The warp loss is added to the baseline objective with a hyper-tuned weight.

Masking of Reconstruction Loss via Semantics: As described in Section 3, ambiguity in object motion can lead to sub-optimal learning. Semantic labels can partially resolve this ambiguity by separating out non-moving objects. Motivated by this observation, we mask out the foreground region to form a set of new images $\hat{I}^{c}_{t} = I^{c}_{t} \odot (1 - M)$, where $c$ is the RGB-channel index, $\odot$ denotes element-wise multiplication, and $M$ is the binary foreground mask derived from the semantic segmentation result. Similarly we obtain the masked reconstruction $\hat{\tilde{I}}^{c}_{t}$. The image similarity loss is then defined, analogously to Eq. (1), as:

\mathcal{L}_{mask} = \alpha \, \frac{1 - \mathrm{SSIM}(\hat{I}_t, \hat{\tilde{I}}_t)}{2} + (1 - \alpha) \, \lVert \hat{I}_t - \hat{\tilde{I}}_t \rVert_1 \qquad (4)
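The following sketch shows the L1 part of such a masked loss (the SSIM term of Eq. (1) is applied to the same masked images); the binary foreground mask and the normalization over background pixels are illustrative assumptions.

```python
import numpy as np

def masked_l1_loss(target, warped, foreground_mask):
    """L1 part of the masked reconstruction loss: foreground (potentially
    moving) pixels are removed from both the target frame and its warped
    reconstruction, so only the static background drives the loss."""
    keep = (1.0 - foreground_mask)[..., None]          # (H, W, 1), 1 = background
    diff = np.abs(target * keep - warped * keep)
    # Normalise by the number of kept (background) pixel values.
    return diff.sum() / (keep.sum() * target.shape[2] + 1e-8)

target = np.random.rand(128, 416, 3).astype(np.float32)
warped = np.random.rand(128, 416, 3).astype(np.float32)
mask = (np.random.rand(128, 416) > 0.8).astype(np.float32)  # 1 on foreground objects
print(masked_l1_loss(target, warped, mask))
```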
Figure 4: Predicting semantic labels from depth predictions. The transfer network uses RGB and the predicted depth as input; we experiment with variants with and without semantic input.

Semantic-Aware Edge Smoothness Loss: Equation 2 uses RGB gradients to infer edge locations when enforcing smooth depth regions. This can be improved by including an edge map computed from the semantic predictions. Given a semantic segmentation result, we define a weight matrix $W$ whose entries are low (close to zero) on class-boundary regions and high (close to one) elsewhere, and propose the semantic-aware smoothness loss:

\mathcal{L}_{sem\text{-}smooth} = \sum_{p} W(p) \, \lvert \nabla D(p) \rvert^{T} \cdot e^{-\lvert \nabla I(p) \rvert} \qquad (5)
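As an illustration, the sketch below folds such a semantic boundary weight into the smoothness term of Eq. (2), zeroing the penalty wherever the class label changes between neighboring pixels; the exact weighting function is a design choice and may differ from the one used in our experiments.

```python
import numpy as np

def semantic_aware_smoothness(depth, image, class_map):
    """Eq. (5) sketch: the edge-aware smoothness term of Eq. (2) is further
    modulated by a weight that is ~0 on semantic class boundaries (where
    depth discontinuities are expected) and ~1 elsewhere."""
    d_dx = np.abs(depth[:, 1:] - depth[:, :-1])
    d_dy = np.abs(depth[1:, :] - depth[:-1, :])
    i_dx = np.abs(image[:, 1:] - image[:, :-1]).mean(axis=2)
    i_dy = np.abs(image[1:, :] - image[:-1, :]).mean(axis=2)
    # Weight is 0 where the class label changes between neighbouring pixels.
    w_dx = (class_map[:, 1:] == class_map[:, :-1]).astype(np.float32)
    w_dy = (class_map[1:, :] == class_map[:-1, :]).astype(np.float32)
    return (w_dx * d_dx * np.exp(-i_dx)).mean() + (w_dy * d_dy * np.exp(-i_dy)).mean()

depth = np.random.rand(128, 416).astype(np.float32)
image = np.random.rand(128, 416, 3).astype(np.float32)
class_map = np.random.randint(0, 19, size=(128, 416))
print(semantic_aware_smoothness(depth, image, class_map))
```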

Semantic Loss by Transfer Network: Motivated by the observation that high-quality depth maps usually delineate object classes and background regions, we design a novel transfer network architecture. As shown in Fig 4, the transfer network receives the predicted depth maps along with the original RGB images and outputs semantic labels. It introduces a semantic reconstruction loss term into the objective function that forces the predicted depth maps to be contextually richer, thereby refining the depth estimation. For the implementation, we choose ResNet-50 as the backbone and alter the dimensions of the input and output convolutional layers to be consistent with the segmentation task. The network generates one-hot encoded heatmaps, and cross-entropy is used as the semantic similarity measure.
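A toy version of this transfer block is sketched below in PyTorch; a shallow encoder-decoder stands in for the ResNet-50 backbone, and the class count and resolutions are assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TransferNet(nn.Module):
    """Toy stand-in for the transfer network: takes RGB (3 ch) plus predicted
    depth (1 ch) and outputs per-pixel class logits."""
    def __init__(self, num_classes=19):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.classifier = nn.Conv2d(64, num_classes, 1)

    def forward(self, rgb, depth):
        x = torch.cat([rgb, depth], dim=1)            # (B, 4, H, W)
        logits = self.classifier(self.encoder(x))     # (B, C, H/4, W/4)
        # Upsample back to input resolution for a dense prediction.
        return F.interpolate(logits, size=rgb.shape[2:], mode="bilinear",
                             align_corners=False)

# Semantic reconstruction loss: cross-entropy against the (noisy) labels
# produced by an off-the-shelf segmentation network.
net = TransferNet()
rgb = torch.rand(2, 3, 128, 416)
depth = torch.rand(2, 1, 128, 416, requires_grad=True)   # output of DepthNet
labels = torch.randint(0, 19, (2, 128, 416))
loss = F.cross_entropy(net(rgb, depth), labels)
loss.backward()   # gradients also flow back into the depth input
```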

5 Experiments

To quantify the benefits that semantic information brings to geometry-based learning, we designed experiments based on GeoNet framework [51]. First, we showed our model’s depth prediction performance on KITTI dataset [14], which outperformed state-of-the-art unsupervised and supervised models. Then we designed ablation studies to analyze each individual component’s contribution. Finally, we presented improvements in motion predictions and revisited the performance gain using category-specific evaluation.

5.1 Implementation Details

To make a fair comparison with state-of-the-art models [8, 54, 51], we divided the KITTI 2015 dataset into a training set (40,238 images) and a test set (697 images) according to the split of Eigen et al. [8]. We used DeepLabv3+ [4] to obtain semantic segmentation results and Mask R-CNN [18] for instance-level segmentation. Following the hyper-parameter settings in [51], we used the Adam optimizer [22] with an initial learning rate of 2e-4, set the batch size to 4, and trained our modified DepthNet and PoseNet modules for 250,000 iterations with random shuffling and data augmentation. Training took 10 hours on two GTX 1080 Ti GPUs.

5.2 Monocular Depth Evaluation on KITTI

We augmented the image sequences with the corresponding semantic and instance segmentation sequences and adopted the scale normalization suggested in [42]. For evaluation, ground truth depth maps were generated by projecting 3D Velodyne LiDAR points onto the image plane. Following [51], we clipped our depth predictions to the range 0.001 m to 80 m and calibrated the scale by the median of the ground truth. The evaluation results are shown in Table 1, where all metrics are as introduced in [8]. Our model benefits significantly from feature augmentation and substantially surpasses state-of-the-art methods, both supervised and unsupervised.
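For reference, the evaluation protocol (validity masking, median scaling, depth clipping, and the metrics of [8]) can be sketched as follows; the thresholds 1.25, 1.25², and 1.25³ are the standard accuracy measures reported in Table 1.

```python
import numpy as np

def depth_metrics(pred, gt, min_depth=0.001, max_depth=80.0):
    """Standard monocular depth metrics (Eigen split protocol): predictions
    are median-scaled to the ground truth and clipped before computing the
    error and accuracy measures."""
    valid = (gt > min_depth) & (gt < max_depth)
    pred, gt = pred[valid], gt[valid]
    pred = np.clip(pred * np.median(gt) / np.median(pred), min_depth, max_depth)

    abs_rel = np.mean(np.abs(pred - gt) / gt)
    sq_rel = np.mean((pred - gt) ** 2 / gt)
    rmse = np.sqrt(np.mean((pred - gt) ** 2))
    rmse_log = np.sqrt(np.mean((np.log(pred) - np.log(gt)) ** 2))
    ratio = np.maximum(pred / gt, gt / pred)
    acc = [np.mean(ratio < 1.25 ** k) for k in (1, 2, 3)]
    return abs_rel, sq_rel, rmse, rmse_log, acc

pred = np.random.uniform(1.0, 60.0, size=(375, 1242)).astype(np.float32)
gt = np.random.uniform(1.0, 60.0, size=(375, 1242)).astype(np.float32)
print(depth_metrics(pred, gt))
```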

Moreover, we found a correlation between the improved regions and object classes. We visualized the absolute relative error (AbsRel) across the image plane for our model and for the baseline. As shown in Fig 5, most of the improvements come from regions containing objects, indicating that the network learns the concept of objects from the extra semantic information and uses it to improve depth prediction.

Figure 5: Test evaluation on the KITTI Eigen split. Top to bottom: input RGB image, AbsRel error map of the baseline, AbsRel error map of ours, and improvement of ours over the baseline on the AbsRel map. The ground truth is interpolated to enhance visualization. Darker colors correspond to smaller errors on the error maps.

Method Supervision Error-related metrics (lower is better) Accuracy-related metrics (higher is better)
Abs Rel Sq Rel RMSE RMSE log δ<1.25 δ<1.25² δ<1.25³
Eigen et al. [8] Coarse Depth 0.214 1.605 6.653 0.292 0.673 0.884 0.957
Eigen et al. [8] Fine Depth 0.203 1.548 6.307 0.282 0.702 0.890 0.957
Liu et al. [29] Depth 0.202 1.614 6.523 0.275 0.678 0.895 0.965
Godard et al. [16] Pose 0.148 1.344 5.927 0.247 0.803 0.922 0.964
Zhou et al. [54] updated No 0.183 1.595 6.709 0.270 0.734 0.902 0.959
Yin et al. [51] No 0.155 1.296 5.857 0.233 0.793 0.931 0.973
Ours No 0.133 0.905 5.181 0.208 0.825 0.947 0.981
(improved by) 14.04% 30.19% 11.55% 10.85% 3.14% 1.53% 0.80%
Table 1: Monocular depth results on KITTI 2015 [33] by the split of Eigen et al. [8]

5.3 Ablation Studies

Here we took a deeper look at our model, tested its robustness to observation noise, and presented variations of our framework as promising directions for future work. In the following experiments, we kept all other parameters the same as in [51] and applied the same training/evaluation strategies described in Section 5.2.

How much gain comes from the various feature augmentations?
We tried different combinations and forms of semantic/instance-level inputs on top of "Yin et al. [51] with scale normalization". From Table 2, our first conclusion is that any meaningful form of extra input improves the model, which is straightforward. Secondly, when we use "Semantic" and "Instance class" for feature augmentation, the one-hot encoded form tends to perform better than the dense-map form; conceivably, one-hot encoding stores richer information in its structure, whereas the dense map contains only discrete labels that may be harder to learn from. Moreover, using both "Semantic" and "Instance class" provides a further gain, possibly due to the different label distributions of the two datasets: labels from Cityscapes cover both background and foreground concepts, while the COCO dataset focuses more on objects. Finally, when we combined one-hot encoded "Semantic" and "Instance class" information with "Instance id" edge features, the network benefited the most from scene understanding and achieved the largest performance gain.

Semantic Instance class Instance id Error-related metrics Accuracy-related metrics
Abs Rel Sq Rel RMSE RMSE log δ<1.25 δ<1.25² δ<1.25³
0.149 1.060 5.567 0.226 0.796 0.935 0.975
Dense 0.142 0.991 5.309 0.216 0.814 0.943 0.980
One-hot 0.139 0.949 5.227 0.214 0.818 0.945 0.980
Dense 0.142 0.986 5.325 0.218 0.812 0.943 0.978
One-hot 0.141 0.976 5.272 0.215 0.811 0.942 0.979
Edge 0.145 1.037 5.314 0.217 0.807 0.943 0.978
Dense Edge 0.142 0.969 5.447 0.219 0.808 0.941 0.978
One-hot One-hot Edge 0.133 0.905 5.181 0.208 0.825 0.947 0.981
Table 2: Ablation study on depth prediction performance gains due to different semantic sources and forms.

Can our model handle noise?
To test our model's robustness to varying lighting conditions, we multiplied the RGB inputs by a scalar between 0 and 1 during evaluation. Fig 6 shows that our model still matches or exceeds GeoNet's performance even when the intensity decreases to 40% of the original.

(a) Observations under decreased light condition (left to right)
(b) Robustness under decreased light condition
Figure 6: Absolute relative error as the lighting intensity decreases. Our model remains better than the baseline even when the lighting intensity drops to 0.40 of the original.

Which module needs extra information the most?
We fed semantics to only DepthNet or only PoseNet to see the difference in their performance gains. From Table 3 we can see that, compared to DepthNet, PoseNet learns little from the semantics that helps depth prediction. We therefore fed the semantics to a second PoseNet with the same structure as the original one and computed the predicted poses as the sum of the two PoseNets' outputs, which led to a performance gain; however, no gain was observed when the same method was applied to DepthNet.

DepthNet PoseNet Error-related metrics Accuracy-related metrics
Abs Rel Sq Rel RMSE RMSE log δ<1.25 δ<1.25² δ<1.25³
0.149 1.060 5.567 0.226 0.796 0.935 0.975
Channel 0.145 0.957 5.291 0.216 0.805 0.943 0.980
Channel 0.147 1.076 5.385 0.223 0.808 0.938 0.975
Channel Channel 0.139 0.949 5.227 0.214 0.818 0.945 0.980
Extra Net Channel 0.147 1.036 5.593 0.226 0.803 0.937 0.975
Channel Extra Net 0.135 0.932 5.241 0.211 0.821 0.945 0.980
Table 3: Ablation study on contribution of each module towards gain in depth prediction from semantics.

How to be “semantic-free” in evaluation?
Though semantic information helps depth prediction, feature augmentation relies on semantic features during the evaluation phase. If semantics are used only in the loss, they are not needed at evaluation time. We attempted to introduce a handcrafted semantic loss term as a weight guidance over the image plane, but it did not work well. We also designed a transfer network that uses the predicted depth to predict semantic maps, adding a reconstruction error that helps during the training stage. The results in Table 4 show that better results can be obtained when training from pretrained models.

Checkpoint Transfer Network Error-related metrics Accuracy-related metrics
Abs Rel Sq Rel RMSE RMSE log δ<1.25 δ<1.25² δ<1.25³
Yin et al. [51] 0.155 1.296 5.857 0.233 0.793 0.931 0.973
Yin et al. [51] Yes 0.150 1.141 5.709 0.231 0.792 0.934 0.974
Yin et al. [51] +sn 0.149 1.060 5.567 0.226 0.796 0.935 0.975
Yin et al. [51] +sn Yes 0.145 0.994 5.422 0.222 0.806 0.939 0.976
Table 4: Ablation study shows gains in depth prediction using our proposed Transfer Network.

5.4 Optical Flow Estimation on KITTI

Using our best model for DepthNet and PoseNet from Section 5.2, we conducted rigid-flow and full-flow evaluations on KITTI [14]. We first generated the rigid flow from the estimated depth and pose and compared it with GeoNet [51]; our model performed better on all the metrics shown in Table 5.

Method End Point Error (Noc) End Point Error (All) Accuracy (Noc) Accuracy (All)
Yin et al. [51] 23.5683 29.2295 0.2345 0.2237
Ours 22.3819 26.8465 0.2519 0.2376
Table 5: Rigid flow prediction from the first stage on KITTI, on non-occluded regions (Noc) and overall regions (All).

We then appended the semantic warp loss introduced in Section 4.2 to the ResFlowNet of [51] and trained our model on KITTI stereo for 1,600,000 iterations. As shown in Table 6, flow prediction improved in non-occluded regions compared to GeoNet [51] and produced comparable results over all regions.

Method End Point Error (Noc) End Point Error (All)
DirFlowNetS 6.77 12.21
Yin et al. [51] 8.05 10.81
Ours 7.66 13.91
Table 6: Full flow prediction on KITTI 2015 on non-occluded regions (Noc) and overall regions (All). DirFlowNetS numbers are taken from Yin et al. [51].

5.5 Category-Specific Metrics Evaluation

This section presents the improvements broken down by semantic category. As shown in the bar chart in Fig 7, most of the improvements occur in the "Vehicle" and "Dynamic" classes, where errors are generally large. Our network did not improve much for less frequent categories, such as "Motorcycle", which are generally harder to segment in images.

Figure 7: Performance gains in depth (left) and flow (right) among different classes of dynamic objects.

6 Conclusion

In SIGNet, we strive to achieve robust geometry perception (depth and motion) without using any geometric labels. To achieve this goal, SIGNet utilizes well-known supervised semantic segmentation frameworks to create spatial constraints on the geometric attributes of pixels. We present novel feature augmentation and loss augmentation methods that incorporate semantic labels into geometry prediction. This work presents a first-of-its-kind approach that moves from pixel-level to object-level depth and motion prediction. Most notably, the approach significantly improves upon the state-of-the-art solutions for estimating the depth and motion of dynamic objects.

Supplementary Material for SIGNet

Here we present additional visualization results to help readers understand where our semantic-aided model improves the most. We compare the predictions of our best model from Table 1 with Yin et al. [51] and the ground truth. Following [13], we plot the predictions as disparity heatmaps. The results show that our model gains the most in regions belonging to cars and other dynamic classes.

Figure 1: Top to bottom: input image, semantic segmentation, instance segmentation, ground truth disparity map, disparity prediction from the baseline (Yin et al. [51]), disparity prediction from ours, AbsRel error map of the baseline, AbsRel error map of ours, and the improvement region compared to the baseline. For visualization, disparity maps are interpolated and cropped [13]. For all heatmaps, darker means a smaller value (disparity, error, or improvement). Typical image regions where we do better include cars, pedestrians, and other common dynamic objects.
Figures 2-10: Additional examples with the same layout and color conventions as Figure 1.

References