Self-Supervised Scene De-occlusion

04/06/2020 ∙ by Xiaohang Zhan, et al.

Natural scene understanding is a challenging task, particularly when encountering images of multiple objects that are partially occluded. This obstacle arises from varying object ordering and positioning. Existing scene understanding paradigms are able to parse only the visible parts, resulting in incomplete and unstructured scene interpretation. In this paper, we investigate the problem of scene de-occlusion, which aims to recover the underlying occlusion ordering and complete the invisible parts of occluded objects. We make the first attempt to address the problem through a novel and unified framework that recovers hidden scene structures without ordering or amodal annotations as supervision. This is achieved via the Partial Completion Network (PCNet)-mask (M) and -content (C), which learn to recover fractions of object masks and contents, respectively, in a self-supervised manner. Based on PCNet-M and PCNet-C, we devise a novel inference scheme to accomplish scene de-occlusion via progressive ordering recovery, amodal completion, and content completion. Extensive experiments on real-world scenes demonstrate the superior performance of our approach over other alternatives. Remarkably, our approach, trained in a self-supervised manner, achieves results comparable to fully-supervised methods. The proposed scene de-occlusion framework benefits many applications, including high-quality and controllable image manipulation and scene recomposition (see Fig. 1), as well as the conversion of existing modal mask annotations to amodal mask annotations.


1 Introduction

Scene understanding is one of the foundations of machine perception. A real-world scene, regardless of its context, often comprises multiple objects of varying ordering and positioning, with one or more objects being occluded by others. Hence, scene understanding systems should be capable of modal perception, i.e., parsing the directly visible regions, as well as amodal perception [12, 23, 16], i.e., perceiving the intact structures of entities including their invisible parts. The advent of advanced deep networks along with large-scale annotated datasets has facilitated many scene understanding tasks, e.g., object detection [6, 26, 31, 2], scene parsing [22, 3, 33], and instance segmentation [4, 9, 1, 30]. Nonetheless, these tasks mainly concentrate on modal perception, while amodal perception remains rarely explored to date.

A key problem in amodal perception is scene de-occlusion, which involves the subtasks of recovering the underlying occlusion ordering and completing the invisible parts of occluded objects. While the human visual system is capable of intuitively performing scene de-occlusion, the elucidation of occlusions is highly challenging for machines. First, the relationships between an object that occludes other object(s), called an ‘‘occluder’’, and an object that is occluded by other object(s), called an ‘‘occludee’’, are profoundly complicated. This is especially true when there are multiple ‘‘occluders’’ and ‘‘occludees’’ with high intricacies between them, namely an ‘‘occluder’’ that occludes multiple ‘‘occludees’’ and an ‘‘occludee’’ that is occluded by multiple ‘‘occluders’’, forming a complex occlusion graph. Second, depending on the category, orientation, and position of objects, the boundaries of ‘‘occludees’’ are elusive; no simple priors can be applied to recover the invisible boundaries.

A possible solution for scene de-occlusion is to train a model with ground truth occlusion orderings and amodal masks (i.e., intact instance masks). Such ground truth can be obtained either from synthetic data [5, 11] or from manual annotations on real-world data [34, 25, 7], each with its own limitations. The former introduces an inevitable domain gap between the fabricated training data and the real-world scenes seen at test time. The latter relies on the subjective interpretation of individual annotators to demarcate occluded boundaries, and is therefore subject to biases; it also requires repeated annotations from different annotators to reduce noise, and is therefore laborious and costly. A more practical and scalable way is to learn scene de-occlusion from the data itself rather than from annotations.

Figure 2: Given an input image and the associated modal masks, our framework solves scene de-occlusion progressively -- 1) predicts occlusion ordering between different objects as a directed graph, 2) performs amodal completion grounded on the ordering graph, and 3) furnishes the occluded regions with content under the guidance of amodal predictions. The de-occlusion is achieved by two novel networks, PCNet-M and PCNet-C, which are trained without annotations of ordering or amodal masks.

In this work, we propose a novel self-supervised framework that tackles scene de-occlusion on real-world data without manual annotations of occlusion ordering or amodal masks. In the absence of such ground truth, an end-to-end supervised learning framework is no longer applicable. We therefore introduce the unique concept of partial completion of occluded objects. Two core precepts of the partial completion notion enable scene de-occlusion to be attained in a self-supervised manner. First, the process of completing an ‘‘occludee’’ occluded by multiple ‘‘occluders’’ can be broken down into a sequence of partial completions, with one ‘‘occluder’’ involved at a time. Second, partial completion can be learned by deliberately trimming the ‘‘occludee’’ further and training a network to recover the previous untrimmed occludee. We show that partial completion is sufficient to complete an occluded object progressively, as well as to facilitate the reasoning of occlusion ordering.

Partial completion is executed via two networks, i.e., Partial Completion Network-mask and -content. We abbreviate them as PCNet-M and PCNet-C, respectively. PCNet-M is trained to partially recover the invisible mask of the ‘‘occludee’’ corresponding to an occluder, while PCNet-C is trained to partially fill in the recovered mask with RGB content. PCNet-M and PCNet-C form the two core components of our framework to address scene de-occlusion.

As illustrated in Fig. 2, the proposed framework takes as inputs a real-world scene and the corresponding modal masks of its objects, derived either from annotations or from predictions of existing modal segmentation techniques. Our framework then streamlines three subtasks to be tackled progressively: 1) Ordering Recovery. Given a pair of neighboring objects in which one may occlude the other, the roles of the two objects are determined following the principle that PCNet-M partially completes the mask of the ‘‘occludee’’ while keeping the ‘‘occluder’’ unmodified. We recover the ordering of all neighboring pairs and obtain a directed graph that captures the occlusion order among all objects. 2) Amodal Completion. For a specific ‘‘occludee’’, the ordering graph indicates all its ‘‘occluders’’. Grounded on this information and reusing PCNet-M, an amodal completion method is devised to fully complete the modal mask into the amodal mask of the ‘‘occludee’’. 3) Content Completion. The predicted amodal mask indicates the occluded region of an ‘‘occludee’’. Using PCNet-C, we fill the invisible region with RGB content. With such a progressive framework, we decompose a complicated scene into isolated and intact objects, along with a highly accurate occlusion ordering graph, allowing subsequent manipulation of the ordering and positioning of objects to recompose a new scene, as shown in Fig. 1.
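The three-stage inference above can be sketched in plain Python, with the PCNet-based steps abstracted as callbacks (a minimal sketch under our own assumptions: `pairwise_order`, `complete_amodal`, and `complete_content` are hypothetical stand-ins for Dual-Completion, PCNet-M, and PCNet-C, and the expansion from direct occluders to all ancestors is omitted for brevity):

```python
def scene_de_occlusion(instances, pairwise_order, complete_amodal, complete_content):
    """Progressive de-occlusion over a list of instance ids.

    pairwise_order(a, b) -> 1 if a occludes b, -1 if b occludes a,
    0 if the pair is unrelated. complete_amodal / complete_content
    stand in for the PCNet-M / PCNet-C based steps.
    """
    # 1) Ordering recovery: a directed graph of direct occluders.
    occluders = {i: set() for i in instances}
    for a in instances:
        for b in instances:
            if a < b:
                o = pairwise_order(a, b)
                if o == 1:
                    occluders[b].add(a)
                elif o == -1:
                    occluders[a].add(b)
    # 2) Amodal completion grounded on the ordering graph.
    amodal = {i: complete_amodal(i, occluders[i]) for i in instances}
    # 3) Content completion guided by the amodal predictions.
    content = {i: complete_content(i, amodal[i]) for i in instances}
    return occluders, amodal, content
```

The callbacks isolate the learned components, so the same orchestration applies whether the steps are oracles (for testing) or trained networks.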

We summarize our contributions as follows: 1) We streamline scene de-occlusion into three subtasks, namely ordering recovery, amodal completion, and content completion. 2) We propose PCNets and a novel inference scheme to perform scene de-occlusion without the need for corresponding manual annotations. Yet, we observe comparable results to fully-supervised approaches on datasets of real scenes. 3) The self-supervised nature of our approach shows its potential to endow large-scale instance segmentation datasets, e.g., KITTI [8], COCO [19], etc., with high-accuracy ordering and amodal annotations. 4) Our scene de-occlusion framework represents a novel enabling technology for real-world scene manipulation and recomposition, providing a new dimension for image editing.

2 Related Work

Figure 3: The training procedure of the PCNet-M and the PCNet-C. Given an instance A as the input, we randomly sample another instance B from the whole dataset and position it randomly. Note that we only have modal masks of both A and B. (a) PCNet-M is trained by switching between two cases. Case 1 (A erased by B) follows the partial completion mechanism, where PCNet-M is encouraged to partially complete A. Case 2 prevents PCNet-M from over-completing A. (b) PCNet-C uses B's mask to erase A and learns to fill in the RGB content of the erased region. It also takes in the remaining modal mask of A as an additional input, multiplied by its category id if available.

Ordering Recovery. In the unsupervised stream, Wu et al. [32] propose to recover ordering by re-composing the scene with object templates. However, they only demonstrate the system on toy data. Tighe et al. [29] build a prior occlusion matrix between classes on the training set and solve a quadratic program to recover the ordering at test time. The inter-class occlusion prior ignores the complexity of realistic scenes. Other works [10, 24] rely on additional depth cues. However, depth is not reliable for occlusion reasoning, e.g., there is no depth difference if a piece of paper lies on a table. The assumption made by these works that farther objects are occluded by closer ones also does not always hold. For example, as shown in Fig. 2, the plate (#1) is occluded by the coffee cup (#5), although the cup is farther in depth. In the supervised stream, several works manually annotate occlusion ordering [34, 25] or rely on synthetic data [11] to learn the ordering in a fully-supervised manner. Another stream of works on panoptic segmentation [21, 15] designs end-to-end training procedures to resolve overlapping segments. However, these works do not explicitly recover the full scene ordering.

Amodal Instance Segmentation. Modal segmentation, such as semantic segmentation [3, 33] and instance segmentation [4, 9, 1], aims at assigning categorical or object labels to visible pixels. Existing approaches for modal segmentation are not able to solve the de-occlusion problem. Different from modal segmentation, amodal instance segmentation aims at detecting objects as well as recovering their amodal (intact) masks. Li et al. [17] produce dummy supervision by pasting artificial occluders, but the absence of explicit ordering increases the difficulty when complicated occlusion relationships are present. Other works take a fully-supervised learning approach by using either manual annotations [34, 25, 7] or synthetic data [11]. As mentioned above, it is costly and inaccurate to annotate invisible masks manually, and approaches relying on synthetic data are confronted with domain gap issues. On the contrary, our approach can convert modal masks into amodal masks in a self-supervised manner. This unique ability facilitates the training of amodal instance segmentation networks without manual amodal annotations.

Amodal Completion. Amodal completion is slightly different from amodal instance segmentation. In amodal completion, modal masks are given at test time and the task is to complete them into amodal masks. Previous works typically rely on heuristic assumptions on the invisible boundaries and perform amodal completion with given ordering relationships. Kimia et al. [14] propose to adopt the Euler spiral for amodal completion. Lin et al. [18] use cubic Bézier curves. Silberman et al. [28] apply curve primitives including straight lines and parabolas. Since these studies still require ordering as input, they cannot be adopted directly to solve the de-occlusion problem. Besides, these unsupervised approaches mainly focus on toy examples with simple shapes. Kar et al. [13] use keypoint annotations to align 3D object templates to 2D image objects, so as to generate the ground truth of amodal bounding boxes. Ehsani et al. [5] leverage 3D synthetic data to train an end-to-end amodal completion network. Like the unsupervised methods, our framework does not need annotations of amodal masks or any kind of 3D/synthetic data; unlike them, it is able to solve amodal completion in highly cluttered natural scenes, where other unsupervised methods fall short.

3 Our Scene De-occlusion Approach

The proposed framework aims at 1) recovering occlusion ordering and 2) completing amodal masks and content of occluded objects. To cope with the absence of manual annotations of occlusion ordering and amodal masks, we design a way to train the proposed PCNet-M and PCNet-C to complete instances partially in a self-supervised manner. With the trained networks, we further propose a progressive inference scheme to perform ordering recovery, ordering-grounded amodal completion, and amodal-constrained content completion to complete objects.

3.1 Partial Completion Networks (PCNets)

Given an image, it is easy to obtain the modal masks of objects via off-the-shelf instance segmentation frameworks. However, their amodal masks are unavailable. Even worse, we do not know whether these modal masks are intact, making the learning of full completion of an occluded instance extremely challenging. The problem motivates us to explore self-supervised partial completion.

Motivation. Suppose an instance's modal mask constitutes a pixel set M, and denote the ground-truth amodal mask as M_a, where M ⊆ M_a. Supervised approaches solve the full completion problem M_a = F_f(M), where F_f denotes the full completion model. This full completion process can be broken down into a sequence of partial completions M = M_0 ⊂ M_1 ⊂ … ⊂ M_k = M_a if the instance is occluded by multiple ‘‘occluders’’, where M_1, …, M_{k−1} are the intermediate states and each step M_i = F_p(M_{i−1}) is performed by the partial completion model F_p.

Since we still do not have any ground truth to train the partial completion step F_p, we take a step back by further trimming M down randomly to obtain M' s.t. M' ⊂ M. Then we train F_p via M = F_p(M'). The self-supervised partial completion approximates the supervised one, laying the foundation of our PCNets. Based on this self-supervised notion, we introduce the Partial Completion Networks (PCNets), comprising two networks for mask completion (PCNet-M) and content completion (PCNet-C), respectively.
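With masks represented as sets of pixels, as in the formulation above, the trimming step can be sketched as follows (an illustrative sketch; `make_trimmed_pair` and the placement range are our own assumptions, not the authors' implementation):

```python
import random

def make_trimmed_pair(modal_mask, eraser_pool):
    """Build one self-supervised partial-completion pair.

    Masks are sets of (x, y) pixels. Lacking amodal ground truth, we
    trim the modal mask M further with a randomly chosen, randomly
    shifted mask from the dataset, obtaining M' (a subset of M), and
    train the partial completion model to map M' back to M.
    """
    eraser = random.choice(eraser_pool)
    dx, dy = random.randint(-5, 5), random.randint(-5, 5)  # random placement
    eraser = {(x + dx, y + dy) for (x, y) in eraser}
    trimmed = modal_mask - eraser   # M': the deliberately trimmed mask
    return trimmed, modal_mask      # (input, target)
```

The key property is that both input and target are derived from the modal mask alone, so no amodal annotation is ever required.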

PCNet-M for Mask Completion. The training of PCNet-M is shown in Fig. 3 (a). We first prepare the training data. Given an instance A along with its modal mask M_A from a dataset with instance-level annotations, we randomly sample another instance B from the dataset and position it randomly to acquire a mask M_B. Here we regard M_A and M_B as sets of pixels. There are two input cases, in which different inputs are fed to the network:

1) The first case corresponds to the aforementioned partial completion strategy. We define M_B as an eraser, and use B to erase part of A to obtain M_A \ M_B. In this case, the PCNet-M is trained to recover the original modal mask M_A from M_A \ M_B, conditioned on M_B.

2) The second case serves as a regularization that discourages the network from over-completing an instance that is not occluded. Specifically, the mask M_B \ M_A, which does not invade A, is regarded as the eraser. In this case, we encourage the PCNet-M to retain the original modal mask M_A, conditioned on M_B \ M_A. Without case 2, the PCNet-M would always encourage the increment of pixels, which may result in over-completion of an instance even when it is not occluded by other neighboring instances.

In both cases, the erased image patch serves as an auxiliary input. We formulate the loss functions as follows:

    L_1 = BCE(F(M_A \ M_B, M_B, I; θ), M_A)    (case 1)
    L_2 = BCE(F(M_A, M_B \ M_A, I; θ), M_A)    (case 2)    (1)

where F is our PCNet-M network, θ represents the parameters to optimize, I is the image patch, and BCE is the Binary Cross-Entropy loss. The final loss is L_1 with probability p and L_2 otherwise, where p is the probability to choose case 1. The random switching between the two cases forces the network to understand the ordering relationship between the two neighboring instances from their shapes and border, so as to determine whether to complete the instance or not.
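The two training cases reduce to a simple rule for building each example (a sketch with masks as pixel sets; `pcnet_m_example` is our own illustrative helper, and the random case selection is made explicit as a parameter):

```python
def pcnet_m_example(mask_a, mask_b, case):
    """Inputs/target for one PCNet-M training example (pixel-set masks).

    Case 1: B erases part of A, and the network must grow the mask back.
    Case 2: the eraser avoids A, and the target equals the input, which
    regularizes against over-completion. During training, the case is
    sampled randomly (case 1 with some probability p, a hyper-parameter).
    """
    if case == 1:
        eraser = mask_b
        net_input = mask_a - eraser       # A erased by B
    else:
        eraser = mask_b - mask_a          # eraser does not invade A
        net_input = mask_a                # A kept intact
    target = mask_a                       # both cases supervise with A's modal mask
    return net_input, eraser, target
```

Because the target is identical in both cases, the network can only decide whether to grow the mask by reasoning about how the eraser borders the instance.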

PCNet-C for Content Completion. PCNet-C follows a similar intuition to PCNet-M, except that the target to complete is RGB content. As shown in Fig. 3 (b), the input instances A and B are the same as those for PCNet-M. Image pixels in the region M_B are erased, and PCNet-C aims at predicting the missing content. Besides, PCNet-C also takes in the remaining mask of A, i.e., M_A \ M_B, to indicate that it is A, rather than any other object, that is painted. Hence, it cannot simply be replaced by standard image inpainting approaches. The loss of PCNet-C to minimize is formulated as follows:

    L = ℓ(G(I ⊙ (1 − M_B), M_A \ M_B; φ), I)    (2)

where G is our PCNet-C network, φ represents the parameters to optimize, I is the image patch, and ℓ is the loss function consisting of common losses in image inpainting, including ℓ_1, perceptual, and adversarial losses. Similar to PCNet-M, training PCNet-C via partial completion enables full completion of the instance content at test time.
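The inputs of PCNet-C can be sketched analogously, with the image as a pixel-to-RGB mapping (an illustrative sketch; `pcnet_c_inputs` is a hypothetical helper, not the authors' code):

```python
def pcnet_c_inputs(image, mask_a, mask_b):
    """Inputs/target for one PCNet-C training example.

    image: dict mapping (x, y) pixel -> RGB tuple; masks are pixel sets.
    Pixels under the eraser M_B are removed from the image, and the
    remaining mask M_A \\ M_B tells the network that it is instance A
    (not some other object) whose content must be repainted.
    """
    erased_image = {p: rgb for p, rgb in image.items() if p not in mask_b}
    remaining_a = mask_a - mask_b      # indicates which object to repaint
    target = image                     # supervision: the original image
    return erased_image, remaining_a, target
```

The extra mask input is what distinguishes this from generic inpainting: the network must paint the erased region as a continuation of A specifically.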

Figure 4: Dual-Completion for ordering recovery. To recover the ordering between a pair of neighboring instances A_1 and A_2, we switch the roles of the target object (in white) and the eraser (in gray). The increment of A_1 is larger than that of A_2, so A_1 is identified as the ‘‘occludee’’.

3.2 Dual-Completion for Ordering Recovery

The target ordering graph is composed of pair-wise occlusion relationships between all neighboring instance pairs. A neighboring instance pair is defined as two instances whose modal masks are connected, so that one of them possibly occludes the other. As shown in Fig. 4, given a pair of neighboring instances A_1 and A_2 with modal masks M_1 and M_2, we first regard M_1 as the target to complete. M_2 serves as the eraser, and we obtain the increment of A_1, i.e., Δ_{1|2} = |F(M_1, M_2, I) \ M_1|. Symmetrically, we also obtain the increment of A_2 conditioned on A_1, i.e., Δ_{2|1}. The instance gaining a larger increment in partial completion is supposed to be the ‘‘occludee’’. Hence, we infer the order between A_1 and A_2 by comparing their incremental areas, as follows:

    O(A_1, A_2) = 0,  if A_1 and A_2 are not neighboring;
    O(A_1, A_2) = 1,  if Δ_{1|2} < Δ_{2|1};
    O(A_1, A_2) = −1, if Δ_{1|2} > Δ_{2|1},    (3)

where O(A_1, A_2) = 1 indicates that A_1 occludes A_2. Note that in practice the probability of Δ_{1|2} = Δ_{2|1} is zero, so this case does not need to be specifically considered. Performing Dual-Completion for all neighboring pairs provides the scene occlusion ordering, which can be represented as a directed graph as shown in Fig. 2. The nodes in the graph represent objects, while the edges indicate the directions of occlusion between neighboring objects. Note that the graph is not necessarily acyclic, as shown in Fig. 7.
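Eq. (3) reduces to comparing mask increments once a partial-completion routine is available (a sketch with masks as pixel sets; `partial_complete` is a stand-in for the trained PCNet-M):

```python
def dual_completion_order(partial_complete, mask_1, mask_2):
    """Pairwise ordering via Dual-Completion (cf. Eq. 3).

    partial_complete(target_mask, eraser_mask) returns the partially
    completed target mask. The instance whose mask grows more when the
    other acts as the eraser is the "occludee". Returns 1 if instance 1
    occludes instance 2, -1 for the reverse, 0 if neither grows.
    """
    inc_1 = len(partial_complete(mask_1, mask_2) - mask_1)  # increment of A1
    inc_2 = len(partial_complete(mask_2, mask_1) - mask_2)  # increment of A2
    if inc_1 == inc_2:
        return 0  # equal (in practice, nonzero ties almost never occur)
    return 1 if inc_1 < inc_2 else -1
```

Any completion routine can be plugged in, e.g., an oracle that knows the true amodal masks, which is useful for testing the ordering logic in isolation.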

Figure 5: (a) Ordering-grounded amodal completion takes the modal mask of the target object (#3) and all its ancestors (#2, #4), as well as the erased image as inputs. With the trained PCNet-M, it predicts the amodal mask of object #3. (b) The intersection of the amodal mask and the ancestors indicates the invisible region of object #3. Amodal-constrained content completion (red arrows) adopts the PCNet-C to fill in the content in the invisible region.

3.3 Amodal and Content Completion

Ordering-Grounded Amodal Completion. We can perform ordering-grounded amodal completion after estimating the ordering graph. Suppose we need to complete an instance A_k. We first find all ancestors of A_k in the graph, which act as the ‘‘occluders’’ of this instance, via breadth-first search (BFS). Since the graph is not necessarily acyclic, we adapt the BFS algorithm accordingly. Interestingly, we find that the trained PCNet-M generalizes to using the union of all ancestors as the eraser. Hence, we do not need to iterate over the ancestors and apply PCNet-M to partially complete step by step. Instead, we perform amodal completion in one step conditioned on the union of all ancestors' modal masks. Denoting the set of ancestors of A_k as A(k), we perform amodal completion as follows:

    M̂_k = F(M_k, ∪_{i ∈ A(k)} M_i, I) ∪ M_k,    (4)

where M̂_k is the resulting amodal mask and M_i is the modal mask of the i-th ancestor. An example is shown in Fig. 5 (a). Fig. 6 shows the reason we use all ancestors rather than only the first-order ancestors.
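The cycle-safe ancestor search can be sketched as a standard BFS with a visited set (a minimal sketch; the adjacency representation `graph[k] = set of direct occluders of k` is our own assumption):

```python
from collections import deque

def occluder_ancestors(graph, node):
    """All ancestors of `node` in the occlusion graph (BFS, cycle-safe).

    graph[k] is the set of direct occluders of instance k. Because the
    graph may contain cycles, we track visited nodes rather than
    assuming a DAG, so the search always terminates.
    """
    seen, queue = set(), deque(graph.get(node, ()))
    while queue:
        occ = queue.popleft()
        if occ in seen or occ == node:
            continue          # skip already-visited nodes and the query itself
        seen.add(occ)
        queue.extend(graph.get(occ, ()))
    return seen
```

The visited set is the only change needed relative to BFS on a DAG; it handles circular occlusions such as the four-paper example in Fig. 7.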

Amodal-Constrained Content Completion. In the previous steps, we obtained the occlusion ordering graph and the predicted amodal mask of each instance. Next, we complete their occluded content. As shown in Fig. 5 (b), the intersection of the predicted amodal mask and the ancestors, E_k = M̂_k ∩ (∪_{i ∈ A(k)} M_i), indicates the missing part of A_k and is regarded as the eraser for PCNet-C. Then we apply the trained PCNet-C to fill in the content as follows:

    Ĉ_k = G(I ⊙ (1 − E_k), M̂_k \ E_k; φ) ⊙ M̂_k,    (5)

where Ĉ_k is the decomposed content of A_k from the scene. For background content, we use the union of all foreground instances as the eraser. Different from image inpainting, which is unaware of occlusion, content completion is performed on the estimated occluded regions.
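The eraser construction in Eq. (5) is a plain set intersection (a sketch with masks as pixel sets; `content_completion_eraser` is an illustrative helper name of our own):

```python
def content_completion_eraser(amodal_mask, ancestor_masks):
    """Eraser for amodal-constrained content completion (cf. Fig. 5b).

    The region to repaint is the part of the predicted amodal mask that
    lies under the modal masks of the instance's occluders. With no
    occluders, nothing needs to be repainted.
    """
    covered = set().union(*ancestor_masks) if ancestor_masks else set()
    return amodal_mask & covered
```

Restricting the eraser to this intersection is what makes the step occlusion-aware, in contrast to generic inpainting over arbitrary holes.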

Figure 6: This figure shows why we need to find all ancestors rather than only the first-order ancestors: although higher-order ancestors (e.g., instance #3) do not directly occlude the target instance (#1), they may occlude it indirectly and thus need to be taken into account.

4 Experiments

We now evaluate our method in various applications including ordering recovery, amodal completion, amodal instance segmentation, and scene manipulation. The implementation details and more qualitative results can be found in the supplementary materials.

Datasets. 1) KINS [25], derived from KITTI [8], is a large-scale traffic dataset with annotated modal and amodal masks of instances. PCNets are trained on the training split (7,474 images, 95,311 instances) with modal annotations. We test our de-occlusion framework on the testing split (7,517 images, 92,492 instances). 2) COCOA [34] is a subset of COCO2014 [19] annotated with pair-wise ordering, modal, and amodal masks. We train PCNets on the training split (2,500 images, 22,163 instances) using modal annotations and test on the validation split (1,323 images, 12,753 instances). Instance categories are unavailable for this dataset, so we set the category id to a constant 1 when training PCNets on it.

4.1 Comparison Results

Ordering Recovery. We report ordering recovery performance on COCOA and KINS in Table 1. We reproduced the OrderNet proposed in [34] to obtain the supervised results. Baselines include sorting bordered instance pairs by Area (we optimize this heuristic per dataset: a larger instance is treated as the front object for KINS, and the opposite for COCOA), by Y-axis (an instance closer to the image bottom is in front), and by a Convex prior. For the Convex baseline, we compute the convex hull of the modal masks to approximate amodal completion, and the object with the larger increment is regarded as the occludee. All baselines have been adjusted to achieve their respective best performances. On both benchmarks, our method achieves much higher accuracy than the baselines, comparable to the supervised counterparts. An interesting case is shown in Fig. 7, where four objects overlap circularly. Since our ordering recovery algorithm recovers pair-wise ordering rather than a sequential ordering, it is able to solve this case and recover the cyclic directed graph.

Figure 7: Our framework is able to solve circularly occluded cases. Since such case is rare, we cut four pieces of paper to compose it.
method               gt order (train)   COCOA   KINS
Supervised
  OrderNetM [34]           yes          81.7    87.5
  OrderNetM+I [34]         yes          88.3    94.1
Unsupervised
  Area                     no           62.4    77.4
  Y-axis                   no           58.7    81.9
  Convex                   no           76.0    76.3
  Ours                     no           87.1    92.5
Table 1: Ordering estimation on the COCOA validation and KINS testing sets, reported as pair-wise accuracy on occluded instance pairs.

Amodal Completion. We first introduce the baselines. For the supervised method, amodal annotations are available, and a UNet is trained to predict amodal masks from modal masks end-to-end. Raw means no completion is performed. Convex represents computing the convex hull of the modal mask as the amodal mask. Since the convex hull usually leads to over-completion, i.e., extending the visible mask, we improve this baseline by using the predicted order to refine the convex hull, constituting a stronger baseline: ConvexR, which performs well for naturally convex objects. Ours (NOG) represents non-ordering-grounded amodal completion, which relies on our PCNet-M but regards all neighboring objects as the eraser rather than using the occlusion ordering to search for ancestors. Ours (OG) is our ordering-grounded amodal completion method.

We evaluate amodal completion on ground truth modal masks, as shown in Table 2. Our method surpasses the baseline approaches and is comparable to the supervised counterpart. The comparison between OG and NOG shows the importance of ordering in amodal completion. As shown in Fig. 9, some of our results are potentially more natural than the manual annotations.

method        amodal (train)   COCOA %mIoU   KINS %mIoU
Supervised         yes             82.53        94.81
Raw                no              65.47        87.03
ConvexR            no              74.43        90.75
Ours (NOG)         no              76.91        93.42
Ours (OG)          no              81.35        94.76
Table 2: Amodal completion on the COCOA validation and KINS testing sets, using ground truth modal masks.

Apart from using ground truth modal masks as the input at test time, we also verify the effectiveness of our approach with predicted modal masks as the input. Specifically, we train a UNet to predict modal masks from an image. In order to correctly match the predicted modal masks to the corresponding ground truth amodal masks in evaluation, we use the bounding box as an additional input to this network. We predict the modal masks on the testing set, yielding 52.7% mAP against the ground truth modal masks. We then use the predicted modal masks as the input to perform amodal completion. As shown in Table 3, our approach still achieves high performance, comparable to the supervised counterpart.

method        amodal (train)   KINS %mIoU
Supervised         yes            87.29
Raw                no             82.05
ConvexR            no             84.12
Ours (NOG)         no             85.39
Ours (OG)          no             86.26
Table 3: Amodal completion on the KINS testing set, using predicted modal masks (mAP 52.7%).
Ann. source   modal (train)   amodal (train)   %mAP
GT [25]            yes              yes         29.3
Raw                yes              no          22.7
Convex             yes              no          22.2
ConvexR            yes              no          25.9
Ours               yes              no          29.3
Table 4: Amodal instance segmentation on the KINS testing set. ConvexR means using the predicted order to refine the convex hull. In this experimental setting, all methods detect and segment instances from raw images; hence, modal masks are not used in testing.
Figure 8: By training the self-supervised PCNet-M on a modal dataset (e.g., KITTI shown here) and applying our amodal completion algorithm on the same dataset, we are able to freely convert modal annotations into pseudo amodal annotations. Note that such self-supervised conversion is intrinsically different from training a supervised model on a small labeled amodal dataset and applying it to a larger modal dataset, where the generalizability between different datasets can be an issue.
Figure 9: Amodal completion results. Our results are potentially more natural than manual annotations (GT) in some cases, especially for instances in yellow.
Figure 10: Scene synthesis by changing the ordering graph. Reversed orderings are shown in red arrows. Uncommon cases with circular ordering can also be synthesized.
Figure 11: This figure shows rich and high-quality manipulations, including deleting, swapping, shifting and repositioning instances, enabled by our approach. The baseline method modal-based manipulation is based on image inpainting, where modal masks are provided, order and amodal masks are unknown. Better in zoomed-in view. More examples can be found in the supplementary material.

Label Conversion for Amodal Instance Segmentation. Amodal instance segmentation aims at detecting instances and predicting amodal masks from images simultaneously. With our approach, one can convert an existing dataset with modal annotations into one with pseudo amodal annotations, thus allowing amodal instance segmentation networks to be trained without manual amodal annotations. This is achieved by training PCNet-M on the modal masks of the training split and applying our amodal completion algorithm to the same training split to obtain the corresponding amodal masks, as shown in Fig. 8. To evaluate the quality of the pseudo amodal annotations, we train a standard Mask R-CNN [9] for amodal instance segmentation following the setting in [25]. All baselines follow the same training protocol, except that the amodal annotations used for training differ. As shown in Table 4, using our inferred amodal bounding boxes and masks, we achieve the same performance (mAP 29.3%) as the model trained with manual amodal annotations. Besides, our inferred amodal masks on the training set are highly consistent with the manual annotations (mIoU 95.22%). The results suggest the high applicability of our method for obtaining reliable pseudo amodal mask annotations, relieving the burden of manual annotation on large-scale instance-level datasets.

4.2 Application on Scene Manipulation

Our scene de-occlusion framework allows us to decompose a scene into the background and isolated completed objects, along with an occlusion ordering graph. Therefore, manipulating scenes by controlling order and positions is made possible. Fig. 10 shows scene synthesis by controlling order only. Fig. 11 shows more manipulation cases, indicating that our de-occlusion framework, though trained without any extra information compared to the baseline, enables high-quality occlusion-aware manipulation.

5 Conclusion

To summarize, we have proposed a unified scene de-occlusion framework equipped with self-supervised PCNets trained without ordering or amodal annotations. The framework is applied in a progressive way to recover occlusion orderings and then perform amodal and content completion. It achieves performance comparable to fully-supervised counterparts on real-world datasets. It can also convert existing modal annotations into amodal annotations; quantitative results show that the converted annotations are as effective as manual ones. Furthermore, our framework enables high-quality occlusion-aware scene manipulation, providing a new dimension for image editing.

Acknowledgement: This work is supported by the SenseTime-NTU Collaboration Project, Collaborative Research grant from SenseTime Group (CUHK Agreement No. TS1610626 & No. TS1712093), Singapore MOE AcRF Tier 1 (2018-T1-002-056), NTU SUG, and NTU NAP.

References

  • [1] K. Chen, J. Pang, J. Wang, Y. Xiong, X. Li, S. Sun, W. Feng, Z. Liu, J. Shi, W. Ouyang, et al. (2019) Hybrid task cascade for instance segmentation. In CVPR, pp. 4974–4983. Cited by: §1, §2.
  • [2] K. Chen, J. Wang, S. Yang, X. Zhang, Y. Xiong, C. Change Loy, and D. Lin (2018-06) Optimizing video object detection via a scale-time lattice. In CVPR, Cited by: §1.
  • [3] L. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille (2017) Deeplab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. PAMI 40 (4), pp. 834–848. Cited by: §1, §2.
  • [4] J. Dai, K. He, Y. Li, S. Ren, and J. Sun (2016) Instance-sensitive fully convolutional networks. In ECCV, pp. 534–549. Cited by: §1, §2.
  • [5] K. Ehsani, R. Mottaghi, and A. Farhadi (2018) Segan: segmenting and generating the invisible. In CVPR, pp. 6144–6153. Cited by: §1, §2.
  • [6] P. F. Felzenszwalb, R. B. Girshick, and D. McAllester (2010) Cascade object detection with deformable part models. In CVPR, pp. 2241–2248. Cited by: §1.
  • [7] P. Follmann, R. K. Nig, P. H. Rtinger, M. Klostermann, and T. B. Ttger (2019) Learning to see the invisible: end-to-end trainable amodal instance segmentation. In WACV, pp. 1328–1336. Cited by: §1, §2.
  • [8] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun (2013) Vision meets robotics: the kitti dataset. The International Journal of Robotics Research 32 (11), pp. 1231–1237. Cited by: §1, §4.
  • [9] K. He, G. Gkioxari, P. Dollár, and R. Girshick (2017) Mask r-cnn. In ICCV, Cited by: §1, §2, §4.1.
  • [10] D. Hoiem, A. N. Stein, A. A. Efros, and M. Hebert (2007) Recovering occlusion boundaries from a single image. In ICCV, Cited by: §2.
  • [11] Y. Hu, H. Chen, K. Hui, J. Huang, and A. G. Schwing (2019) SAIL-vos: semantic amodal instance level video object segmentation-a synthetic dataset and baselines. In CVPR, pp. 3105–3115. Cited by: §1, §2, §2.
  • [12] G. Kanizsa (1979) Organization in vision: essays on gestalt perception. Praeger Publishers. Cited by: §1.
  • [13] A. Kar, S. Tulsiani, J. Carreira, and J. Malik (2015) Amodal completion and size constancy in natural scenes. In ICCV, pp. 127–135. Cited by: §2.
  • [14] B. B. Kimia, I. Frankel, and A. Popescu (2003) Euler spiral for shape completion. IJCV 54 (1-3), pp. 159–182. Cited by: §2.
  • [15] J. Lazarow, K. Lee, and Z. Tu (2019) Learning instance occlusion for panoptic segmentation. arXiv preprint arXiv:1906.05896. Cited by: §2.
  • [16] S. Lehar (1999) Gestalt isomorphism and the quantification of spatial perception. Gestalt theory 21, pp. 122–139. Cited by: §1.
  • [17] K. Li and J. Malik (2016) Amodal instance segmentation. In ECCV, pp. 677–693. Cited by: §2.
  • [18] H. Lin, Z. Wang, P. Feng, X. Lu, and J. Yu (2016) A computational model of topological and geometric recovery for visual curve completion. Computational Visual Media 2 (4), pp. 329–342. Cited by: §2.
  • [19] T. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, and P. Dollár (2014) Microsoft coco: common objects in context. In ECCV, Cited by: §1, §4.
  • [20] G. Liu, F. A. Reda, K. J. Shih, T. Wang, A. Tao, and B. Catanzaro (2018) Image inpainting for irregular holes using partial convolutions. In ECCV, Cited by: Appendix A, Appendix A.
  • [21] H. Liu, C. Peng, C. Yu, J. Wang, X. Liu, G. Yu, and W. Jiang (2019) An end-to-end network for panoptic segmentation. In CVPR, pp. 6172–6181. Cited by: §2.
  • [22] Z. Liu, X. Li, P. Luo, C. Loy, and X. Tang (2015) Semantic image segmentation via deep parsing network. In ICCV, Cited by: §1.
  • [23] S. E. Palmer (1999) Vision science: photons to phenomenology. MIT press. Cited by: §1.
  • [24] P. Purkait, C. Zach, and I. Reid (2019) Seeing behind things: extending semantic segmentation to occluded regions. arXiv preprint arXiv:1906.02885. Cited by: §2.
  • [25] L. Qi, L. Jiang, S. Liu, X. Shen, and J. Jia (2019) Amodal instance segmentation with kins dataset. In CVPR, pp. 3014–3023. Cited by: §1, §2, §2, §4.1, Table 4, §4.
  • [26] S. Ren, K. He, R. Girshick, and J. Sun (2015) Faster r-cnn: towards real-time object detection with region proposal networks. In NIPS, Cited by: §1.
  • [27] O. Ronneberger, P. Fischer, and T. Brox (2015) U-net: convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pp. 234–241. Cited by: Appendix A.
  • [28] N. Silberman, L. Shapira, R. Gal, and P. Kohli (2014) A contour completion model for augmenting surface reconstructions. In ECCV, Cited by: §2.
  • [29] J. Tighe, M. Niethammer, and S. Lazebnik (2014) Scene parsing with object instances and occlusion ordering. In CVPR, pp. 3748–3755. Cited by: §2.
  • [30] J. Wang, K. Chen, R. Xu, Z. Liu, C. C. Loy, and D. Lin (2019-10) CARAFE: content-aware reassembly of features. In ICCV, Cited by: §1.
  • [31] J. Wang, K. Chen, S. Yang, C. C. Loy, and D. Lin (2019) Region proposal by guided anchoring. In CVPR, Cited by: §1.
  • [32] J. Wu, J. B. Tenenbaum, and P. Kohli (2017) Neural scene de-rendering. In CVPR, Cited by: §2.
  • [33] H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia (2017) Pyramid scene parsing network. In CVPR, pp. 2881–2890. Cited by: §1, §2.
  • [34] Y. Zhu, Y. Tian, D. Metaxas, and P. Dollár (2017) Semantic amodal segmentation. In CVPR, pp. 1464–1472. Cited by: §1, §2, §2, §4.1, Table 1, §4.

Appendix A Implementation Details

In our experiments, the backbone for PCNet-M is a UNet [27] with a widening factor of 2, and that for PCNet-C is a UNet equipped with partial convolution layers [20]; note, however, that PCNets place no restrictions on the backbone architecture. For both PCNets, the image or mask patch centered on an object is cropped by an adaptive square and resized to 256×256 as input.
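For concreteness, the adaptive square crop can be sketched as below. This is an illustrative helper, not the authors' code; the enlargement factor and centering rule are assumptions, since the text only states that a square patch centered on the object is cropped and resized to 256×256.

```python
def adaptive_square_crop(bbox, enlarge=2.0):
    """Compute a square crop window centered on an object's bounding box.

    bbox: (x, y, w, h) of the object's modal mask.
    Returns (left, top, side); the crop is then resized to 256x256.
    The side length (enlarged longer bbox side) is an assumption.
    """
    x, y, w, h = bbox
    cx, cy = x + w / 2.0, y + h / 2.0      # object center
    side = max(w, h) * enlarge             # adaptive square side
    return cx - side / 2.0, cy - side / 2.0, side

# e.g. a 40x30 box at (10, 20) yields an 80x80 window centered on it
left, top, side = adaptive_square_crop((10, 20, 40, 30))
```

The window may extend beyond the image border, in which case the patch would typically be padded before resizing.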

For COCOA, PCNet-M is trained with SGD for 56K iterations with an initial learning rate of 0.001, decayed by 0.1 at iterations 32K and 48K. For KINS, we stop training earlier, at 32K iterations. The batch size is 256, distributed over 8 GPUs (GTX 1080 Ti). The hyper-parameter that balances the two cases in training PCNet-M is set to . In the current experiments, we do not use RGB as an input to PCNet-M, since we find empirically that introducing RGB via concatenation makes little difference. This is probably because, for these two datasets, the modal masks are informative enough for training; in more complicated scenes, we believe RGB would exert more influence if introduced in a better way.
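The stated step schedule for COCOA (initial rate 1e-3, decayed by 0.1 at 32K and again at 48K over 56K iterations) can be written as a small helper; this is an illustration of the schedule, not the authors' training code:

```python
def pcnet_m_lr(iteration, base_lr=1e-3, decay_iters=(32000, 48000), gamma=0.1):
    """Step learning-rate schedule for PCNet-M on COCOA:
    start at 1e-3, multiply by 0.1 at iterations 32K and 48K."""
    lr = base_lr
    for it in decay_iters:
        if iteration >= it:
            lr *= gamma
    return lr
```

For KINS, training simply stops at 32K iterations, so only the first decay point would ever be reached.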

For PCNet-C, we modify the UNet to take the concatenation of the image and the modal mask as input. Apart from the losses in [20], we add an extra adversarial loss for optimization. The discriminator is a stack of 5 convolution layers with spectral normalization and leaky ReLU. PCNet-C is fine-tuned for 450K iterations with a constant learning rate from a pre-trained inpainting network [20], whose pre-trained weights we adapt to be compatible with the additional modal mask input.

Appendix B Discussions

B.1 Analysis on varying occlusion ratio.

Fig. 12 shows the amodal completion performance of different approaches under varying ratios of occluded area. Naturally, larger occlusion ratios lead to lower performance. Under high occlusion ratios, our full method (Ours (OG)) surpasses the baseline methods by a large margin.
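The occlusion ratio used for this analysis is the invisible fraction of an object's full (amodal) mask. A minimal sketch with binary masks as nested lists (the evaluation code presumably operates on image arrays, so this is only illustrative):

```python
def occlusion_ratio(modal, amodal):
    """Fraction of the amodal (full) mask that is occluded.

    modal, amodal: same-shape nested lists of 0/1; the modal mask is
    assumed to be a subset of the amodal mask.
    """
    modal_area = sum(map(sum, modal))
    amodal_area = sum(map(sum, amodal))
    return 1.0 - modal_area / amodal_area

# an object half hidden behind another has ratio 0.5
```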

B.2 Does it support mutual occlusion?

As a limitation, our approach does not support cases where two objects mutually occlude each other, as shown in Fig. 13, because our approach focuses on object-level de-occlusion. For mutual occlusions, the ordering graph cannot be defined, so fine-grained boundary-level de-occlusion is required. This remains an open question for the scene de-occlusion problem. Nonetheless, our approach works well when more than two objects are cyclically occluded, as shown in Fig. 7 of the main paper.
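In the ordering-graph view discussed above, mutual occlusion corresponds to a 2-cycle in the directed occlusion graph, whereas longer cycles (three or more objects cyclically occluding one another) remain well-defined. A minimal sketch of detecting the unsupported pairs, assuming pairwise order decisions are already given as directed edges (the edge representation is our assumption for illustration):

```python
def mutual_pairs(edges):
    """Detect mutual occlusions: pairs where each object occludes the
    other, i.e. 2-cycles in the occlusion-order graph.

    edges: set of (occluder, occludee) index pairs. Longer cycles, e.g.
    A over B over C over A, contain no 2-cycle and stay supported;
    only 2-cycles make the ordering graph ill-defined.
    """
    return {(a, b) for (a, b) in edges if (b, a) in edges and a < b}
```

For example, the cyclic triple {(1, 2), (2, 3), (3, 1)} yields no mutual pairs and is handled by the framework.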

B.3 Will case 2 mislead PCNet-M?

As shown in Fig. 14, one may be concerned that in case (a-2), when the not-to-complete strategy is applied, the boundary between A and the surrogate occluder B might include a contour (shown in green) along which A is occluded by a real object. It might therefore teach PCNet-M a wrong lesson if the yellow shaded region is taught not to be filled.

Figure 12: Performances of different approaches under a growing occlusion ratio, evaluated on KINS testing set.
Figure 13: Mutual occlusion cases. Green boundaries show one object occluding the other, and red boundaries vice versa.
Figure 14: (a-1) and (a-2) represent case 1 and case 2 in training, respectively; (b) - (d) represent possible cases in testing. Among the test cases, only the A in (b) will be completed.

Here we explain why this will not teach PCNet-M the wrong lesson. First of all, PCNet-M learns to complete or not to complete the target object conditioned on a surrogate occluder. As shown in Fig. 14, since PCNet-M is taught to complete A in (a-1) while not to complete it in (a-2), it has to discover cues indicating that A is below B in (a-1) and above B in (a-2). The cues might include the shapes of the two objects, the shape of their common boundary, junctions, etc. At test time, e.g. in (b), when regarding the real B as the condition, it is easy for PCNet-M to tell from those cues that A is below B. Therefore PCNet-M actually inclines to case 1, and A will be completed conditioned on B.

Then which cases does the not-to-complete strategy affect? The case in (c) shares very similar occlusion patterns with (a-2), especially in the upper-right part of the common boundary, showing strong cues that A is above B; in this case PCNet-M will not complete A, as expected. However, case (c) is abnormal and unlikely to exist in the real world. The situation where the not-to-complete strategy really takes effect is case (d). Here, when strong cues indicate that A is above B, PCNet-M is taught not to extend A across the boundary to invade B.
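The two training cases discussed above can be summarized with a toy sketch (binary masks as nested lists; the real PCNet-M consumes image-sized tensors and also receives the surrogate occluder mask as an input channel, which we omit here):

```python
def make_training_pair(modal, surrogate, case):
    """Toy illustration of the two self-supervised cases for PCNet-M.

    case 1: the surrogate B erases part of A's modal mask; the ground
            truth is the original mask, so the network learns to complete.
    case 2: B is overlaid but A's mask is kept intact; the ground truth
            equals the input, so the network learns NOT to grow across B.
    """
    if case == 1:
        inp = [[m & (1 - s) for m, s in zip(mr, sr)]
               for mr, sr in zip(modal, surrogate)]
    else:
        inp = modal
    return inp, modal  # ground truth is always A's current modal mask
```

Because the same network sees both cases, it must rely on ordering cues (shapes, common boundary, junctions) rather than memorizing either behavior.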

Appendix C Visualization

As shown in Fig. 15, our approach enables us to freely adjust the spatial configuration of a scene to re-compose new scenes. The quality could be further improved with advances in image inpainting, since PCNet-C shares a similar network architecture and training strategy with image inpainting models.

Figure 15: Scene manipulation results based on our de-occlusion framework. Inconspicuous changes are marked with red arrows. A video demo can be found on the project page: https://xiaohangzhan.github.io/projects/deocclusion/.