1 Introduction
Scene understanding is one of the foundations of machine perception. A real-world scene, regardless of its context, often comprises multiple objects of varying ordering and positioning, with some objects partially occluded by others. Hence, scene understanding systems should be able to perform modal perception, i.e., parsing the directly visible regions, as well as amodal perception [12, 23, 16], i.e., perceiving the intact structures of entities including their invisible parts. The advent of advanced deep networks along with large-scale annotated datasets has facilitated many scene understanding tasks, e.g., object detection [6, 26, 31, 2], scene parsing [22, 3, 33], and instance segmentation [4, 9, 1, 30]. Nonetheless, these tasks mainly concentrate on modal perception, while amodal perception remains rarely explored to date.
A key problem in amodal perception is scene de-occlusion, which involves the subtasks of recovering the underlying occlusion ordering and completing the invisible parts of occluded objects. While the human visual system performs scene de-occlusion intuitively, resolving occlusions is highly challenging for machines. First, the relationships between an object that occludes other object(s), called an ‘‘occluder’’, and an object that is occluded by other object(s), called an ‘‘occludee’’, are profoundly complicated. This is especially true when there are multiple ‘‘occluders’’ and ‘‘occludees’’ with high intricacies between them, namely an ‘‘occluder’’ that occludes multiple ‘‘occludees’’ and an ‘‘occludee’’ that is occluded by multiple ‘‘occluders’’, forming a complex occlusion graph. Second, depending on the category, orientation, and position of objects, the boundaries of ‘‘occludees’’ are elusive; no simple priors can be applied to recover the invisible boundaries.
A possible solution for scene de-occlusion is to train a model with ground truth of occlusion orderings and amodal masks (i.e., intact instance masks). Such ground truth can be obtained either from synthetic data [5, 11] or from manual annotations on real-world data [34, 25, 7], each with specific limitations. The former introduces an inevitable domain gap between the fabricated training data and the real-world scenes seen at test time. The latter relies on the subjective interpretation of individual annotators to demarcate occluded boundaries, and is therefore subject to bias; it also requires repeated annotations from different annotators to reduce noise, which is laborious and costly. A more practical and scalable way is to learn scene de-occlusion from the data itself rather than from annotations.
In this work, we propose a novel self-supervised framework that tackles scene de-occlusion on real-world data without manual annotations of occlusion ordering or amodal masks. In the absence of such ground truth, an end-to-end supervised learning framework is no longer applicable. We therefore introduce a unique concept: partial completion of occluded objects. Two core precepts of the partial completion notion enable scene de-occlusion in a self-supervised manner. First, the process of completing an ‘‘occludee’’ occluded by multiple ‘‘occluders’’ can be broken down into a sequence of partial completions, each involving one ‘‘occluder’’ at a time. Second, a network can learn to perform partial completion by deliberately trimming the ‘‘occludee’’ down further and being trained to recover the previous, untrimmed occludee. We show that partial completion is sufficient both to complete an occluded object progressively and to facilitate reasoning about occlusion ordering.
Partial completion is executed via two networks, i.e., Partial Completion Network-mask and -content. We abbreviate them as PCNet-M and PCNet-C, respectively. PCNet-M is trained to partially recover the invisible mask of the ‘‘occludee’’ corresponding to an occluder, while PCNet-C is trained to partially fill in the recovered mask with RGB content. PCNet-M and PCNet-C form the two core components of our framework to address scene de-occlusion.
As illustrated in Fig. 2, the proposed framework takes as inputs a real-world scene and the corresponding modal masks of its objects, derived either from annotations or from the predictions of existing modal segmentation techniques. Our framework then streamlines three subtasks that are tackled progressively: 1) Ordering Recovery. Given a pair of neighboring objects, where one may occlude the other, the roles of the two objects are determined following the principle that PCNet-M partially completes the mask of the ‘‘occludee’’ while keeping the ‘‘occluder’’ unmodified. We recover the ordering of all neighboring pairs and obtain a directed graph that captures the occlusion order among all objects. 2) Amodal Completion. For a specific ‘‘occludee’’, the ordering graph indicates all its ‘‘occluders’’. Grounded on this information and reusing PCNet-M, an amodal completion method is devised to fully complete the modal mask into the amodal mask of the ‘‘occludee’’. 3) Content Completion. The predicted amodal mask indicates the occluded region of an ‘‘occludee’’. Using PCNet-C, we furnish the invisible region with RGB content. With such a progressive framework, we decompose a complicated scene into isolated and intact objects, along with a highly accurate occlusion ordering graph, allowing subsequent manipulation of the ordering and positioning of objects to recompose a new scene, as shown in Fig. 1.
We summarize our contributions as follows: 1) We streamline scene de-occlusion into three subtasks, namely ordering recovery, amodal completion, and content completion. 2) We propose PCNets and a novel inference scheme to perform scene de-occlusion without the corresponding manual annotations, yet we observe results comparable to fully-supervised approaches on datasets of real scenes. 3) The self-supervised nature of our approach shows its potential to endow large-scale instance segmentation datasets, e.g., KITTI and COCO, with high-accuracy ordering and amodal annotations. 4) Our scene de-occlusion framework represents a novel enabling technology for real-world scene manipulation and recomposition, providing a new dimension for image editing.
2 Related Work
Ordering Recovery. In the unsupervised stream, Wu et al.  propose to recover ordering by re-composing the scene with object templates; however, they only demonstrate the system on toy data. Tighe et al.  build a prior occlusion matrix between classes on the training set and solve a quadratic program to recover the ordering at test time. Such an inter-class occlusion prior ignores the complexity of realistic scenes. Other works [10, 24] rely on additional depth cues. However, depth is not reliable for occlusion reasoning, e.g., there is no depth difference when a piece of paper lies on a table. The assumption made by these works that farther objects are occluded by closer ones also does not always hold. For example, as shown in Fig. 2, the plate (#1) is occluded by the coffee cup (#5), although the cup is farther in depth. In the supervised stream, several works manually annotate occlusion ordering [34, 25] or rely on synthetic data  to learn the ordering in a fully-supervised manner. Another line of work on panoptic segmentation [21, 15] designs end-to-end training procedures to resolve overlapping segments; however, these methods do not explicitly recover the full scene ordering.
Amodal Instance Segmentation. Modal segmentation, such as semantic segmentation [3, 33] and instance segmentation [4, 9, 1], aims at assigning categorical or object labels to visible pixels. Existing approaches for modal segmentation cannot solve the de-occlusion problem. Different from modal segmentation, amodal instance segmentation aims at detecting objects as well as recovering their amodal (intact) masks. Li et al.  produce dummy supervision by pasting artificial occluders, but the absence of explicit ordering increases the difficulty when complicated occlusion relationships are present. Other works take a fully-supervised approach using either manual annotations [34, 25, 7] or synthetic data . As mentioned above, annotating invisible masks manually is costly and inaccurate, and approaches relying on synthetic data are confronted with domain gap issues. In contrast, our approach can convert modal masks into amodal masks in a self-supervised manner. This unique ability facilitates the training of amodal instance segmentation networks without manual amodal annotations.
Amodal Completion. Amodal completion is slightly different from amodal instance segmentation. In amodal completion, modal masks are given at test time and the task is to complete them into amodal masks. Previous works typically rely on heuristic assumptions about the invisible boundaries and perform amodal completion with given ordering relationships. Kimia et al.  propose to adopt the Euler spiral for amodal completion. Lin et al.  use cubic Bézier curves. Silberman et al.  apply curve primitives including straight lines and parabolas. Since these studies still require ordering as input, they cannot be adopted directly to solve the de-occlusion problem. Besides, these unsupervised approaches mainly focus on toy examples with simple shapes. Kar et al.  use keypoint annotations to align 3D object templates to 2D image objects, so as to generate ground truth amodal bounding boxes. Ehsani et al.  leverage 3D synthetic data to train an end-to-end amodal completion network. Like unsupervised methods, our framework needs no annotations of amodal masks or any kind of 3D/synthetic data; unlike them, it is able to solve amodal completion in highly cluttered natural scenes, where other unsupervised methods fall short.
3 Our Scene De-occlusion Approach
The proposed framework aims at 1) recovering occlusion ordering and 2) completing amodal masks and content of occluded objects. To cope with the absence of manual annotations of occlusion ordering and amodal masks, we design a way to train the proposed PCNet-M and PCNet-C to complete instances partially in a self-supervised manner. With the trained networks, we further propose a progressive inference scheme to perform ordering recovery, ordering-grounded amodal completion, and amodal-constrained content completion to complete objects.
3.1 Partial Completion Networks (PCNets)
Given an image, it is easy to obtain the modal masks of objects via off-the-shelf instance segmentation frameworks. However, their amodal masks are unavailable. Even worse, we do not know whether these modal masks are intact, making the learning of full completion of an occluded instance extremely challenging. The problem motivates us to explore self-supervised partial completion.
Motivation. Suppose an instance’s modal mask constitutes a pixel set $M_0$, and denote the ground truth amodal mask as $M_K$. Supervised approaches solve the full completion problem $M_K = F(M_0)$, where $F$ denotes the full completion model. This full completion process can be broken down into a sequence of partial completions $M_{k+1} = f(M_k)$, $k = 0, \dots, K-1$, if the instance is occluded by multiple ‘‘occluders’’, where $M_1, \dots, M_{K-1}$ are the intermediate states and $f$ denotes the partial completion model.
Since we still do not have any ground truth to train the partial completion step $f$, we take a step back and further trim $M_0$ down randomly to obtain $M_{-1}$ s.t. $M_{-1} \subseteq M_0$. Then we train $f$ via $M_0 = f(M_{-1})$. This self-supervised partial completion approximates the supervised one, laying the foundation of our PCNets. Based on this self-supervised notion, we introduce Partial Completion Networks (PCNets). They contain two networks, respectively, for mask completion (PCNet-M) and content completion (PCNet-C).
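As an illustrative sketch (not the authors' implementation), the snippet below treats masks as boolean arrays and chains one partial completion per occluder, mirroring the decomposition $M_{k+1} = f(M_k)$; `partial_complete` is a placeholder for a trained PCNet-M, mocked in the usage below by an oracle that knows the true amodal mask.

```python
import numpy as np

def full_completion_via_partial(modal, occluder_masks, partial_complete):
    """Complete an occluded instance by chaining partial completions,
    one occluder at a time."""
    m = modal
    for occ in occluder_masks:
        # each step may only grow the mask, so union with the previous state
        m = m | partial_complete(m, occ)
    return m
```

With an oracle completer, two partial steps recover the full amodal mask even though no single step sees both occluders, which is the behavior the paper's decomposition relies on.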
PCNet-M for Mask Completion. The training of PCNet-M is shown in Fig. 3 (a). We first prepare the training data. Given an instance A along with its modal mask $M_A$ from a dataset $\mathcal{D}$ with instance-level annotations, we randomly sample another instance B from $\mathcal{D}$ and position it randomly to acquire a mask $M_B$. Here we regard $M_A$ and $M_B$ as sets of pixels. There are two input cases, in which different input is fed to the network:
1) The first case corresponds to the aforementioned partial completion strategy. We define $M_B$ as an eraser and use B to erase part of A, obtaining $M_A \setminus M_B$. In this case, PCNet-M is trained to recover the original modal mask $M_A$ from $M_A \setminus M_B$, conditioned on $M_B$.
2) The second case serves as a regularization to discourage the network from over-completing an instance that is not occluded. Specifically, an $M_B$ that does not invade A is regarded as the eraser. In this case, we encourage PCNet-M to retain the original modal mask $M_A$, conditioned on $M_B$. Without case 2, PCNet-M would always encourage an increment of pixels, which may result in over-completion of an instance that is not occluded by its neighboring instances.
In both cases, the erased image patch serves as an auxiliary input. We formulate the loss functions of the two cases as follows:
$\mathcal{L}_1 = \ell_{\mathrm{BCE}}\big(\mathcal{P}_M(M_A \setminus M_B,\; M_B,\; I;\, \theta),\; M_A\big),$
$\mathcal{L}_2 = \ell_{\mathrm{BCE}}\big(\mathcal{P}_M(M_A,\; M_B,\; I;\, \theta),\; M_A\big),$
where $\mathcal{P}_M$ is our PCNet-M network, $\theta$ represents the parameters to optimize, $I$ is the image patch, and $\ell_{\mathrm{BCE}}$ is the binary cross-entropy loss. We formulate the final loss function as $\mathcal{L} = \gamma \mathcal{L}_1 + (1-\gamma)\mathcal{L}_2$, where $\gamma$ is the probability of choosing case 1. The random switching between the two cases forces the network to understand the ordering relationship between two neighboring instances from their shapes and common border, so as to determine whether or not to complete the instance.
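A minimal sketch of how such training pairs could be assembled from modal masks alone; the function name, the input convention (boolean HxW masks), and the default `gamma=0.8` are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def make_pcnet_m_sample(mask_a, mask_b, gamma=0.8, rng=None):
    """Build one PCNet-M training triple (input_mask, eraser, target)
    from the modal masks of instance A (target) and a randomly placed
    instance B (surrogate occluder), both boolean HxW arrays."""
    if rng is None:
        rng = np.random.default_rng()
    if rng.random() < gamma:
        # Case 1: B erases part of A; the net must recover the untrimmed A.
        return mask_a & ~mask_b, mask_b, mask_a
    # Case 2 (regularization): only the part of B outside A acts as the
    # eraser, so A stays intact and the target equals the input mask,
    # teaching the net not to over-complete unoccluded instances.
    return mask_a, mask_b & ~mask_a, mask_a
```

Random switching between the two branches, governed by the probability of case 1, is what forces the network to infer from shape cues whether completion is warranted.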
PCNet-C for Content Completion.
PCNet-C follows a similar intuition to PCNet-M, but the target to complete is RGB content. As shown in Fig. 3 (b), the input instances A and B are the same as those for PCNet-M. Image pixels in the region $M_A \cap M_B$ are erased, and PCNet-C aims at predicting the missing content. Besides, PCNet-C also takes in the remaining mask of A, i.e., $M_A \setminus M_B$, to indicate that it is A, rather than other objects, that is to be painted. Hence, PCNet-C cannot simply be replaced by standard image inpainting approaches. The loss of PCNet-C to minimize is formulated as follows:
$\mathcal{L} = \ell\big(\mathcal{P}_C(I \setminus (M_A \cap M_B),\; M_A \setminus M_B,\; M_A \cap M_B;\, \theta),\; I\big),$
where $\mathcal{P}_C$ is our PCNet-C network, $I$ is the image patch, and $\ell$ represents a loss function consisting of common losses in image inpainting, including $\ell_1$, perceptual, and adversarial losses. Similar to PCNet-M, training PCNet-C via partial completion enables full completion of the instance content at test time.
3.2 Dual-Completion for Ordering Recovery
The target ordering graph is composed of pairwise occlusion relationships between all neighboring instance pairs. A neighboring instance pair is defined as two instances whose modal masks are connected, so that one of them possibly occludes the other. As shown in Fig. 4, given a pair of neighboring instances $A_1$ and $A_2$ with modal masks $M_1$ and $M_2$, we first regard $M_1$ as the target to complete, with $A_2$ serving as the eraser, and measure the increment of $A_1$, i.e., $\Delta_{1|2} = |\mathcal{P}_M(M_1, M_2, I) \setminus M_1|$. Symmetrically, we also obtain the increment of $A_2$ conditioned on $A_1$, i.e., $\Delta_{2|1} = |\mathcal{P}_M(M_2, M_1, I) \setminus M_2|$. The instance gaining a larger increment under partial completion is supposed to be the ‘‘occludee’’. Hence, we infer the order between $A_1$ and $A_2$ by comparing their incremental areas:
$A_2 \mapsto A_1$ if $\Delta_{1|2} > \Delta_{2|1}$, and $A_1 \mapsto A_2$ otherwise, where $A_i \mapsto A_j$ indicates that $A_i$ occludes $A_j$. If $A_1$ and $A_2$ are not neighboring, no edge is created between them. Note that in practice the probability of $\Delta_{1|2} = \Delta_{2|1}$ is zero, so ties do not need to be specifically considered. Performing Dual-Completion on all neighboring pairs gives the scene occlusion ordering, which can be represented as a directed graph, as shown in Fig. 2. The nodes of the graph represent objects, while the edges indicate the directions of occlusion between neighboring objects. Note that the graph is not necessarily acyclic, as shown in Fig. 7.
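The Dual-Completion rule can be sketched as follows, assuming masks are boolean arrays and `partial_complete(target, eraser)` stands in for PCNet-M; the wrap-around `np.roll` dilation is a simplification used only for the adjacency test.

```python
import numpy as np

def dilate(mask):
    """1-pixel dilation via axis shifts (border wrap-around is ignored
    for this sketch)."""
    d = mask.copy()
    for axis in (0, 1):
        for shift in (1, -1):
            d |= np.roll(mask, shift, axis=axis)
    return d

def occlusion_order(m1, m2, partial_complete):
    """Dual-Completion: returns 1 if instance 2 occludes instance 1,
    -1 for the opposite, 0 if the two modal masks are not neighboring."""
    if not (dilate(m1) & m2).any():
        return 0
    inc1 = (partial_complete(m1, m2) & ~m1).sum()  # growth of instance 1
    inc2 = (partial_complete(m2, m1) & ~m2).sum()  # growth of instance 2
    # the instance that grows more under partial completion is the occludee
    return 1 if inc1 > inc2 else -1
```

Running this over every neighboring pair yields the directed occlusion graph; nothing in the rule prevents the result from containing cycles.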
3.3 Amodal and Content Completion
Ordering-Grounded Amodal Completion. After estimating the ordering graph, we can perform ordering-grounded amodal completion. Suppose we need to complete an instance $A_k$; we first find all ancestors of $A_k$ in the graph, i.e., the ‘‘occluders’’ of this instance, via breadth-first search (BFS). Since the graph is not necessarily acyclic, we adapt the BFS algorithm accordingly. Interestingly, we find that the trained PCNet-M generalizes to using the union of all ancestors as the eraser. Hence, we do not need to iterate over the ancestors and apply PCNet-M to partially complete the mask step by step. Instead, we perform amodal completion in one step, conditioned on the union of all ancestors’ modal masks. Denoting the set of ancestors of $A_k$ as $\mathcal{N}_k$, we perform amodal completion as follows:
$\hat{M}_k = \mathcal{P}_M\big(M_k,\; \textstyle\bigcup_{j \in \mathcal{N}_k} M_j,\; I\big) \cup M_k,$
where $\hat{M}_k$ is the predicted amodal mask.
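A cycle-tolerant BFS over the ordering graph might look like the following; the `{instance: list of its direct occluders}` encoding is an assumed representation, not the paper's data structure.

```python
from collections import deque

def ancestors(occluders_of, node):
    """Collect all (transitive) occluders of `node` via BFS.
    `occluders_of` maps each instance to the list of instances that
    directly occlude it. The `seen` set makes the search terminate
    even when the occlusion graph contains cycles."""
    seen, queue = set(), deque(occluders_of.get(node, []))
    while queue:
        p = queue.popleft()
        if p in seen or p == node:
            continue  # already expanded, or looped back to the start
        seen.add(p)
        queue.extend(occluders_of.get(p, []))
    return seen
```

The union of the modal masks of these ancestors then serves as the single eraser for the one-step amodal completion.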
Amodal-Constrained Content Completion. The previous steps yield the occlusion ordering graph and the predicted amodal mask of each instance. Next, we complete the occluded content. As shown in Fig. 5 (b), the intersection of the predicted amodal mask $\hat{M}_k$ and the union of the ancestors’ modal masks indicates the missing part of $A_k$, which is regarded as the eraser for PCNet-C. We then apply the trained PCNet-C to fill in the content as follows:
$\hat{C}_k = \mathcal{P}_C\big(I,\; M_k,\; \hat{M}_k \cap \textstyle\bigcup_{j \in \mathcal{N}_k} M_j\big),$
where $\hat{C}_k$ is the decomposed content of $A_k$ from the scene. For background content, we use the union of all foreground instances as the eraser. Different from image inpainting, which is unaware of occlusion, our content completion is performed on the estimated occluded regions.
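The eraser computations described above reduce to simple mask arithmetic; the function names are hypothetical.

```python
import numpy as np

def content_eraser(amodal_mask, ancestor_modal_masks):
    """Occluded region of an instance: its predicted amodal mask
    intersected with the union of its occluders' modal masks. This
    region is what PCNet-C is asked to fill with RGB content."""
    union = np.zeros_like(amodal_mask)
    for m in ancestor_modal_masks:
        union |= m
    return amodal_mask & union

def background_eraser(foreground_modal_masks, shape):
    """For the background, every foreground instance acts as the eraser."""
    union = np.zeros(shape, dtype=bool)
    for m in foreground_modal_masks:
        union |= m
    return union
```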
4 Experiments
We now evaluate our method on various applications, including ordering recovery, amodal completion, amodal instance segmentation, and scene manipulation. Implementation details and more qualitative results can be found in the supplementary materials.
Datasets. 1) KINS , derived from KITTI , is a large-scale traffic dataset with annotated modal and amodal masks of instances. PCNets are trained on the training split (7,474 images, 95,311 instances) with modal annotations. We test our de-occlusion framework on the testing split (7,517 images, 92,492 instances). 2) COCOA  is a subset of COCO2014  annotated with pair-wise ordering, modal, and amodal masks. We train PCNets on the training split (2,500 images, 22,163 instances) using modal annotations and test on the validation split (1,323 images, 12,753 instances). Instance categories are unavailable for this dataset; hence, we set the category id to a constant 1 when training PCNets on it.
4.1 Comparison Results
Ordering Recovery. We report ordering recovery performance on COCOA and KINS in Table 1. We reproduce the OrderNet proposed in  to obtain the supervised results. Baselines include sorting neighboring instance pairs by Area (we optimize this heuristic per dataset: a larger instance is treated as the front object for KINS, and the opposite for COCOA), by Y-axis (an instance closer to the image bottom is in front), and by a Convex prior. For the Convex baseline, we compute the convex hull of the modal masks to approximate amodal completion, and the object with the larger increment is regarded as the occludee. All baselines have been tuned to their respective best performance. On both benchmarks, our method achieves much higher accuracy than the baselines, comparable to the supervised counterparts. An interesting case is shown in Fig. 7, where four objects overlap circularly. Since our ordering recovery algorithm recovers pairwise rather than sequential ordering, it is able to solve this case and recover the cyclic directed graph.
Table 1: Ordering recovery results (columns: method | gt order (train) | COCOA | KINS).
Amodal Completion. We first introduce the baselines. For the supervised method, amodal annotation is available, and a UNet is trained to predict amodal masks from modal masks end-to-end. Raw means no completion is performed. Convex computes the convex hull of the modal mask as the amodal mask. Since the convex hull usually leads to over-completion, i.e., extending the visible mask, we improve this baseline by using the predicted order to refine the convex hull, constituting a stronger baseline: ConvexR, which performs well for naturally convex objects. Ours (NOG) represents non-ordering-grounded amodal completion, which relies on our PCNet-M but regards all neighboring objects as the eraser rather than using the occlusion ordering to find the ancestors. Ours (OG) is our ordering-grounded amodal completion method.
We evaluate amodal completion on ground truth modal masks, as shown in Table 2. Our method surpasses the baseline approaches and is comparable to the supervised counterpart. The comparison between OG and NOG shows the importance of ordering in amodal completion. As shown in Fig. 9, some of our results are potentially more natural than the manual annotations.
Apart from using ground truth modal masks as the input at test time, we also verify the effectiveness of our approach with predicted modal masks as the input. Specifically, we train a UNet to predict modal masks from an image. In order to correctly match the modal masks with the corresponding ground truth amodal masks in evaluation, we use the bounding box as an additional input to this network. We predict the modal masks on the testing set, yielding 52.7% mAP against the ground truth modal masks. Using these predicted modal masks as the input for amodal completion, our approach still achieves high performance, comparable to the supervised counterpart, as shown in Table 3.
Table 2: Amodal completion results (columns: method | amodal (train) | KINS %mIoU).
Table: Amodal instance segmentation results (columns: Ann. source | modal (train) | amodal (train) | %mAP).
Label Conversion for Amodal Instance Segmentation. Amodal instance segmentation aims at detecting instances and predicting their amodal masks from images simultaneously. With our approach, one can convert an existing dataset with modal annotations into one with pseudo amodal annotations, thus allowing amodal instance segmentation networks to be trained without manual amodal annotations. This is achieved by training PCNet-M on the modal masks of the training split and applying our amodal completion algorithm on the same split to obtain the corresponding amodal masks, as shown in Fig. 8. To evaluate the quality of the pseudo amodal annotations, we train a standard Mask R-CNN  for amodal instance segmentation following the setting in . All baselines follow the same training protocol, except that the amodal annotations used for training differ. As shown in Table 4, using our inferred amodal bounding boxes and masks, we achieve the same performance (mAP 29.3%) as the model trained on manual amodal annotations. Besides, our inferred amodal masks on the training set are highly consistent with the manual annotations (mIoU 95.22%). These results suggest the high applicability of our method for obtaining reliable pseudo amodal mask annotations, relieving the burden of manual annotation on large-scale instance-level datasets.
4.2 Application on Scene Manipulation
Our scene de-occlusion framework allows us to decompose a scene into the background and isolated completed objects, along with an occlusion ordering graph. Therefore, manipulating scenes by controlling order and positions is made possible. Fig. 10 shows scene synthesis by controlling order only. Fig. 11 shows more manipulation cases, indicating that our de-occlusion framework, though trained without any extra information compared to the baseline, enables high-quality occlusion-aware manipulation.
5 Conclusion
To summarize, we have proposed a unified scene de-occlusion framework equipped with self-supervised PCNets trained without ordering or amodal annotations. The framework is applied progressively to recover occlusion orderings and then perform amodal and content completion. It achieves performance comparable to fully-supervised counterparts on real-world datasets. It is also applicable to converting existing modal annotations into amodal annotations; quantitative results show that the converted annotations are as effective as manual ones. Furthermore, our framework enables high-quality occlusion-aware scene manipulation, providing a new dimension for image editing.
Acknowledgement: This work is supported by the SenseTime-NTU Collaboration Project, Collaborative Research grant from SenseTime Group (CUHK Agreement No. TS1610626 & No. TS1712093), Singapore MOE AcRF Tier 1 (2018-T1-002-056), NTU SUG, and NTU NAP.
References
-  (2019) Hybrid task cascade for instance segmentation. In CVPR, pp. 4974–4983.
-  (2018) Optimizing video object detection via a scale-time lattice. In CVPR.
-  (2017) DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. PAMI 40 (4), pp. 834–848.
-  (2016) Instance-sensitive fully convolutional networks. In ECCV, pp. 534–549.
-  (2018) SeGAN: segmenting and generating the invisible. In CVPR, pp. 6144–6153.
-  (2010) Cascade object detection with deformable part models. In CVPR, pp. 2241–2248.
-  (2019) Learning to see the invisible: end-to-end trainable amodal instance segmentation. In WACV, pp. 1328–1336.
-  (2013) Vision meets robotics: the KITTI dataset. The International Journal of Robotics Research 32 (11), pp. 1231–1237.
-  (2017) Mask R-CNN. In ICCV.
-  (2007) Recovering occlusion boundaries from a single image. In ICCV.
-  (2019) SAIL-VOS: semantic amodal instance level video object segmentation, a synthetic dataset and baselines. In CVPR, pp. 3105–3115.
-  (1979) Organization in vision: essays on Gestalt perception. Praeger Publishers.
-  (2015) Amodal completion and size constancy in natural scenes. In ICCV, pp. 127–135.
-  (2003) Euler spiral for shape completion. IJCV 54 (1-3), pp. 159–182.
-  (2019) Learning instance occlusion for panoptic segmentation. arXiv preprint arXiv:1906.05896.
-  (1999) Gestalt isomorphism and the quantification of spatial perception. Gestalt Theory 21, pp. 122–139.
-  (2016) Amodal instance segmentation. In ECCV, pp. 677–693.
-  (2016) A computational model of topological and geometric recovery for visual curve completion. Computational Visual Media 2 (4), pp. 329–342.
-  (2014) Microsoft COCO: common objects in context. In ECCV.
-  (2018) Image inpainting for irregular holes using partial convolutions. In ECCV.
-  (2019) An end-to-end network for panoptic segmentation. In CVPR, pp. 6172–6181.
-  (2015) Semantic image segmentation via deep parsing network. In ICCV.
-  (1999) Vision science: photons to phenomenology. MIT Press.
-  (2019) Seeing behind things: extending semantic segmentation to occluded regions. arXiv preprint arXiv:1906.02885.
-  (2019) Amodal instance segmentation with KINS dataset. In CVPR, pp. 3014–3023.
-  (2015) Faster R-CNN: towards real-time object detection with region proposal networks. In NIPS.
-  (2015) U-Net: convolutional networks for biomedical image segmentation. In MICCAI, pp. 234–241.
-  (2014) A contour completion model for augmenting surface reconstructions. In ECCV.
-  (2014) Scene parsing with object instances and occlusion ordering. In CVPR, pp. 3748–3755.
-  (2019) CARAFE: content-aware reassembly of features. In ICCV.
-  (2019) Region proposal by guided anchoring. In CVPR.
-  (2017) Neural scene de-rendering. In CVPR.
-  (2017) Pyramid scene parsing network. In CVPR, pp. 2881–2890.
-  (2017) Semantic amodal segmentation. In CVPR, pp. 1464–1472.
Appendix A Implementation Details
In our experiments, the backbone for PCNet-M is UNet  with a widening factor of 2, and that for PCNet-C is a UNet equipped with partial convolution layers ; note that PCNets place no restrictions on backbone architectures. For both PCNets, the image or mask patches centered on an object are cropped with an adaptive square and resized to 256×256 as inputs.
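A possible form of the adaptive square crop; the enlargement `margin` is a hypothetical factor, since the text only specifies a square crop centered on the object, resized to 256×256.

```python
import numpy as np

def adaptive_square_crop(mask, margin=0.5):
    """Square crop window centered on the object's bounding box, enlarged
    by `margin` (assumed value). Returns (y0, y1, x0, x1); the window may
    extend beyond the image and would then be padded before resizing."""
    ys, xs = np.nonzero(mask)
    cy = (ys.min() + ys.max()) // 2
    cx = (xs.min() + xs.max()) // 2
    size = max(ys.max() - ys.min() + 1, xs.max() - xs.min() + 1)
    half = int(round(size * (1 + margin) / 2))
    return cy - half, cy + half, cx - half, cx + half
```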
For COCOA, the PCNet-M is trained using SGD for 56K iterations with an initial learning rate of 0.001, decayed by 0.1 at iterations 32K and 48K. For KINS, we stop training earlier, at 32K iterations. The batch size is 256, distributed over 8 GPUs (GTX 1080 Ti). The hyper-parameter that balances the two cases in training PCNet-M is set to . In the current experiments, we do not use RGB as an input to PCNet-M, since we empirically find that introducing RGB through concatenation makes little difference. This is probably because, for these two datasets, modal masks are informative enough for training; we believe that in more complicated scenes, RGB would exert more influence if introduced in a better way.
For PCNet-C, we modify the UNet to take in the concatenation of image and modal mask as the input.
Apart from the losses in , we add an extra adversarial loss for optimization. The discriminator is a stack of 5 convolution layers with spectral normalization and leaky ReLU activations. The PCNet-C is fine-tuned for 450K iterations with a constant learning rate from a pre-trained inpainting network , adapting the pre-trained weights to be compatible with the additional modal mask input.
Appendix B Discussions
B.1 Analysis on varying occlusion ratios.
Fig. 12 shows the amodal completion performance of different approaches under varying ratios of occluded area. Naturally, larger occlusion ratios result in lower performance. Under high occlusion ratios, our full method (Ours (OG)) surpasses the baseline methods by a large margin.
B.2 Does it support mutual occlusion?
As a drawback, our approach does not support cases where two objects mutually occlude each other, as shown in Fig. 13, because our approach focuses on object-level de-occlusion. For mutual occlusions, the ordering graph cannot be defined, and fine-grained boundary-level de-occlusion is therefore required. This remains an open question for the scene de-occlusion problem. Nonetheless, our approach works well when more than two objects are cyclically occluded, as shown in Fig. 7 of the main paper.
B.3 Will case 2 mislead PCNet-M?
As shown in Fig. 14, one may be concerned that in case (a-2), when the not-to-complete strategy is applied, the boundary between the target instance A and the surrogate eraser B might include a contour (shown in green) where A is occluded by a real object. It might therefore teach PCNet-M a wrong lesson if the yellow shaded region is taught not to be filled.
Here we explain why this does not teach PCNet-M the wrong lesson. First of all, PCNet-M learns to complete or not to complete the target object conditioned on a surrogate occluder. As shown in Fig. 14, since PCNet-M is taught to complete A in (a-1) while not to complete it in (a-2), it has to discover cues indicating that A is below B in (a-1) and above B in (a-2). The cues might include the shapes of the two objects, the shape of their common boundary, junctions, etc. At testing time, e.g., in (b), when regarding the real occluder as the condition, it is easy for PCNet-M to tell from those cues that the occluder is above A. PCNet-M therefore inclines to case 1, and A will be completed conditioned on the occluder.
Then which cases does the not-to-complete strategy affect? The case in (c) shares very similar occlusion patterns with (a-2), especially in the upper-right part of the common boundary, showing strong cues that A is above B, in which case PCNet-M will not complete A, as expected. However, case (c) is abnormal and unlikely to exist in the real world. The situation where the not-to-complete strategy really takes effect is case (d): when strong cues indicate that A is above B, PCNet-M is taught not to extend A across the boundary to invade B.
Appendix C Visualization
As shown in Fig. 15, our approach enables us to freely adjust scene spatial configurations to re-compose new scenes. The quality could be further improved with advances in image inpainting, since PCNet-C shares a similar network architecture and training strategy with image inpainting models.