PT2PC: Learning to Generate 3D Point Cloud Shapes from Part Tree Conditions
3D generative shape modeling is a fundamental research area in computer vision and interactive computer graphics, with many real-world applications. This paper investigates the novel problem of generating 3D shape point cloud geometry from a symbolic part tree representation. In order to learn such a conditional shape generation procedure in an end-to-end fashion, we propose a conditional GAN "part tree"-to-"point cloud" model (PT2PC) that disentangles the structural and geometric factors. The proposed model incorporates the part tree condition into the architecture design by passing messages top-down and bottom-up along the part tree hierarchy. Experimental results and a user study demonstrate the strengths of our method in generating perceptually plausible and diverse 3D point clouds, given the part tree condition. We also propose a novel structural measure for evaluating whether the generated shape point clouds satisfy the part tree conditions.
Learning to recognize, reconstruct and synthesize 3D objects is a fundamental goal of computer vision [46, 47]. Although advances have been made in understanding the structural and semantic aspects (e.g., key-points [66], shape attributes [13], and part segments [55, 51]
) of 3D objects using deep neural networks, the inverse problem of re-synthesizing objects from intermediate representations reflecting structural and semantic aspects is relatively under-explored. Extant work on holistic 3D shape generation does not explicitly consider part semantic and structural information in the generation process. In contrast, a structure-conditioned 3D generative model enables many real-world applications in computer vision and graphics, ranging from generative shape completion [26, 11] and interactive modeling [31, 95] to correspondence-based shape retrieval [14, 15, 74]. From the perspective of graphics design, 3D object creation involves both coarse-level structural design and fine-grained geometric generation [39, 54, 30]. Specifically, structural design choices reflect the discrete (e.g. the number of chair legs supporting the seat) and compositional (e.g. four chair legs connected by horizontal bars form the base) aspects of 3D shapes, while geometric design focuses on the continuous nature of 3D shapes (e.g. a novel shape can be created by interpolating between two similar shapes). Indeed, understanding and modeling a 3D object shape is challenging as generally
structural and geometric factors are highly entangled. In this paper, we are interested in generating 3D point clouds with geometric variations conditioned on a structural shape description (see Figure 1). For example, we may want to generate multiple 3D geometries with different styles satisfying the description "a chair with a back and seat supported by four legs and two arms". This aligns well with traditional product design protocols, where a conceptual design is produced first to constrain the overall product structure before the actual geometry is determined. Naturally, such conditioning reduces the ambiguity due to structural and geometric entanglement and encourages more fine-grained and controllable 3D shape generation, thus supporting many real-world applications, including structure-conditioned shape design [51, 49] and structure-aware shape re-synthesis [14]. To capture both discrete and compositional aspects, we represent the structural condition using a symbolic part-tree representation [51]. More specifically, each shape instance is segmented into shape parts organized in a hierarchical tree structure, where each part node in the tree corresponds to a 3D shape part of the instance. Based on these ideas, we propose to learn a 3D point cloud generation model conditioned on a provided symbolic part tree.
We approach the learning problem with a novel conditional generative adversarial network “part tree”-to-“point cloud” (PT2PC), where the part-tree hierarchy is incorporated into both the generator and discriminator architectures. More specifically, our generator first encodes the part tree template feature using semantic and structural information for each part node in a bottom-up
fashion along the tree hierarchy. To generate 3D point clouds with diverse geometry, we obtain a random variable capturing the global geometry information at the root node and propagate such geometric information to each part node in a
top-down fashion along the part tree. The final point cloud is generated by aggregating the point clouds decoded at each leaf node, each representing its corresponding fine-grained semantic part. Our discriminator computes per-part features at the leaf level, propagates the information in a bottom-up fashion along the tree hierarchy until the root node, and finally produces a score judging whether the generated shape geometry looks realistic and the shape structure satisfies the input condition. Figure 1 shows our task input and output. We evaluate the proposed model on four major semantic classes from the PartNet dataset. To justify the merits of our tree-structured architecture for both the generator and discriminator, we compare with two conditional GAN baselines. Both quantitative and qualitative results demonstrate clear advantages of our design in terms of global shape quality, part shape quality, and shape diversity, under both seen and unseen templates as the condition. Results from a human evaluation agree with our observations in the experiments and further strengthen our claims. Additionally, we propose a novel hierarchical part instance segmentation method that is able to segment an input point cloud without any part labels into a symbolic part tree. This provides a metric to evaluate how well our generated shape geometry satisfies the part tree conditions.
In summary, our contributions are:
we formulate the novel task of part-tree conditioned point cloud generation;
we propose a conditional GAN method, PT2PC, that generates realistic and diverse point cloud shapes given symbolic part tree conditions;
we demonstrate superior performance both quantitatively and qualitatively under standard GAN metrics and a user study, comparing against two baseline conditional-GAN methods;
we propose a novel point cloud structural evaluation metric for evaluating if the generated point clouds satisfy the part tree conditions.
We review related works on 3D generative shape modeling, part-based shape modeling and structure-conditioned content generation.
Reconstructing and synthesizing 3D shapes is a popular research topic in computer vision and graphics. In the past few years, tremendous progress has been made in generating 3D shapes represented as voxel grids [9, 20, 79, 80, 83, 87, 6], point clouds [1, 11, 16, 23, 32, 22, 89, 88, 98], and surface meshes [62, 21, 37, 43, 45, 68, 72, 75, 100] using deep neural networks. Representing the target shapes as 3D voxel occupancy grids, Wu et al. [83] introduced the first 3D deep generative model using stacked Restricted Boltzmann Machines (RBMs) with volumetric convolutions. Building upon 3D volumetric convolutions and generative adversarial networks (GANs), Wu et al. [80] developed a 3D-GAN model for synthesizing 3D voxel grids from a low-dimensional latent space. Despite the success in generating 3D voxels, the study of GANs for 3D point cloud generation is relatively under-explored. Unlike a 3D voxel grid, a point cloud is a collection of unordered points irregularly distributed in 3D space, which makes the minimax optimization very challenging [41, 1]. Achlioptas et al. [1] proposed a latent-GAN approach that conducts minimax optimization in the shape feature space, which outperforms the raw-GAN operating on raw point clouds. To better capture the local geometric structure of point clouds, Valsesia et al. [73] proposed a graph-based generator that dynamically computes the neighbors on the graph using feature similarity during GAN training. However, the computation cost is quadratic in the number of nodes at each layer, which makes it difficult to scale up. Very recently, Shu et al. [61] proposed Tree-GAN with a novel tree-structured graph convolutional neural network as the generator. Unlike these shape point cloud GAN works that generate shapes without explicit part semantic and structural constraints, we study the task of conditional point cloud generation. Given a symbolic part tree as a condition, we learn to generate shapes with variations that satisfy the conditional part structure.
There is a line of research on understanding shapes by their semantic parts and structures. Previous works study 3D shape part segmentation [7, 36, 91, 34, 55, 92, 77, 51, 10], generate shape box abstractions [71, 99, 53, 64], fit shape templates [40, 17, 19], generate shapes by explicitly modeling parts and structures [35, 43, 65, 76, 82, 69, 49, 81, 18, 60, 42], or edit shapes by parts [12, 95, 50]. We refer to the survey papers [85, 48] for more related works. Shape parts have hierarchical structures [78, 74, 51]. Yi et al. [90] learn consistent part hierarchies from noisy online tagged shape parts. GRASS [43] proposes binary part trees to generate novel shapes. A follow-up work [94] learns to segment shapes into the binary part hierarchy. PartNet [51] provides a large-scale 3D model dataset with hierarchical part segmentation. Using PartNet, recent works such as StructureNet [49] and StructEdit [50] learn to generate and edit shapes explicitly following the pre-defined part hierarchy. We use the tree hierarchy defined in PartNet [51] and propose a new task, PT2PC, that learns to generate point cloud shapes with variations satisfying a symbolic part tree condition.
Understanding the 3D visual world, i.e. parsing the geometric and structural properties of 3D primitives (e.g. objects in a scene or parts of an object) and their relationships, is at the core of computer vision [25, 84, 33, 70]. Many works learn to synthesize high-quality images from text descriptions [38, 58, 57, 96, 93, 44, 67], semantic attributes [86, 8], scene-graph representations [33, 3], and rough object layouts [29, 28, 97, 52]. There are also works that generate 3D content from certain input conditions. Chang et al. [5, 4] learn to generate 3D scenes from text. Chen et al. [6] studied how to generate 3D voxel shapes from a sentence condition. StructEdit [50] learns to generate structural shape variations conditioned on an input source shape. Our work introduces a conditional generative adversarial network that generates shape point clouds conditioned on an input symbolic part tree structure.
In this work, we propose PT2PC, a conditional GAN (c-GAN) that learns a mapping from a given symbolic part tree T and a random noise vector z to a 3D shape point cloud composed of part point clouds for the leaf nodes of the conditional part tree. Different from previous point cloud GAN works that generate shape point clouds in a holistic manner, we choose a part-based generation philosophy: simply taking the union of the generated part point clouds renders the final shape point cloud output. The symbolic part tree defines a part hierarchy that describes how a shape is organized by various part instances at different granularities. Figure 2 shows a part tree of an exemplar chair. We propose a novel part-based conditional point cloud generator and discriminator conditioned on the symbolic part tree input T. We organically incorporate the tree condition into the architectures of our proposed conditional generator and discriminator by leveraging Recursive Neural Networks (RvNN) [63]. Moreover, different from holistic point cloud GANs [1, 73, 61] that generate and discriminate shape point clouds only, our proposed generator generates points along with their part labels, and our discriminator consumes both the generated point cloud geometry and the part semantic labels. We follow the semantic part hierarchy defined in PartNet [51]. Every PartNet shape instance (e.g. a chair) is annotated with a hierarchical part segmentation that provides both coarse-level parts (e.g. chair base, chair back) and parts at fine-grained levels (e.g. chair leg, chair back bar). Figure 2 shows the ground-truth part hierarchy of an exemplar chair.
A symbolic part tree is defined as T = (V, E), where V represents a set of part instances and E represents a directed edge set of the part parent-children relationships. In V, each part instance v is composed of two components: a semantic label s_v (e.g. chair seat, chair back) and a part instance identifier i_v (e.g. the first leg, the second leg), both of which are represented as one-hot vectors. The set of part semantic labels is pre-defined in PartNet. In E, each edge (u, v) indicates that u is the parent node of v. The set C(u) defines all children part instances of a node u. We denote a special part node v_root to be the root node of the part tree T, with its own semantic label and instance identifier. A leaf node v of the symbolic part tree has no children, namely C(v) = ∅.
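The symbolic part-tree representation above can be sketched as a small data structure. This is a minimal illustration, not the authors' code: the class name `PartNode`, the string labels, and the storage of one-hot labels as plain indices/strings are our assumptions.

```python
class PartNode:
    """One part instance in a symbolic part tree: a semantic label and a
    part instance identifier (one-hot vectors in the paper; stored here as
    plain values for clarity), plus a list of children nodes."""
    def __init__(self, semantic, instance_id, children=None):
        self.semantic = semantic          # e.g. "chair_seat"
        self.instance_id = instance_id    # e.g. 0 for the first leg
        self.children = children or []

    def is_leaf(self):
        # A leaf node has no children: C(v) is empty.
        return len(self.children) == 0

    def leaves(self):
        """Collect the leaf nodes in depth-first order; the generator only
        decodes point clouds at these nodes."""
        if self.is_leaf():
            return [self]
        return [l for c in self.children for l in c.leaves()]

# A minimal chair: root -> {back, seat, base -> four legs}.
legs = [PartNode("chair_leg", i) for i in range(4)]
chair = PartNode("chair", 0, [
    PartNode("chair_back", 0),
    PartNode("chair_seat", 0),
    PartNode("chair_base", 0, legs),
])
```

Taking the union of part point clouds generated at `chair.leaves()` would then yield the full shape.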
Our conditional generator G takes a random Gaussian variable z and a symbolic part tree condition T as inputs and outputs a set of part point clouds {S_v}, where S_v is a part point cloud in the shape space representing the leaf node part v. Namely,
{S_v : v is a leaf of T} = G(z, T)    (1)
The generator is composed of three network modules: a part tree encoder, a part tree feature decoder, and a part point cloud decoder. First, the part tree encoder embeds the nodes of T into compact features f_v, hierarchically from the leaf nodes to the root node, for every node v. Then, taking in both the random variable z and the hierarchy of part features {f_v}, the part tree feature decoder hierarchically decodes part features g_v in the opposite direction, from the root node to the leaf nodes, and finally produces a part feature g_v for every leaf node v. Finally, the part point cloud decoder transforms the leaf node features into 3D part point clouds in the shape space.
Our part tree decoder is inspired by Recursive Neural Networks (RvNN) [63, 49], which propagate the random noise from the root node to the leaf nodes in a top-down fashion. We further note that, at each step of the part feature decoding, the parent node needs to know the global structure context in order to propagate coherent signals to all of its children so that the generated part point clouds can form a valid shape. Thus, we introduce the part tree encoder as a bottom-up module to summarize the part tree structure and provide a global context feature for the feature decoding.
For a given symbolic part tree T, we encode the nodes of the part tree starting from the leaf nodes and propagate the messages to the parent nodes of the encoded nodes until the root node gets encoded. The message propagation is performed in a bottom-up fashion. As shown in Eq. 2, each node v takes the node feature f_u, the semantic label s_u, and the part instance identifier i_u from all its children u ∈ C(v), aggregates the information, computes its node feature f_v using a learned function f_enc, and then propagates a message to its parent node. We initialize f_v = 0 for every leaf node v. Then,
f_v = f_enc({[f_u; s_u; i_u] : u ∈ C(v)})    (2)
where [·; ·] denotes a concatenation of the inputs. f_enc is implemented as a small PointNet [55] to enforce permutation invariance between children nodes. We first use a fully-connected layer to embed each [f_u; s_u; i_u] into a hidden feature, then perform a max-pooling over the features of the children nodes to obtain an aggregated feature, and finally push the aggregated feature through another fully-connected layer to obtain the final parent node feature f_v. We use leaky ReLU as the activation function in our fully-connected layers.
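The bottom-up aggregation step can be sketched in NumPy as follows. This is a simplified illustration, not the paper's implementation: random matrices stand in for learned weights, and the feature sizes (256-dim features, 24 semantic classes, up to 10 instance slots) and the name `f_enc` are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
F, S, I, H = 256, 24, 10, 256   # feature / semantic / identifier / hidden sizes (assumed)

# Random weights stand in for learned fully-connected layers.
W1 = rng.standard_normal((F + S + I, H)) * 0.01
W2 = rng.standard_normal((H, F)) * 0.01

def leaky_relu(x, slope=0.01):
    return np.where(x > 0, x, slope * x)

def encode_parent(child_feats, child_sems, child_ids):
    """f_enc: embed each child's [feature; semantic; id] with one FC layer,
    max-pool over children (permutation invariant), then apply a second FC
    layer to obtain the parent node feature."""
    x = np.concatenate([child_feats, child_sems, child_ids], axis=1)  # (n, F+S+I)
    per_child = leaky_relu(x @ W1)                                    # (n, H)
    pooled = per_child.max(axis=0)                                    # (H,)
    return leaky_relu(pooled @ W2)                                    # (F,)

# Example: a parent with 4 children (e.g. four legs under "chair base").
n = 4
f = np.zeros((n, F))                      # leaf features are initialized to zero
s = np.eye(S)[rng.integers(0, S, n)]      # one-hot semantic labels
i = np.eye(I)[np.arange(n)]               # one-hot instance identifiers
parent_feat = encode_parent(f, s, i)
```

The max-pooling is what makes the parent feature invariant to the order in which the children are listed.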
Taking in the random variable z and the encoded node features {f_v}, we hierarchically decode the part features g_v from the root node to the leaf nodes in a top-down fashion along the given part tree structure T. As shown in Eq. 3, for every part v, we learn a shared function f_dec transforming the concatenation of its own features (f_v, s_v, i_v) and the decoded feature g_{p(v)} from its parent node p(v) into the part feature g_v. For the root node, we use the random noise z in place of the parent node feature.
g_v = f_dec([f_v; s_v; i_v; g_{p(v)}])    (3)
We implement f_dec as a two-layer MLP with leaky ReLU as the activation function. The output feature size is 256.
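One top-down decoding step can be sketched as below. Again this is an illustrative NumPy sketch with random weights, not the authors' code; the hidden size 512 and the input sizes are our assumptions (only the 256-dim output is stated in the text).

```python
import numpy as np

rng = np.random.default_rng(1)
F, S, I, Z, OUT = 256, 24, 10, 256, 256   # sizes are assumptions, except OUT=256

# Two-layer MLP weights (randomly initialized stand-ins for learned ones).
Wa = rng.standard_normal((F + S + I + Z, 512)) * 0.01
Wb = rng.standard_normal((512, OUT)) * 0.01

def leaky_relu(x, slope=0.01):
    return np.where(x > 0, x, slope * x)

def decode_node(f_v, s_v, i_v, parent_decoded):
    """f_dec: concatenate a node's own encoded feature, semantic label, and
    instance identifier with its parent's decoded feature, then apply a
    two-layer MLP. For the root node, `parent_decoded` is the noise z."""
    x = np.concatenate([f_v, s_v, i_v, parent_decoded])
    return leaky_relu(leaky_relu(x @ Wa) @ Wb)

z = rng.standard_normal(Z)  # the root receives the Gaussian noise vector
root_dec = decode_node(np.zeros(F), np.eye(S)[0], np.eye(I)[0], z)
child_dec = decode_node(rng.standard_normal(F), np.eye(S)[1], np.eye(I)[0], root_dec)
```

Each child reuses the same shared `decode_node`, so the tree is traversed top-down by repeatedly feeding a parent's decoded feature to its children.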
Given the part features g_v of all the leaf nodes v, our point cloud decoder f_pc transforms each individual feature into a 3D part point cloud S_v in the shape space for every leaf node v, as shown in Eq. 4. To get the final shape point cloud S, we down-sample the union of all part point clouds. We generate the same number of points for every part.
S_v = f_pc(g_v),    S = Downsample(∪_v S_v)    (4)
f_pc is designed to deform a fixed surface point cloud of a unit cube into our target part point cloud based on its input feature g_v, inspired by the shape decoder introduced in Groueix et al. [23]. We uniformly sample a point cloud of 1000 points from the surface of a unit cube to form the fixed input. Then, for each point, we concatenate its XYZ coordinate with the feature g_v, push it through an MLP using leaky ReLU, and finally obtain an XYZ coordinate for a point on our target point cloud. Finally, we use Furthest Point Sampling (FPS) for our Downsample operation to obtain the shape point cloud S.
Compared to existing works [1, 73, 61] that generate shape point clouds as a whole, the key difference here is that our point cloud decoder generates part point clouds for every leaf node in the part tree separately, but in a manner aware of the inter-part structure and relationships. Another big advantage is that we obtain the semantic label of each generated part point cloud. Furthermore, we observe that holistic point cloud generators usually suffer from non-uniform point distribution: they tend to allocate far more points to bulky parts (e.g., chair back and chair seat) while generating only sparse points for small parts with thin geometry (e.g., chair wheel, chair back bar). Since our decoder generates the same number of points for each part and then performs global down-sampling, we can generate shape point clouds with fine structures and appropriate point density for all the parts.
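The deform-and-downsample pipeline can be sketched as follows. This is a minimal NumPy sketch under stated assumptions: a single linear layer stands in for the decoding MLP, the cube-sampling routine is a simple stand-in for the paper's fixed sampling, and only the 1000 points per part are from the text.

```python
import numpy as np

rng = np.random.default_rng(2)
F, N_CUBE = 256, 1000

def cube_surface_points(n, rng):
    """Sample n points on the surface of a unit cube: pick a face, draw a
    uniform point, snap the face's axis to 0 or 1 (an illustrative stand-in
    for the paper's fixed uniform sampling)."""
    pts = rng.uniform(0, 1, (n, 3))
    face = rng.integers(0, 6, n)
    pts[np.arange(n), face % 3] = (face // 3).astype(float)
    return pts

Wd = rng.standard_normal((3 + F, 3)) * 0.01  # one linear layer stands in for the MLP

def decode_part(feat, base_pts):
    """Map each [xyz; part feature] to a deformed xyz on the target part."""
    x = np.concatenate([base_pts, np.tile(feat, (len(base_pts), 1))], axis=1)
    return x @ Wd

def farthest_point_sampling(pts, k, rng):
    """Greedy FPS: repeatedly take the point farthest from the chosen set."""
    idx = [rng.integers(len(pts))]
    d = np.linalg.norm(pts - pts[idx[0]], axis=1)
    for _ in range(k - 1):
        idx.append(int(d.argmax()))
        d = np.minimum(d, np.linalg.norm(pts - pts[idx[-1]], axis=1))
    return pts[idx]

base = cube_surface_points(N_CUBE, rng)
parts = [decode_part(rng.standard_normal(F), base) for _ in range(3)]  # 3 leaf parts
shape_pc = farthest_point_sampling(np.concatenate(parts), 2048, rng)
```

Because every part contributes the same number of points before the global FPS, thin parts keep a reasonable point density in `shape_pc`.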
Our conditional discriminator D receives a generated sample or a real data sample, composed of a set of part point clouds {S_v}, and outputs a scalar based on the tree condition T. Following WGAN-gp [2, 24], D is learned to be a 1-Lipschitz function, and its output depicts the realism of the sample.
Since the input always contains part point clouds for every leaf node part instance in the symbolic part tree T, our discriminator mainly focuses on judging the geometry of each part point cloud along with the whole shape point cloud assembled from the parts. That is to say, the discriminator should tell whether each part point cloud is realistic and plausible with regard to its part semantics; in addition, the discriminator needs to look at the spatial arrangement of the part point clouds and judge whether it follows a realistic structure specified by the part tree T, e.g. connected parts need to contact each other and some parts may exhibit certain kinds of symmetry; finally, the discriminator should judge whether the generated part point clouds form a realistic shape point cloud.
To address the requirements above, our discriminator leverages two modules: a structure-aware part point cloud discriminator D_struct and a holistic shape point cloud discriminator D_whole, where D = D_struct + D_whole. D_struct takes as input the part tree condition T and the generated set of part point clouds {S_v} and outputs a scalar regarding the tree-conditioned generation quality. D_whole only takes the down-sampled shape point cloud S as input and outputs a scalar regarding the unconditioned shape quality. As shown in Eq. 5, the final output of our discriminator is the sum of the two discriminators.
D({S_v}, T) = D_struct({S_v}, T) + D_whole(S)    (5)
We constitute the structure-aware part point cloud discriminator D_struct from three network components: a part point cloud encoder, a tree-based feature encoder, and a scoring network. First, the point cloud encoder encodes the part point cloud S_v into a part feature h_v for each leaf node v. Then, taking in the part features at the leaf level, the tree-based feature encoder recursively propagates the part features h along with the part semantics s to the parent nodes, starting from the leaf nodes and finally reaching the root node, in a bottom-up fashion. Finally, a scoring function outputs a score for the shape generation quality. The holistic shape point cloud discriminator D_whole is simply composed of a PointNet encoder and a scoring network that outputs a scalar.
Both point cloud encoders use the vanilla PointNet [55] architecture, without spatial transformer layers and batch normalization layers. For the part point cloud encoder, we learn a function f_part to extract a part geometry feature h_v for each part point cloud S_v:

h_v = f_part(S_v)    (6)

f_part is implemented as a four-layer MLP that processes each point individually, followed by a max-pooling. Similarly, the shape encoder f_shape takes a shape point cloud S as input and outputs a global shape feature h_S.
h_S = f_shape(S)    (7)
Similar to the part tree feature encoder in our generator, the tree feature encoder learns an aggregation function f_agg that transforms features from children nodes into the parent node feature, as shown in Eq. 8. By leveraging the tree structure specified by T in its architecture, this module enforces the structure-awareness of D_struct. In a bottom-up fashion, the features propagate from the leaf level to the root, yielding h_root, according to Eq. 8.
h_v = f_agg({[h_u; s_u] : u ∈ C(v)})    (8)
To implement f_agg, we extract a latent feature by applying a fully-connected layer over each input [h_u; s_u], perform max-pooling over all children, and finally push the pooled feature through another fully-connected layer to obtain h_v. We use leaky ReLU activation functions for both layers.
Note that the key difference between f_agg and the part feature encoder f_enc in our generator is that f_agg does not need part instance identifiers, since the feature of a parent node needs to be permutation invariant to the order of its children nodes for both generated and real samples.
After obtaining the structure-aware root feature h_root and the holistic PointNet feature h_S, we compute D_struct and D_whole as follows.
D_struct = f_score1(h_root),    D_whole = f_score2(h_S)    (9)
Both scoring functions are implemented as a simple fully-connected layer with no activation function.
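The two discriminator branches and their sum can be sketched compactly. This NumPy sketch is an illustration under assumed sizes (256-dim features, 24 semantic classes) with random weights; a single linear map stands in for the four-layer PointNet MLP, and the tree aggregation is shown for one parent with two leaf children.

```python
import numpy as np

rng = np.random.default_rng(3)
F, S = 256, 24

def leaky_relu(x, slope=0.01):
    return np.where(x > 0, x, slope * x)

Wp = rng.standard_normal((3, F)) * 0.01       # per-point embedding (PointNet-style)
Wa1 = rng.standard_normal((F + S, F)) * 0.01  # child [feature; semantic] -> hidden
Wa2 = rng.standard_normal((F, F)) * 0.01
w_struct = rng.standard_normal(F) * 0.01      # scoring heads: single linear layer,
w_whole = rng.standard_normal(F) * 0.01       # no activation

def pointnet_feat(pc):
    """Vanilla PointNet: per-point linear map + max-pool over points."""
    return leaky_relu(pc @ Wp).max(axis=0)

def aggregate(child_feats, child_sems):
    """f_agg: embed each child's [feature; semantic] (no instance identifier
    needed in the discriminator), max-pool over children, second FC layer."""
    x = np.concatenate([child_feats, child_sems], axis=1)
    return leaky_relu(leaky_relu(x @ Wa1).max(axis=0) @ Wa2)

# Toy sample: two leaf part point clouds under a single root node.
parts = [rng.standard_normal((100, 3)) for _ in range(2)]
sems = np.eye(S)[:2]
h_root = aggregate(np.stack([pointnet_feat(p) for p in parts]), sems)
h_shape = pointnet_feat(np.concatenate(parts))
score = h_root @ w_struct + h_shape @ w_whole   # D = D_struct + D_whole
```

The structure branch sees the tree-organized part features while the holistic branch sees only the assembled point cloud; summing the two scalar scores gives the final critic output.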
We follow WGAN-gp [2, 24] for training our PT2PC conditional generator G and discriminator D. The objective function is defined in Eq. 10.
min_G max_D  E_{{S_v} ~ p_real}[D({S_v}, T)] − E_z[D(G(z, T), T)] − λ E[(‖∇ D({S̃_v}, T)‖_2 − 1)^2]    (10)
where we interpolate each pair of corresponding part point clouds from a real set {S_v^real} and a fake set {S_v^fake} to get the interpolated set {S̃_v}, as shown below:
S̃_v = α S_v^real + (1 − α) S_v^fake    (11)
where α is a random interpolation coefficient that always remains the same for all parts of a shape. We iteratively train the generator and discriminator, weighting the gradient penalty term by λ. We assume the maximal number of children per parent node to be 10.
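The per-part interpolation of Eq. 11 and the shape of the WGAN-gp critic loss can be sketched as follows. This is an illustrative sketch: there is no autograd here, so the gradient norm enters as a plain number, and the penalty weight defaults to the common WGAN-gp value of 10, which is an assumption, not a value stated in this paper.

```python
import numpy as np

rng = np.random.default_rng(4)

def interpolate_parts(real_parts, fake_parts, rng):
    """Eq. (11): draw one alpha per sample and share it across all of that
    sample's parts, blending each real part point cloud with the
    corresponding fake one."""
    alpha = rng.uniform()
    return [alpha * r + (1.0 - alpha) * f for r, f in zip(real_parts, fake_parts)]

real = [rng.standard_normal((1000, 3)) for _ in range(3)]
fake = [rng.standard_normal((1000, 3)) for _ in range(3)]
mixed = interpolate_parts(real, fake, rng)

def wgan_gp_critic_loss(d_real, d_fake, grad_norm, lam=10.0):
    """WGAN-gp critic loss: fake score minus real score plus the gradient
    penalty, where grad_norm is the norm of the critic's gradient at the
    interpolated sample (a scalar stand-in in this sketch)."""
    return d_fake - d_real + lam * (grad_norm - 1.0) ** 2
```

In a real training loop the gradient norm would be computed by differentiating the critic at `mixed`; sharing one alpha across parts keeps the interpolated shape structurally consistent.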
We evaluate our proposed PT2PC on the PartNet [51] dataset and compare to two baseline c-GAN methods. We show superior performance on standard point cloud GAN metrics and a novel structural metric evaluating how well the generated point clouds satisfy the input part tree conditions. We also conduct a user study which confirms our strengths over baseline methods.
Table 1: Dataset statistics and train/test splits.

| Category | #Shapes (Total) | #Shapes (Train) | #Shapes (Test) | #Part Trees (Total) | #Part Trees (Train) | #Part Trees (Test) |
|---|---|---|---|---|---|---|
| Chair | 4871 | 3848 | 1023 | 2197 | 1648 | 549 |
| Table | 5099 | 4146 | 953 | 1267 | 925 | 342 |
| Cabinet | 846 | 606 | 240 | 619 | 470 | 149 |
| Lamp | 802 | 569 | 233 | 302 | 224 | 78 |
We use the PartNet [51] dataset following StructureNet [49]. PartNet contains fine-grained, hierarchical, instance-level semantic part annotations, including 573,585 part instances over 26,671 3D models covering 24 categories. In this paper, we use the four largest categories that contain diverse part structures: chairs, tables, cabinets and lamps. Following StructureNet [49], we assume there are at most 10 children for every parent node and remove the shapes containing parts unlabeled under the canonical sets of part semantics in PartNet [51]. Table 1 summarizes the data statistics and the train/test splits. We split by part trees with a ratio of 75%:25%. We observe that most part trees (e.g. 1,787 out of 2,197 for chairs) have only one real data point in PartNet, which poses challenges for generating shapes with geometric variations.
We compare to two vanilla versions of conditional GAN methods as baselines.
Whole-shape Vanilla c-GAN (B-Whole): this method uses a Multi-layer Perceptron (MLP) as the generator and the holistic shape point cloud discriminator;
Part-based Vanilla c-GAN (B-Part): this method uses exactly the same generator as our proposed method and a holistic shape point cloud discriminator.
One can think of B-Part as an ablated version of our proposed method without the structural discriminator D_struct. The B-Whole method further removes our part-based generator and generates whole shape point clouds in one shot, similar to previous works [1, 73, 61]. We implement the baseline discriminator similarly to the holistic discriminator D_whole used as part of our discriminator. It uses a vanilla version of PointNet [55] to extract a global geometry feature for an input shape point cloud. Additionally, to make it aware of the input part tree condition T, we re-purpose the proposed part tree feature encoder network in our generator to recursively compute a root node feature summarizing the entire part tree structural information, and make the discriminator conditional on this extracted root node feature. For the B-Whole generator, we follow Achlioptas et al. [1] and design a five-layer MLP that finally produces the whole shape point cloud. We use leaky ReLU as the activation function except for the final output layer. We also condition the generator on the root feature extracted from the template feature encoder.
We report standard point cloud GAN metrics, including coverage, diversity [1], and Fréchet Point-cloud Distance (FPD) [61]. Note that coverage and diversity originally measure distances between shape point clouds, which is, more or less, structure-unaware. We introduce two structure-aware metrics, part coverage and part diversity, adapting the original ones by evaluating the average distance between corresponding parts of two shapes. In addition, we devise a novel perceptual structure-aware metric, HierInsSeg, that measures the part tree edit distance leveraging deep neural networks. See the supplementary for more details.
The HierInsSeg algorithm performs hierarchical part instance segmentation on the input shape point cloud and outputs a symbolic part tree depicting its part structure. We then compute a tree-editing distance between this part tree prediction and the part tree used as the generation condition. For each part tree, we conditionally generate 100 shape point clouds and compute the mean tree-editing distance. To get the HierInsSeg score, we simply average the mean tree-editing distances over all part trees. In Table 2 (the GT rows), we present the HierInsSeg scores computed on the real shape point clouds to provide an upper bound on achievable performance. Qualitative and quantitative results show that the proposed HierInsSeg algorithm is effective at judging whether a generated shape point cloud satisfies the part tree condition. See the supplementary for more details.
Table 2: Quantitative comparisons. The first six metric columns are on the Train split, the last six on the Test split. HIS denotes the HierInsSeg score.

| Category | Method | S-Cov | P-Cov | S-Div | P-Div | FPD | HIS | S-Cov | P-Cov | S-Div | P-Div | FPD | HIS |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Chair | B-Whole | 0.13 | – | 0.14 | – | 7.32 | 0.57 | 0.13 | – | 0.13 | – | 10.88 | 0.57 |
| Chair | B-Part | 0.14 | 0.41 | 0.14 | 0.06 | 7.17 | 0.58 | 0.15 | 0.41 | 0.14 | 0.06 | 11.10 | 0.58 |
| Chair | Ours | 0.13 | 0.06 | 0.18 | 0.07 | 6.64 | 0.48 | 0.14 | 0.07 | 0.18 | 0.07 | 10.69 | 0.48 |
| Chair | GT | – | – | – | – | – | 0.30 | – | – | – | – | – | 0.31 |
| Table | B-Whole | 0.19 | – | 0.14 | – | 13.02 | 1.04 | 0.21 | – | 0.14 | – | 20.63 | 1.02 |
| Table | B-Part | 0.20 | 0.60 | 0.15 | 0.09 | 6.45 | 1.02 | 0.21 | 0.60 | 0.15 | 0.09 | 16.92 | 0.99 |
| Table | Ours | 0.21 | 0.11 | 0.18 | 0.09 | 5.58 | 0.93 | 0.23 | 0.17 | 0.17 | 0.09 | 15.33 | 0.89 |
| Table | GT | – | – | – | – | – | 0.62 | – | – | – | – | – | 0.64 |
| Cabinet | B-Whole | 0.15 | – | 0.09 | – | 16.38 | 0.90 | 0.17 | – | 0.08 | – | 22.90 | 0.86 |
| Cabinet | B-Part | 0.30 | 0.84 | 0.03 | 0.01 | 3.25 | 0.64 | 0.43 | 0.84 | 0.03 | 0.01 | 24.29 | 0.81 |
| Cabinet | Ours | 0.13 | 0.08 | 0.13 | 0.02 | 4.13 | 0.52 | 0.24 | 0.18 | 0.05 | 0.02 | 17.73 | 0.57 |
| Cabinet | GT | – | – | – | – | – | 0.32 | – | – | – | – | – | 0.35 |
| Lamp | B-Whole | 0.38 | – | 0.08 | – | 17.87 | 1.00 | 0.38 | – | 0.09 | – | 86.96 | 0.96 |
| Lamp | B-Part | 0.28 | 0.73 | 0.09 | 0.03 | 6.52 | 0.78 | 0.43 | 0.70 | 0.09 | 0.03 | 94.66 | 0.88 |
| Lamp | Ours | 0.32 | 0.04 | 0.11 | 0.05 | 5.71 | 0.68 | 0.41 | 0.19 | 0.12 | 0.05 | 80.55 | 0.83 |
| Lamp | GT | – | – | – | – | – | 0.51 | – | – | – | – | – | 0.57 |
| Chair (Abla.) | Ours-W | 0.14 | 0.07 | 0.22 | 0.08 | 10.60 | 0.51 | 0.15 | 0.07 | 0.21 | 0.08 | 13.52 | 0.49 |
| Chair (Abla.) | Ours | 0.13 | 0.06 | 0.18 | 0.07 | 6.64 | 0.48 | 0.14 | 0.07 | 0.18 | 0.07 | 10.69 | 0.48 |
We train our proposed PT2PC method and the two vanilla c-GAN baselines on the training splits of the four PartNet categories. The part trees in the test splits are unseen during training. Table 2 summarizes the quantitative evaluations on both the training and testing splits. Our HierInsSeg scores are always the best, as we explicitly generate part point clouds and hence render clearer part structures. Moreover, we achieve the best FPD scores in most cases, showing that our method can generate realistic point cloud shapes. Finally, we find that our part-based generative model usually provides higher shape diversity as a result of part compositionality.
Figure 4 shows qualitative comparisons of our method to the two baseline methods. One can clearly observe that B-Whole produces holistically reasonable shape geometry but with unclear part structures, which explains why it achieves decent shape coverage scores but fails to match our method under FPD and HIS. B-Part fails severely on chair, table and cabinet examples: it does not assign clear roles to the parts, and the generated part point clouds overlap each other, which explains the high part coverage scores in Table 2. In contrast, our method generates shapes with clearer part structures and boundaries. We also see a reasonable amount of generation diversity, even for part trees with only one real data point in PartNet, thanks to the knowledge sharing among similar part tree and sub-tree structures when training a unified conditional network. We also conduct an ablation study on chairs where we remove the holistic discriminator D_whole.
Table 3: User study results: average ranking of the three methods (lower is better).

| Method | Structure (Train) | Geometry (Train) | Overall (Train) | Structure (Test) | Geometry (Test) | Overall (Test) |
|---|---|---|---|---|---|---|
| B-Whole | 2.39 | 2.07 | 2.22 | 2.40 | 2.10 | 2.21 |
| B-Part | 2.33 | 2.41 | 2.38 | 2.36 | 2.47 | 2.46 |
| Ours | 1.29 | 1.51 | 1.40 | 1.24 | 1.43 | 1.33 |
Although we provide both Euclidean metrics (i.e. coverage and diversity scores) and perceptual metrics (i.e. FPD and the proposed HierInsSeg score) for evaluating generation quality in Table 2, the true measure of success is human judgement of the generated shapes. For this reason, we performed a user study to evaluate generation quality on the chair class. For each trial, we show users a part tree as the condition, 5 ground-truth shapes as references, and 5 randomly generated shape point clouds for each of the three methods. We ask users to rank the methods regarding three aspects: 1) structural similarity to the given part tree; 2) geometric plausibility; 3) overall generation quality. For a fair comparison, we randomize the order of the methods in all trials and only show the shape point clouds without part labels. In total, we collected 536 valid records from 54 users. In Table 3, we report the average ranking of the three methods. Our method significantly outperforms the two baseline methods on all three aspects and on both train and test templates. Please refer to the supplementary material for the user interface.
Our proposed PT2PC framework enables disentanglement of shape structure and geometry generation factors. We demonstrate the capability of exploring structure-preserving geometry variation and geometry-preserving structure variation using our method. Conditioned on the same symbolic part tree, our network is able to generate shape point clouds with geometric variations by simply changing the Gaussian random noise z. On the other hand, if we fix the noise z and condition on different input part trees, we observe that PT2PC produces geometrically similar but structurally different shapes. Figure 5 shows the generated shape point clouds from a set of Gaussian noise vectors and a set of part trees. Each row shows shape structural interpolation results sharing similar shape geometry, and each column presents geometric interpolation results conditioned on the same part tree structure.
We have proposed PT2PC, a conditional generative adversarial network (c-GAN) that generates point cloud shapes given a symbolic part-tree condition. The part tree input specifies a hierarchy of semantic part instances with their parent-children relationships. The conditional generator first encodes the part tree features in a bottom-up fashion and then recursively decodes part features top-down from a root-node Gaussian noise vector to the leaf nodes, where geometry is finally generated in point cloud format. The conditional discriminator consists of a structure-aware discriminator, which recursively consumes the generated part tree with leaf-node geometry in a bottom-up fashion, and a holistic discriminator, which judges the shape point cloud as a whole. Extensive experiments and a user study show our superior performance compared to two baseline c-GAN methods. We also propose a novel metric, HierInsSeg, to evaluate whether the generated shape point clouds satisfy the part tree conditions. Future work may study incorporating more part relationships and extending our method to unseen categories.
This research was supported by a Vannevar Bush Faculty Fellowship, grants from the Samsung GRO program and the Stanford SAIL Toyota Research Center, and gifts from Autodesk and Adobe.
In this section, we describe more details on the evaluation metrics: coverage scores, diversity scores, Fréchet Point-cloud Distance (FPD), and our proposed novel HierInsSeg score.
Conditioned on each part tree $T$, the coverage score measures the average distance from every real shape satisfying the condition to its closest generated sample:

$$\mathrm{Cov}(T)=\frac{1}{|S_T|}\sum_{x\in S_T}\min_{y\in G_T}\mathrm{Dist}(x,y)\qquad(12)$$

where $S_T$ includes all real data samples that satisfy the part tree condition $T$, and $G_T$ is a set of 100 point cloud shapes randomly generated under the condition $T$.
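The coverage computation can be sketched as follows. This is a minimal illustration, not the authors' implementation: `dist_fn` is a placeholder for either of the two Dist variants introduced below, and all names are hypothetical.

```python
def coverage_score(real_shapes, generated_shapes, dist_fn):
    """Average distance from each real shape (matching the part tree
    condition) to its nearest generated sample; lower is better."""
    return sum(
        min(dist_fn(real, gen) for gen in generated_shapes)
        for real in real_shapes
    ) / len(real_shapes)
```

A coverage score of 0 would mean every real shape under the condition is reproduced exactly by some generated sample.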
We introduce two variants of the function Dist to measure the distance between two sets of part point clouds $X=\{x_1,\dots,x_n\}$ and $Y=\{y_1,\dots,y_n\}$:

$$\mathrm{Dist}_{\mathrm{part}}(X,Y)=\frac{1}{n}\sum_{i=1}^{n}\mathrm{EMD}\big(x_i,\,y_{m(i)}\big),\qquad \mathrm{Dist}_{\mathrm{shape}}(X,Y)=\mathrm{EMD}\Big(\mathrm{DownSample}\big(\textstyle\bigcup_i x_i\big),\,\mathrm{DownSample}\big(\textstyle\bigcup_j y_j\big)\Big)\qquad(13)$$

where EMD denotes the Earth Mover's Distance [59, 11] between two point clouds and DownSample is Furthest Point Sampling (FPS). Here, $m$ is the solution to a linear sum assignment that we compute over the two sets of part point clouds $X$ and $Y$ according to the part tree and part geometry.
We measure the part coverage score and the shape coverage score using $\mathrm{Dist}_{\mathrm{part}}$ and $\mathrm{Dist}_{\mathrm{shape}}$ respectively for every part tree condition, and finally average over all part trees to obtain the final coverage scores. The shape coverage score measures the holistic shape distance and is less structure-aware, while the part coverage score treats all parts equally and is more suitable for evaluating our part-tree-conditioned generation task.
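The two distance variants can be sketched as below, assuming each part is a NumPy array of 3D points. Two simplifying substitutions keep the example dependency-free and are not the paper's implementation: symmetric Chamfer distance stands in for the true EMD, and brute-force permutation search stands in for the linear sum assignment.

```python
import itertools
import numpy as np

def chamfer(a, b):
    # Symmetric Chamfer distance; a stand-in for the EMD used in the paper.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def dist_part(parts_a, parts_b):
    # Best one-to-one part matching; brute force is fine for small part
    # counts, whereas the paper solves a linear sum assignment.
    n = len(parts_a)
    return min(
        sum(chamfer(parts_a[i], parts_b[p[i]]) for i in range(n)) / n
        for p in itertools.permutations(range(n))
    )

def dist_shape(parts_a, parts_b):
    # Holistic distance over the merged shapes (FPS downsampling omitted).
    return chamfer(np.concatenate(parts_a), np.concatenate(parts_b))
```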
A good point cloud GAN should generate shapes with variations. For each part tree condition, we generate $k=10$ point clouds $\{y_1,\dots,y_k\}$ and compute the average pairwise distance under both distance functions $\mathrm{Dist}_{\mathrm{part}}$ and $\mathrm{Dist}_{\mathrm{shape}}$:

$$\mathrm{Div}(T)=\frac{2}{k(k-1)}\sum_{1\le i<j\le k}\mathrm{Dist}(y_i,y_j)\qquad(14)$$

Finally, we report the average part diversity score and shape diversity score across all part tree conditions.
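The average pairwise distance used for the diversity scores can be sketched as follows (names illustrative; `dist_fn` is either Dist variant):

```python
def diversity_score(samples, dist_fn):
    """Average pairwise distance among the k generated samples
    for one part tree condition."""
    k = len(samples)
    total = sum(
        dist_fn(samples[i], samples[j])
        for i in range(k) for j in range(i + 1, k)
    )
    return 2.0 * total / (k * (k - 1))
```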
Shu et al. [61] introduced the Fréchet Point-cloud Distance (FPD) for evaluating point cloud generation quality, inspired by the Fréchet Inception Distance (FID) [27] commonly used for evaluating 2D image generation quality. A PointNet [55] is trained on ModelNet40 [83] for 3D shape classification, and then FPD computes the distance between the real and fake feature distributions using the point cloud global features extracted by the PointNet.
FPD jointly evaluates the generation quality, diversity and coverage. It is defined as

$$\mathrm{FPD}=\|\mu_r-\mu_g\|_2^2+\mathrm{Tr}\big(\Sigma_r+\Sigma_g-2(\Sigma_r\Sigma_g)^{1/2}\big)\qquad(15)$$

where $\mu_r,\Sigma_r$ and $\mu_g,\Sigma_g$ are the mean vectors and covariance matrices of the features for the real data distribution and the generated one, respectively. The notation Tr refers to the matrix trace.
As most part trees in PartNet correspond to only one or a few real shapes, we cannot compute a stable real-data mean and covariance matrix for each individual part tree, which usually requires hundreds or thousands of data points. Thus, we compute FPD over all part tree conditions jointly, by repeatedly sampling a part tree condition from the dataset and generating one shape point cloud conditioned on it. In this paper, we generate 10,000 shapes for each evaluation.
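Given the feature means and covariances, the Fréchet distance can be computed as in the minimal NumPy sketch below (not the authors' implementation). The trace of the matrix square root is obtained from the eigenvalues of the covariance product.

```python
import numpy as np

def frechet_distance(mu_r, sigma_r, mu_g, sigma_g):
    """Frechet distance between two Gaussians fit to feature sets."""
    diff = mu_r - mu_g
    # Tr((Sr Sg)^{1/2}) equals the sum of square roots of the eigenvalues
    # of the product, which are non-negative for PSD covariances.
    eigvals = np.linalg.eigvals(sigma_r @ sigma_g)
    tr_covmean = np.sqrt(np.abs(eigvals)).sum()
    return diff @ diff + np.trace(sigma_r) + np.trace(sigma_g) - 2.0 * tr_covmean
```

Identical feature distributions give a distance of 0; shifting one mean by a unit vector adds exactly 1 to the score.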
We propose a novel HierInsSeg score, a structural metric that measures how well the generated shape point clouds satisfy the symbolic part tree conditions. The HierInsSeg algorithm performs hierarchical part instance segmentation on the input shape point cloud and outputs a symbolic part tree depicting its part structure. Then we compute a tree-editing distance between this predicted part tree and the part tree used as the generation condition. We perform a hierarchical Hungarian matching over the two symbolic part trees, matching according to the part semantics and the part subtree structures in a hierarchical top-down fashion. Every node mismatch in this procedure contributes to the tree difference score, and the final tree-editing distance is computed by dividing this score by the total node count of the input part tree condition.
For each part tree, we conditionally generate 100 shape point clouds and compute the mean tree-editing distance. To get the HierInsSeg score, we simply average the mean tree-editing distances from all part trees.
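The hierarchical matching can be sketched as a recursive mismatch count. In this illustration (names hypothetical) a tree is a `(semantic_label, children)` pair, brute-force permutation search replaces the paper's per-level Hungarian matching, and the final normalization by the condition tree's node count follows the text above.

```python
from itertools import permutations

def tree_size(t):
    return 0 if t is None else 1 + sum(tree_size(c) for c in t[1])

def tree_mismatch(a, b):
    # A tree is (label, [children]); None marks an unmatched slot.
    if a is None or b is None:
        return tree_size(a) + tree_size(b)  # whole unmatched subtree counts
    cost = 0 if a[0] == b[0] else 1
    ca, cb = list(a[1]), list(b[1])
    n = max(len(ca), len(cb))
    ca += [None] * (n - len(ca))
    cb += [None] * (n - len(cb))
    # Try all child pairings; the paper uses Hungarian matching per level.
    best = min(
        (sum(tree_mismatch(ca[i], cb[p[i]]) for i in range(n))
         for p in permutations(range(n))),
        default=0,
    )
    return cost + best

def hier_ins_seg_distance(pred_tree, cond_tree):
    return tree_mismatch(pred_tree, cond_tree) / tree_size(cond_tree)
```

For example, a predicted chair missing one of four legs mismatches exactly one node of the 8-node condition tree, giving a normalized distance of 1/8.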
Mo et al. [51] proposed a part instance segmentation method that takes as input a point cloud shape and outputs a variable number of disjoint part masks over the point cloud input, each of which represents a part instance. The method uses PointNet++ [56]
as a backbone network that extracts per-point features over the input point cloud and then performs a 200-way classification for each point with a SoftMax layer, which encourages every point to belong to exactly one mask in the final outputs. Each of the 200 predicted masks is also associated with a score within $[0,1]$ indicating its existence. The existing and non-empty masks correspond to the final part segmentation. We refer to [51] for more details.

We propose our HierInsSeg algorithm by adapting [51] to a hierarchical setting. First, we compute statistics over all training data to obtain the maximal number of parts for each part semantics in the canonical part semantic hierarchy. Then, we define a maximal instance-level part tree template that covers all possible part trees in the training data. We adapt the same instance segmentation pipeline [51], but change the maximal number of output masks from 200 to the total node count of this maximal template. Finally, to make sure all children part masks sum up to the parent mask in the part tree template, we define
$$M_{\mathrm{parent}}(p)=\sum_{c\,\in\,\mathrm{children}(\mathrm{parent})}M_c(p)\quad\text{for every point }p\qquad(16)$$

To implement this, for each parent part mask, we perform one SoftMax operation over all children part masks. The root node always has $M_{\mathrm{root}}(p)=1$ for every point $p$.
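The children-sum-to-parent constraint can be implemented as a per-point SoftMax over sibling mask logits, scaled by the parent's mask. A minimal NumPy sketch with illustrative names:

```python
import numpy as np

def child_masks(parent_mask, child_logits):
    """child_logits: (num_children, num_points) raw scores.
    Returns soft masks that sum exactly to parent_mask at every point."""
    e = np.exp(child_logits - child_logits.max(axis=0, keepdims=True))
    return parent_mask * e / e.sum(axis=0, keepdims=True)

# The root mask is all ones, so the top-level children partition the shape.
root_mask = np.ones(5)
masks = child_masks(root_mask, np.random.randn(3, 5))
```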
In Table 2 of the main paper (the GT rows), we present the HierInsSeg scores computed on the real shape point clouds to provide an upper bound for the performance. In Figure 6, we also show qualitative results of the proposed hierarchical instance-level part segmentation on some example generated shapes. Both quantitative and qualitative results show that the proposed HierInsSeg algorithm is effective at judging whether a generated shape point cloud satisfies the part tree condition.
We show our user study interface in Figure 9. We ask the users to rank the three algorithms along three aspects: part structure, geometry, and overall quality.
We present more qualitative results in Figure 10. Given the symbolic part trees as conditions, we show multiple diverse point clouds generated by our method.
Since our method deforms a point cloud sampled from a unit cube mesh for each leaf-node part geometry, we naturally obtain mesh generation results as well. Figure 7 shows some examples. Since the goal of this work is primarily point cloud generation, we do not explicitly optimize for mesh quality; nevertheless, we observe that our method learns to produce reasonable meshes.
Figure 8 presents common failure cases we observe. For the chair example, the back slats are not well aligned with each other and are unevenly distributed spatially. For the table example, the connecting parts between the legs and the surface extrude outside the table surface. In the cabinet example, the four drawers overlap with each other because the network does not assign clear roles to them. The lamp example exhibits a disconnection between the rope and the base on the ceiling. All these cases indicate that future work should study how to better model part relationships and physical constraints.