1 Introduction
Component-based 3D object modeling is common in the design of man-made objects. Creating automatic or semi-automatic tools for such component-based modeling has been a long-standing goal in 3D object processing. Towards that goal, previous work leveraged 3D CAD model datasets with known components and component structures for creating new objects [FKS04], completing partial shapes [SFCH12], and analyzing shape structures [KLM13, LXC17]. Recently created large-scale 3D CAD datasets [Tri17, CFG15] offer significant diversity of component geometry and structure, thus increasing the potential of component-based geometry processing for practical usage.
While a large database increases the diversity in modeling and editing shapes, it also entails the burden of annotating shape components in a consistent manner. Most previous part labeling methods are either not scalable [vKXZ13] or require some human supervision [YKC16]. To avoid the labor of annotation, while still taking advantage of the large scale of the databases, Sung et al. [SSK17] proposed an annotation-free component assembly method, which trains a neural network to retrieve plausible complementary components for a query partial object. The relations among partial objects and complementary components are learned from the data. However, the incremental approach of Sung et al. does not account for the plausibility of the full constructed shape, nor can it detect groups of segments which plausibly complete a given partial shape. This functionality is important when the plausibility of the full object matters, for example when creating a new object by mixing components from different models in the database [XZCOC12], or when completing partially created objects [SKAG15].
To address these limitations, we propose an annotation-free deep learning framework which learns partial shape representations from database component assemblies, and jointly encodes two semantic relations between partial shapes:
complementarity and interchangeability. Complementarity means that two partial shapes can be combined into a complete, semantically meaningful object. Interchangeability indicates that replacing a part of a model with another partial shape still produces a plausible new object. These relations can capture semantic similarities among partial shapes in terms of their usage in the context of full objects, even when the partial shapes are geometrically dissimilar. Complementarity and interchangeability are closely related to each other, since interchangeability means that two partial shapes share the same set of complements.

Encoding these relations in embedding spaces is not trivial. Complementarity is an irreflexive relation; therefore, a naïve embedding scheme which minimizes distances among related data is not applicable. In addition, we do not have any supervision for learning interchangeability relations, and need to infer them from the complementarity relations between partial shapes. To tackle these challenges, we suggest a novel embedding approach using dual embedding spaces (Figure 1). We consider the symmetric (undirected) complementarity relation as a pair of directed relations, and create two embedding spaces, such that all variations of partial shapes are present in both spaces. Given two partial shapes, the complementarity between them is reflected by their embeddings into the two spaces. To learn the irreflexive complementarity mapping with this embedding scheme, we use fuzzy set representations [Zad65] for both embedding spaces, and encode the complementarity relation as the intersection of sets. When the complementarity relations are learned across the two embedding spaces, similarity within the same embedding space can be interpreted as interchangeability.
Key contributions:

We propose a novel dual embedding framework to learn complementarity and interchangeability relations between partial shapes.

The complementarity and interchangeability of partial shapes are encoded as inter- and intra-space relations in the dual embedding spaces, respectively.

Fuzzy set representations are utilized for both embedding spaces, to learn the irreflexive complementarity mapping between them.

We demonstrate the effectiveness of the proposed embedding scheme for learning the two relations between partial shapes for several shape modeling tasks, on a variety of shape categories.
2 Related Work
We review related work on component-based 3D modeling, and on structural embedding techniques using neural networks.
Component-based 3D Modeling
Funkhouser et al. [FKS04] were the first to introduce the idea of reusing parts of existing 3D models for creating new objects. Subsequent approaches [CKGK11, KCKK12, CKGF13] developed the idea of shape construction using labeled components, by learning the component structure and suggesting appropriate components in the interactive assembly process. Shen et al. [SFCH12] used partially scanned data as a cue for constructing complete 3D models. There, components in the database were retrieved and stitched to each other, to fit the input geometry and fill the missing areas in the incomplete scans. This completion approach was further extended by Sung et al. [SKAG15], who integrated both symmetry- and retrieval-based inferences to detect the missing parts. These approaches successfully demonstrated practical applications of leveraging the component structure in a given dataset of 3D models, but all of them relied on having models with labeled components, which required significant annotation effort.
Recently, Sung et al. [SSK17] introduced a method for constructing shapes from unlabeled components in an iterative assembly process. Specifically, given a partial shape, their method produces multiple plausible complementary components. This is done by training a neural network which jointly maps database components into the embedding space and predicts a complementary component probability distribution over that space for a given partial shape. While we also learn complementarity relations, we aim at learning relations between partial shapes, as opposed to the relations between partial shapes and single components learned in [SSK17]. Towards that goal, we embed all possible partial objects and discover shapes that complete the query in a plausible manner. This is a more challenging problem, since the shape variation space of partial shapes is much larger than the space of single components. We evaluate both methods using partial shape datasets, and demonstrate that our method outperforms the previous work in quantitative and qualitative evaluations.

Deep Structural Embedding
Data embedding with neural networks is widely used for encoding relations among large-scale data. The advantages of inferring relations between data points in a structured embedding space, instead of learning binary indicator functions of the relations, are clearly described in the seminal work [BWCB11]: simple adaptation to various datasets, compact information storage, flexible joint encoding of different types of relations, and, most importantly, the ability to infer unseen relations from the structure of the embedding space. Notable examples of the above in language processing are [SCMN13] and [MSC13], where low-dimensional word embeddings were used to capture the relations among words, and in image processing [SKP15], which proposed learning an embedding space of facial images, where distances directly corresponded to face similarity, independently of the face pose. Deep embedding of 3D data was utilized by Li et al. [LSQ15], who suggested a method for real-time object reconstruction by learning the correlation between images and 3D models, and by Wu et al. [Wu:2016b], who showed that the relations among 3D object structures can be learned using a generative adversarial network. Sung et al. [SSK17] learned a different embedding space of complementary components for input partial shapes.
Unlike previous work, in the proposed approach we construct embedding spaces that reflect both complementarity relations, learned in a supervised manner, and interchangeability relations, for which no supervision is provided, and successfully discover both types of relations between previously unseen partial shapes. In addition, the majority of existing techniques for deep embedding encode reflexive and symmetric relations, such as similarities, into distances or angles between vectors in the embedding space [KFF15]. Some recent methods focus on other types of relations, such as partial order relations [VKFU16, NK17], which are asymmetric and transitive. In this work, we consider irreflexive and symmetric complementarity relations, which differ from both, and propose a new embedding space construction technique to reflect these relations.

3 Method
In this section, we explain how we train a neural network to jointly encode both the complementarity and interchangeability in embedding spaces. We first give an overview of the proposed approach, then describe in detail all its components.
3.1 Overview
We design binary energy functions of complementarity and interchangeability, both of which take the embedding coordinates of partial shapes as inputs. A neural network is used to define the embedding function for partial shapes. For the training of the neural network, we create complementary pairs of partial shapes as training examples by splitting full objects into two parts, but we do not have any supervision for interchangeability. Thus, the network is trained to minimize only the complementarity energy, yet is still able to predict interchangeability from the embedding structure. In the next subsections, we elaborate on how we define the embedding spaces and the binary energy functions on them. We first describe the motivation for using the dual embedding spaces to represent complementarity (Section 3.2), and how this relation is encoded across the dual spaces (Section 3.3). Then, we define the complementarity and interchangeability energy functions as fuzzy set operations on the embedding spaces, in Sections 3.4 and 3.5, respectively. The loss functions used for the neural network training and the neural network implementation details are described in Sections 3.6 and 3.7.

3.2 Dual Embedding Spaces
We first describe how to design the embedding spaces and a binary indicator function for complementarity. One can consider a graph whose nodes and edges indicate partial shapes and their complementarity relations, respectively. Our problem can then be viewed as a graph embedding problem, aimed at finding unseen edges between nodes (unseen complementarity relations) by using the geometry of the partial shapes as node attributes. The binary complementarity indicator function is then defined by whether one partial shape is connected to the other, and vice versa. There are several previous techniques for the graph node embedding problem with input node attributes [PARS14, HYL17, KW17], but they all use the same approach of mapping neighboring nodes to proximal locations in the embedding space, which is not applicable to our case for two reasons. First, complementarity is not transitive, meaning that, given a partial shape, a complement of its complement is generally not a complement of it. Thus, first-order neighbors need to be discriminated from higher-order neighbors. Second, complementarity is also not reflexive; a partial shape is not a complement of itself. Thus, each node should be isolated from its neighbors in the embedding space. To handle the non-transitivity and irreflexivity, we consider dual embedding spaces, as shown in Figure 1. All partial shapes live in both embedding spaces, and the complementarity relations among partial shapes are now represented by relations between their embedding coordinates across the two spaces. Since one partial shape can have different embedding coordinates in the different spaces, this allows a partial shape not to have a relation to itself. We define the structure of the two embedding spaces and the inter-space complementarity relation in the next subsection.
3.3 Embedding as Set Inclusion
The naïve idea for constructing the dual spaces is to align the positions of complementary partial shapes at the same coordinates in the different spaces. However, this may cause some non-complementary pairs to be encoded as complementary to each other. Consider, for example, the case illustrated in Figure 2(a), with a shape A complementary to both B and C, while B is only complementary to A. From the complementarity of A and B, the coordinates of B in one space must coincide with the coordinates of A in the other space, and vice versa. Because of the complementarity of A and C, C is likewise placed at A's location in the other space, and thus at the same location as B. This implies a complementary relation between B and C, which contradicts the assumption (see Figure 2(b)).
To avoid this problem, we suggest relating a coordinate in one embedding space to multiple coordinates in the other space. From the graph embedding perspective described in Section 3.2, the relation is indicated by checking, in both directions, whether one node is a neighbor of the other. When considering one-way relations only, encoding the mapping from a partial shape to its complements, we view the embedding coordinates as a representation of a set, such that the complementarity relation is encoded as inclusion from one space to the other. One choice for encoding sets and inclusions in the embedding space is the approach of Vendrov et al. [VKFU16]:
(1)  x ⊆ y  ⟺  ⋀_i (x_i ≤ y_i)
where x and y are the embedding coordinates of the two sets, respectively, and ⋀ is a ‘logical and’ operator over the coordinates (note that Vendrov et al. use the reversed direction of the inequality, but we switch back to the natural direction). Since here we wish to relate embedding coordinates in two different spaces (due to the irreflexivity), given two embedding spaces X and Y, with embedding functions f(·) ∈ X and g(·) ∈ Y, we represent the one-way complementarity c(A → B) as follows:
(2)  c(A → B)  ⟺  ⋀_i (f(A)_i ≤ g(B)_i)
According to the analogy with the inclusion relation, we will call X and Y the subset and superset spaces, respectively, in the rest of the paper. Then, the binary indicator function for the both-way complementarity C(A, B) can be represented as follows:
(3)  C(A, B)  ⟺  c(A → B) ∧ c(B → A)
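As a sanity check of this dual-space encoding, the indicator of Equations 1–3 can be sketched in a few lines of NumPy. The embedding vectors below are hypothetical toy values, not produced by any trained network; they only illustrate that the relation is symmetric across the two spaces yet irreflexive.

```python
import numpy as np

def includes(x, y):
    """Set inclusion of Eq. 1: every coordinate of x is bounded by the
    corresponding coordinate of y (the 'logical and' over coordinates)."""
    return bool(np.all(x <= y))

def complementary(f_a, g_a, f_b, g_b):
    """Both-way complementarity of Eq. 3: f(A) is included in g(B) and
    f(B) is included in g(A)."""
    return includes(f_a, g_b) and includes(f_b, g_a)

# Hypothetical embedding coordinates of two partial shapes A and B in the
# subset space (f) and the superset space (g).
f_A, g_A = np.array([0.2, 0.0, 0.5]), np.array([0.3, 0.9, 0.1])
f_B, g_B = np.array([0.1, 0.8, 0.0]), np.array([0.4, 0.2, 0.6])
```

With these values, complementary(f_A, g_A, f_B, g_B) holds in both directions, while complementary(f_A, g_A, f_A, g_A) fails, since f(A) is not included in g(A); this is exactly the irreflexivity that a single shared embedding space could not express.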
3.4 Fuzzy Set Interpretation
While the inclusion embedding of [VKFU16] was first applied to neural network training in that paper, the idea is actually closely related to the well-studied fuzzy set theory [Zad65]. In fuzzy set theory, a set is represented by fuzzy memberships over a discrete set of elements, also called a possibility distribution. The notion of inclusion is then defined so that the membership scores (or probabilities) over all elements of a superset are greater than or equal to the corresponding scores of a subset, which is identical to the definition in Equation 1.
With the fuzzy set representation, one can consider how to define set operations analogous to classic crisp (non-fuzzy) set theory. For example, for ‘logical and’ (intersection, ∩) and ‘inclusive or’ (union, ∪), there are various ways of defining the operations with fuzzy sets (see Section 3 in [Zim01] for details), but the simplest form uses minimum and maximum operations:
(4)  (x ∩ y)_i = min(x_i, y_i),   (x ∪ y)_i = max(x_i, y_i)
In the neural network training, we need to fuzzify the notion of inclusion to obtain a continuous loss function. Vendrov et al. suggest penalizing cases where the embedding coordinates of a subset are greater than the coordinates of the superset, element-wise:
(5)  E_⊆(x, y) = ‖max(0, x − y)‖²
This is actually the same as requiring the subset x to be equal to the intersection of the two sets x and y, using the above definition of the intersection:
(6)  E_⊆(x, y) = ‖x − (x ∩ y)‖²
Using Equation 2 and Equation 3, we can define the energy functions for the one-way complementarity e(A → B) and the both-way complementarity E_C(A, B), as follows:
(7)  e(A → B) = ‖f(A) − (f(A) ∩ g(B))‖²
(8)  E_C(A, B) = e(A → B) + e(B → A)
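Under the min-based fuzzy intersection, the energies of Equations 7 and 8 reduce to simple elementwise operations. A minimal NumPy sketch, using the same reconstructed f/g notation as above (not the authors' code):

```python
import numpy as np

def e_one_way(f_a, g_b):
    """One-way complementarity energy (Eq. 7): squared deviation of f(A)
    from the fuzzy intersection f(A) ∩ g(B) = min(f(A), g(B))."""
    return float(np.sum((f_a - np.minimum(f_a, g_b)) ** 2))

def E_C(f_a, g_a, f_b, g_b):
    """Both-way complementarity energy (Eq. 8): sum of the two one-way
    energies across the dual spaces."""
    return e_one_way(f_a, g_b) + e_one_way(f_b, g_a)
```

The energy is zero exactly when the inclusion of Equation 2 holds, and grows smoothly as coordinates of f(A) exceed those of g(B), which is what makes it usable as a neural network loss.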
3.5 Interchangeability via Complementarity
In Sections 3.3 and 3.4, we described how complementarity is represented as an inter-space fuzzy set operation. We now discuss how we define an intra-space fuzzy set operation that measures the degree of interchangeability.
It is obvious that two partial shapes have exactly the same complements, with the same energy values, when they have the same embedding coordinates in both embedding spaces. This implies that two partial shapes with similar embedding coordinates in each embedding space have a similar set of complements. In the following propositions, we show how the union and intersection of two fuzzy sets represented by the embedding coordinates are related to the complementarity energies with arbitrary partial shapes.
Proposition 1
Refer to Appendix
Proposition 2
Refer to Appendix
Corollary 1
Corollary 1 shows that the union in the subset space X and the intersection in the superset space Y bound the sum of the complementarity energies with arbitrary partial shapes. When restricting the norm of all embedding coordinates to be one, one can consider how close the norms of f(A) ∪ f(B) and g(A) ∩ g(B) are to one as a measure of the interchangeability energy E_I(A, B):
(9)  E_I(A, B) = (‖f(A) ∪ f(B)‖ − 1) + (1 − ‖g(A) ∩ g(B)‖)
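Equation 9 can likewise be sketched directly. The example below assumes non-negative, unit-norm embedding coordinates, as stated above; the vectors are toy values, not network outputs.

```python
import numpy as np

def E_I(f_a, g_a, f_b, g_b):
    """Interchangeability energy (Eq. 9): how far the fuzzy union in the
    subset space and the fuzzy intersection in the superset space deviate
    from unit norm. Vanishes when the two shapes have identical coordinates."""
    union = np.linalg.norm(np.maximum(f_a, f_b))   # ≥ 1 for unit-norm inputs
    inter = np.linalg.norm(np.minimum(g_a, g_b))   # ≤ 1 for unit-norm inputs
    return float((union - 1.0) + (1.0 - inter))
```

Both terms are non-negative for non-negative unit vectors, so the energy behaves as a proper dissimilarity: zero for identical dual embeddings, and increasingly positive as the two shapes' fuzzy sets diverge.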
3.6 Neural Network Loss
The loss function for the neural network training is defined using the complementarity energy (Equation 8). Given the complementary pairs in a batch, we consider all mismatched pairs as negative examples. We suggest two loss functions that can be used depending on the application. For complement retrieval tasks, we use a pairwise ranking loss, as introduced in [KFF15, VKFU16]:
(10)  L_rank = Σ_i Σ_{j≠i} [ max(0, m + E_C(A_i, B_i) − E_C(A_i, B_j)) + max(0, m + E_C(A_i, B_i) − E_C(A_j, B_i)) ]
where m is a given margin parameter; the same margin value is used in all our experiments. This pairwise ranking loss learns relative distances between positive and negative pairs. However, the interchangeability measure (Equation 9) requires a commensurable measure of complementarity, since it is based on an upper bound of the complementarity energies. Thus, we introduce another loss function, learning absolute errors with respect to a threshold:
(11)  L_abs = Σ_i max(0, E_C(A_i, B_i) − τ) + Σ_i Σ_{j≠i} max(0, (τ + m) − E_C(A_i, B_j))
where τ is a learnable threshold parameter. This loss function pushes the energy of positive pairs below τ and the energy of negative pairs above τ + m, with the margin m between them.
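For concreteness, the ranking loss of Equation 10 over a batch can be sketched as follows. The batch energies are assumed to be given as a matrix E with E[i, j] = E_C(A_i, B_j), whose diagonal holds the positive pairs; the margin m is left as a free parameter, since its value is not fixed here.

```python
import numpy as np

def ranking_loss(E, m):
    """Pairwise ranking loss (Eq. 10). E[i, j] = E_C(A_i, B_j); diagonal
    entries are the energies of the matched (positive) pairs, and every
    mismatched pair in the batch acts as a negative example."""
    n = E.shape[0]
    loss = 0.0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # positive pair (A_i, B_i) vs. negatives (A_i, B_j) and (A_j, B_i)
            loss += max(0.0, m + E[i, i] - E[i, j])
            loss += max(0.0, m + E[i, i] - E[j, i])
    return loss
```

The loss is zero once every negative energy exceeds its positive counterpart by at least the margin, which is the desired ranking behavior for retrieval.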
3.7 Neural Network Architecture and Training
We used the PointNet architecture [Qi:2017] as the basic building block of the proposed embedding network, as illustrated in Figure 3. Since all partial shapes are embedded into both embedding spaces, we have two separate PointNet Siamese architectures for learning the subset-space and superset-space embedding functions. Both networks are fed with point clouds sampled from the input partial shapes, and produce the corresponding embedding coordinates. Note that we use the unit norm constraint for the output embedding coordinates, as described in Section 3.5. Both networks receive as inputs centered point clouds of partial shapes, translated so that the centers of their axis-aligned bounding boxes are located at the origin. Thus, the relation prediction is performed without information about the partial shape locations. In all experiments, we train the network for 1,000 epochs with a batch size of 32, and use the ADAM optimizer with a 1.0E-3 initial learning rate, 0.7 decay rate, and 200K decay steps. The embedding dimension is the same in all experiments.

4 Results
4.1 Partial Shape Dataset
In our experiments, we use the ShapeNet 3D component dataset of Sung et al. [SSK17]. It consists of 9 model categories from ShapeNet [CFG15], each of which has up to 2.4K models. All models are consistently aligned, scaled to have unit radius, and pre-segmented into components. The components were created from the ShapeNet CAD model scene graphs; the leaf node components of the scene graphs were preprocessed so that symmetric components were grouped into a single component, and small components were merged with adjacent larger components. We build contact graphs of components based on their proximity, and during training generate complementary partial shape pairs by randomly splitting the contact graphs into two subgraphs. The dataset is split into training and test sets, and separate networks are learned for the different model categories.
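The training-pair generation described above amounts to randomly bipartitioning each contact graph. A possible sketch is shown below; the adjacency-dictionary format and helper name are our own illustration, not the dataset's API.

```python
import random

def split_contact_graph(adj, rng):
    """Randomly split a component contact graph into two complementary parts.
    Grows a random connected subgraph of random size; the remaining
    components form its complement. `adj` maps a component id to the list
    of components it is in contact with."""
    nodes = list(adj)
    size = rng.randint(1, len(nodes) - 1)      # components in the first part
    part, frontier = set(), [rng.choice(nodes)]
    while frontier and len(part) < size:
        v = frontier.pop(rng.randrange(len(frontier)))
        if v in part:
            continue
        part.add(v)
        frontier.extend(n for n in adj[v] if n not in part)
    return part, set(nodes) - part
```

For a chair whose seat touches a back and two legs, the call can return pairs such as a seat-and-back subgraph with the legs as its complement; since the first part is grown along contact edges, it stays connected, while its complement may consist of several disjoint components.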
4.2 Qualitative Evaluation
Complementarity Evaluation
Figure 4 shows examples of the top-5 complement retrievals, in terms of the complementarity energy (Equation 8), one example per category. In this retrieval experiment, we used all possible partial shapes in the test set as the database of complement candidates. The centered query and retrieved partial shapes were automatically stitched using the placement neural network introduced in [SSK17]; here, the placement network was retrained with partial shapes instead of single components. We note that the retrieved complementary partial shapes have different geometries and styles, but most of them complement the queries in a plausible manner. For example, given a chair query with the swivel legs removed, both four-leg and swivel leg bases are retrieved, and all the results look plausible. In another example, given the stretcher of a table, different tables having appropriate widths fitting the stretcher are retrieved. The three lamps are complemented with suitable mount accessories, including the three wall mount plates of the last complement.
Interchangeability Evaluation
Figure 8 shows examples of interchangeable partial shape extraction. Here, we also used all possible partial shapes in the test set as the database of interchangeable candidates. For each query shape, we extracted its top-5 nearest partial shape neighbors, now using the interchangeability measure (Equation 9). The results demonstrate that our method properly learns interchangeability among partial shapes, even when they are constructed of different components and have dissimilar geometries. For instance, the table stand bottoms (Figure 8, second column from the right) have different shapes, but they can replace each other in any table. For the partial chair without a seat, we successfully retrieve other partial chairs without seats, all with different back and leg parts. The lamp components retrieved for a query have various shapes and sizes, but all of them are shade parts with tubing.
Category (# Partial Shapes)   Airplane  Car     Chair   Guitar  Lamp    Rifle   Sofa    Table   Watercraft  Mean
                              (4140)    (5770)  (8374)  (198)   (1778)  (1184)  (4452)  (4594)  (1028)

Recall@1               CM      9.9       2.4     4.9    19.2     1.7     1.9     3.9     2.7     0.7        4.3
                       Ours   17.5       5.8     8.0    23.7     5.1     7.3     6.7     4.1     3.2        7.8
Recall@10              CM     48.6      15.5    27.2    67.7    11.1    17.1    20.0    15.5     7.3       23.5
                       Ours   61.3      30.5    35.0    72.2    19.7    23.5    30.1    19.2    14.3       32.9
Median Percentile Rank CM     99.8      98.8    99.6    97.0    89.6    94.3    98.5    98.3    87.0       97.9
                       Ours   99.9      99.5    99.7    98.5    90.4    95.8    99.2    98.5    88.7       98.4
Mean Percentile Rank   CM     98.4      96.4    98.3    94.5    81.4    88.2    94.0    94.9    77.6       94.8
                       Ours   98.5      97.2    98.5    93.8    79.9    89.0    94.9    95.0    78.7       95.2
4.3 Comparisons
We compare our method with ComplementMe [SSK17], which also learns complementarity relations among 3D shapes, but for a different purpose. ComplementMe was designed for an iterative component assembly task; it therefore retrieves a single component for the query partial shape in every iteration. Thus, when applied to fully automatic shape synthesis or completion, ComplementMe suffers from noise accumulating over successive iterations and lacks a notion of termination. In contrast, the proposed method finds groups of components fully completing the query shape in a single retrieval step. Another difference, in terms of the difficulty of the problems, is that the proposed method handles a much larger shape variation space, since it embeds all possible partial shapes, while ComplementMe only embeds single components.
The ComplementMe approach can also be adapted to handle partial shapes instead of individual components. However, we argue that the proposed method is more effective for learning both complementarity and interchangeability relations, due to the differences in the relation representations. In ComplementMe, the one-way complementarity energy function is defined as a negative log-likelihood of a Gaussian mixture probability density function. The multi-modality of the distribution is essential in ComplementMe, since a single Gaussian raises the embedding collapse problem described in Section 3.2 and Figure 2(b). But it also leads to two limitations. First, a larger number of Gaussians can better encode all possible complementarity relations, but it also increases the number of output parameters, making the network more difficult to train. Thus, the representation power can be impaired when the number of Gaussians is either too small or too large. Second, some interchangeability relations may not be captured with the multi-modal distributions, since two interchangeable partial shapes can be included in different modes. Our fuzzy-set-based representation, using a single vector to represent a partial shape, is much more concise than the Gaussian mixture representation, and does not have the above multi-modality issues.

In the following experiments, we demonstrate that this difference in the relation representations affects the performance of both complementary and interchangeable shape retrievals in practice. For comparison, we retrain the ComplementMe retrieval network using our partial shape dataset, and the same training parameters and embedding dimension as in the proposed method. We also use our loss functions (Equation 10 and Equation 11) instead of the triplet loss in ComplementMe. Note that the losses are identical, except for the larger number of negative pairs used in the proposed loss, which makes the training more efficient.
Evaluating whether the retrieved shapes are complementary or interchangeable is non-trivial, since the criteria are subjective; human annotations may not be consistent and can be prone to bias. Thus, for the quantitative evaluation of the complement retrievals, we measure the recall and rank of the ground truth complements, following recent retrieval work [KLSW15, VKFU16]. Table 1 shows the quantitative evaluation results when testing both ComplementMe (CM) and our method with all possible partial shapes in the test set as queries (the numbers of partial shapes are given in parentheses in the first row, for each shape category). Recall@k indicates the percentage of queries for which the ground truth appears among the top k retrievals, and the percentile rank indicates the percentage of partial shapes having ranks equal to or greater than the rank of the ground truth. The proposed fuzzy set representation outperforms ComplementMe in all cases, except for the mean percentile ranks of two model categories.
Figure 6 shows complementary shape retrieval results of both methods. Although ComplementMe produces reasonable results, some retrievals do not fully complement the query shape, resulting in missing areas in the combined shapes: e.g., armrests in a chair, a trunk in a car, and the bottom parts of a sofa and a lamp. Our method successfully creates complete plausible output shapes with the retrieved complements.
For the interchangeable shape retrievals, we also compare the proposed method with ComplementMe, and additionally with the Multi-View CNN (MVCNN) descriptor [SMKLM15]. To evaluate the retrievals quantitatively, we use semantic single parts [YKC16] instead of partial shapes, and measure the correlation between the detected interchangeable parts and their semantic labels. While this measure is imperfect, since some parts with different labels may have similar shapes, this is uncommon for most shape categories. Given a query semantic part, we find its interchangeable parts using each of the methods: the interchangeability measure (Equation 9) for our method, and the distances in the embedding and descriptor spaces for ComplementMe and MVCNN, respectively. Then, we compute the ratio between the number of retrieved neighbors having the same semantic label as the query and the total number of retrieved neighbors. Higher values mean that more neighboring parts share the semantic label with the query. Figure 7 shows the average ratio of equal nearest neighbor labels, as a function of the number of neighbors, for up to 10% of all parts. Our method (green line) consistently produces higher correlation with the semantic labels compared to the other methods in all categories, except cars, for which the training data mostly have coarse segmentations, not into parts but into larger partial shapes.
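The label-agreement measure used here can be made precise with a small sketch; the pairwise energies and label list below are illustrative inputs, and this is our reconstruction of the metric rather than the authors' evaluation code.

```python
import numpy as np

def neighbor_label_ratio(energies, labels, query_idx, k):
    """Fraction of the k nearest neighbors (lowest energy or distance,
    excluding the query itself) that share the query's semantic label."""
    order = np.argsort(energies[query_idx])
    order = order[order != query_idx][:k]      # drop the query, keep top k
    q = labels[query_idx]
    return float(np.mean([labels[i] == q for i in order]))
```

Averaging this ratio over all queries, for increasing k, yields curves like those in Figure 7; a method whose neighborhoods respect part semantics stays close to 1 for small k.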
Figure 8 visualizes some results of interchangeable partial shape retrievals using both methods: the query shape, the rank-1 retrieval of our method, and the rank-1 retrieval of ComplementMe, from left to right. For each retrieval, we also list its rank under the other method's interchangeability measure (embedding distances for ComplementMe and the energy function for our method). The results indicate that the proposed method detects more semantically meaningful shapes as its rank-1 retrievals, compared to ComplementMe. Furthermore, according to the ComplementMe ranks, ComplementMe maps the shapes it incorrectly retrieves and the plausible shapes retrieved by the proposed method to nearby locations in its embedding space. In contrast, the proposed method is able to discriminate between the interchangeable and non-interchangeable shapes, as indicated by its ranks for the shapes retrieved by ComplementMe.
4.4 Application: Partial scan completion
One potential application of the proposed method is completion of partial scan data with various partial shapes from the dataset. Figure 9 shows examples of completing real and synthetic partial scan data with complements retrieved by our method. We first use ICP, with a manually-provided initial scan pose, to find a partial shape in our test dataset that best fits the input point cloud (shown in pink). We then retrieve complementary partial shapes (shown in green) using our fuzzy set operations. Note that, unlike other shape completion methods, such as [SKAG15, DQN17], which create a single completion result, our method can easily provide multiple plausible outputs, according to the partiality of the input data.
4.5 Computation time
Both training and testing are performed on a single NVIDIA TITAN Xp graphics card. It took 4.5 hours to train a network. At test time, computing the embedding coordinates of the candidate partial shapes and their corresponding complementarity/interchangeability energies w.r.t. a query takes a few seconds.
5 Conclusion and future work
We have presented a novel neural network-based framework for learning complementarity and interchangeability relations between partial shapes. The two relations are learned jointly by embedding the partial shapes into dual embedding spaces, where the shapes are encoded using the fuzzy set representation. This embedding allows us to model complementarity and interchangeability as fuzzy set operations performed across and within the embedding spaces, respectively. The method is fully automatic, and was trained using a dataset of models with unlabeled components. Qualitative and quantitative evaluations demonstrate that our method captures both types of relations well, and produces meaningful results when applied to previously unseen shapes.
While our framework is applicable to partial shape completion, it is limited to filling the missing area at the level of components, and does not exploit symmetry information as done in [SKAG15] (Figure 10(a)). Also, small or thin components can be neglected in the retrieved shapes, due to the limited resolution of the point clouds used as neural network inputs (Figure 10(b)).
In future work, we plan to investigate how the fuzzy set operations can be applied to represent the other shape relations, and also the relations among different modalities, e.g. images and 3D shapes.
Acknowledgements
We thank the anonymous reviewers for their comments and suggestions. M. Sung acknowledges the support in part by the Korea Foundation for Advanced Studies. A. Dubrovina acknowledges the support in part by The Eric and Wendy Schmidt Postdoctoral Grant for Women in Mathematical and Computing Sciences. L. Guibas acknowledges the support of NSF grants IIS1528025, DMS1521608, and DMS1546206, and gifts from Adobe Systems, Amazon AWS, and Autodesk.
References
 [BWCB11] Bordes A., Weston J., Collobert R., Bengio Y.: Learning structured embeddings of knowledge bases. In AAAI (2011).
 [CFG15] Chang A. X., Funkhouser T. A., Guibas L. J., Hanrahan P., Huang Q., Li Z., Savarese S., Savva M., Song S., Su H., Xiao J., Yi L., Yu F.: ShapeNet: An information-rich 3d model repository. CoRR abs/1512.03012 (2015).
 [CKGF13] Chaudhuri S., Kalogerakis E., Giguere S., Funkhouser T.: AttribIt: Content creation with semantic attributes. In UIST (2013).
 [CKGK11] Chaudhuri S., Kalogerakis E., Guibas L., Koltun V.: Probabilistic reasoning for assembly-based 3d modeling. In SIGGRAPH (2011).
 [DQN17] Dai A., Qi C. R., Nießner M.: Shape completion using 3d encoder-predictor cnns and shape synthesis. In CVPR (2017).
 [FKS04] Funkhouser T., Kazhdan M., Shilane P., Min P., Kiefer W., Tal A., Rusinkiewicz S., Dobkin D.: Modeling by example. In SIGGRAPH (2004).
 [HYL17] Hamilton W., Ying Z., Leskovec J.: Inductive representation learning on large graphs. In NIPS (2017).
 [KCKK12] Kalogerakis E., Chaudhuri S., Koller D., Koltun V.: A probabilistic model for component-based shape synthesis. In SIGGRAPH (2012).
 [KFF15] Karpathy A., Fei-Fei L.: Deep visual-semantic alignments for generating image descriptions. In CVPR (2015).
 [KLM13] Kim V. G., Li W., Mitra N. J., Chaudhuri S., DiVerdi S., Funkhouser T.: Learning part-based templates from large collections of 3d shapes. In SIGGRAPH (2013).
 [KLSW15] Klein B., Lev G., Sadeh G., Wolf L.: Associating neural word embeddings with deep image representations using fisher vectors. In CVPR (2015).
 [KW17] Kipf T. N., Welling M.: Semi-supervised classification with graph convolutional networks. In ICLR (2017).
 [LSQ15] Li Y., Su H., Qi C. R., Fish N., CohenOr D., Guibas L. J.: Joint embeddings of shapes and images via cnn image purification. In SIGGRAPH (2015).
 [LXC17] Li J., Xu K., Chaudhuri S., Yumer E., Zhang H., Guibas L.: Grass: Generative recursive autoencoders for shape structures. In SIGGRAPH (2017).
 [MSC13] Mikolov T., Sutskever I., Chen K., Corrado G., Dean J.: Distributed representations of words and phrases and their compositionality. In NIPS (2013).
 [NK17] Nickel M., Kiela D.: Poincaré embeddings for learning hierarchical representations. In NIPS (2017).
 [PARS14] Perozzi B., Al-Rfou R., Skiena S.: Deepwalk: Online learning of social representations. In KDD (2014).
 [QSMG17] Qi C. R., Su H., Mo K., Guibas L. J.: Pointnet: Deep learning on point sets for 3d classification and segmentation. In CVPR (2017).
 [SCMN13] Socher R., Chen D., Manning C. D., Ng A. Y.: Reasoning with neural tensor networks for knowledge base completion. In NIPS (2013).
 [SFCH12] Shen C.-H., Fu H., Chen K., Hu S.-M.: Structure recovery by part assembly. In SIGGRAPH Asia (2012).
 [SKAG15] Sung M., Kim V. G., Angst R., Guibas L.: Data-driven structural priors for shape completion. In SIGGRAPH Asia (2015).
 [SKP15] Schroff F., Kalenichenko D., Philbin J.: Facenet: A unified embedding for face recognition and clustering. In CVPR (2015).
 [SMKLM15] Su H., Maji S., Kalogerakis E., Learned-Miller E.: Multi-view convolutional neural networks for 3d shape recognition. In ICCV (2015).
 [SSK17] Sung M., Su H., Kim V. G., Chaudhuri S., Guibas L.: Complementme: Weakly-supervised component suggestions for 3d modeling. In SIGGRAPH Asia (2017).
 [Tri17] Trimble: 3d warehouse, 2017. URL: https://3dwarehouse.sketchup.com/.
 [VKFU16] Vendrov I., Kiros R., Fidler S., Urtasun R.: Order-embeddings of images and language. In ICLR (2016).
 [vKXZ13] van Kaick O., Xu K., Zhang H., Wang Y., Sun S., Shamir A., Cohen-Or D.: Co-hierarchical analysis of shape structures. In SIGGRAPH (2013).
 [WZX16] Wu J., Zhang C., Xue T., Freeman W. T., Tenenbaum J. B.: Learning a probabilistic latent space of object shapes via 3d generative-adversarial modeling. In NIPS (2016).
 [XZCOC12] Xu K., Zhang H., Cohen-Or D., Chen B.: Fit and diverse: Set evolution for inspiring 3d shape galleries. In SIGGRAPH (2012).
 [YKC16] Yi L., Kim V. G., Ceylan D., Shen I.-C., Yan M., Su H., Lu C., Huang Q., Sheffer A., Guibas L.: A scalable active framework for region annotation in 3d shape collections. In SIGGRAPH Asia (2016).
 [Zad65] Zadeh L.: Fuzzy sets. Information and Control (1965).
 [Zim01] Zimmermann H.-J.: Fuzzy Set Theory and Its Applications. Springer, 2001.
Appendix
Proposition 1
Proposition 2