1 Introduction
As big geometric data is becoming more available (e.g., from fast and commodity 3D sensing and crowdsourced shape modeling), the interest in processing 3D shapes and scenes has been shifting towards data-driven techniques. These techniques leverage data to facilitate high-level shape understanding, and use this analysis to build effective tools for modeling, editing, and visualizing geometric data. In general, these methods start by discovering patterns in the geometry and structure of shapes, and then relate them to high-level concepts, semantics, function, and models that explain those patterns. The learned patterns serve as strong priors in various geometry processing applications. In contrast to traditional approaches, data-driven methods analyze a set of shapes jointly to extract and model meaningful mappings and correlations in the data, and learn priors directly from the data instead of relying on hard-coded rules or explicitly programmed instructions.
The idea of utilizing data to support geometry processing has been exploited and practiced for many years. However, most existing works based on this idea are confined to the example-based paradigm, and thus mostly leverage only one core concept of data-driven techniques: information transfer. Typically, the input to these problems includes one or multiple exemplar shapes with prescribed or precomputed information of interest, and a target shape that needs to be analyzed or processed. These techniques usually establish a correlation between the source and target shapes and transfer the information of interest from the source to the target. Applications of such approaches include a variety of methods in shape analysis (e.g., [SY07]) and shape synthesis (e.g., [Mer07, MHS14]).
As the number of available 3D shapes grows significantly, geometry processing techniques supported by these data undergo a fundamental change. Several new concepts emerge in addition to information transfer, opening up space for developing new techniques for shape analysis and content creation. In particular, the rich variability of 3D content in existing shape repositories makes it possible to directly reuse shapes or parts for constructing new 3D models [FKS04]. Content reuse for 3D modeling is perhaps the most straightforward application of big 3D geometric data, providing a promising approach to the challenging problem of 3D content creation. In addition, high-level understanding of shapes can benefit from co-analyzing collections of shapes. Several analysis tools demonstrate that shape analysis is more reliable if it is supported by observing certain attributes across a set of semantically related shapes instead of a single object. Co-analysis requires the critical step of finding the correlations between the multiple shapes in the input set, which is substantially different from building pairwise correlations. A key concept in co-analysis is the consistency of the correlations across the entire set, which has both semantic [KHS10, SvKK11, WAvK12] and mathematical [HG13] justifications.
Relation to knowledge-driven shape processing.
Prior to the emergence of data-driven techniques, high-level shape understanding and modeling was usually achieved with knowledge-driven methods. In the knowledge-driven paradigm, geometric and structural patterns are extracted and interpreted with the help of explicit rules or hand-crafted parameters. Examples include heuristics-based shape segmentation [Sha08] and procedural shape modeling [MWH06]. Although these approaches have found some empirical success, they exhibit several inherent limitations. First, it is extremely hard to hard-code explicit rules and heuristics that can handle the enormous geometric and structural variability of 3D shapes and scenes in general. As a result, knowledge-driven techniques are unlikely to generalize successfully to large and diverse shape collections. Another issue is that it is usually hard for non-expert users to interact with knowledge-driven techniques that require “low-level” geometric parameters or instructions as input.

In contrast to knowledge-driven methods, data-driven techniques learn representations and parameters from data. They usually do not depend on hard-coded prior knowledge and consequently do not rely on hand-crafted parameters, which makes these techniques more data-adaptive and leads to significantly improved performance in many practical settings. The success of data-driven approaches, backed by machine learning techniques, relies heavily on the accessibility of large data collections. We have witnessed how increasing the training set by orders of magnitude can significantly improve the performance of common machine learning algorithms [BB01]. Thus, recent developments in 3D modeling tools and acquisition techniques for 3D geometry, as well as the availability of large repositories of 3D shapes (e.g., Trimble 3D Warehouse, Yobi3D, etc.), offer great opportunities for developing data-driven approaches for 3D shape analysis and processing.
Relation to structure-aware shape processing.
This report is closely related to the recent survey on “structure-aware shape processing” by Mitra and co-workers [MWZ14], which concentrates on techniques for the structural analysis of 3D shapes, as well as high-level shape processing guided by structure preservation. In that survey, shape structure is defined as the arrangement of and relations between shape parts, which is analyzed by identifying shape parts, part parameters, and part relations. Each of the three can be determined through manual assignment, predefined model fitting, or data-driven learning.
In contrast, our report takes a very different perspective: how the availability of big geometric data has changed the field of shape analysis and processing. In particular, we want to highlight several key distinctions. First, data-driven shape processing goes beyond structure analysis. For example, leveraging large shape collections may benefit a wider variety of problems in shape understanding and processing, such as parametric modeling of shape space [ACP03], hypothesis generation for object and scene understanding [ZSSS13, SLH12], and information transfer between multi-modal data [WGW13, SHM14]. Data-driven shape processing may also exploit data-centered techniques in machine learning such as sparse representation [RR13] and feature learning [LBF13], which are not preconditioned on any domain-specific or structural prior beyond the raw data. Second, even within the realm of structure-aware shape processing, data-driven approaches are arguably becoming the dominant branch due to their theoretical and practical advantages, the availability of large shape repositories, and recent developments in machine learning.

Vision and motivation.
With the emergence of “big data”, many scientific disciplines have shifted their focus to data-driven techniques. Although 3D geometry data is still far from being as ubiquitous as some other data formats (e.g., photographs), the rapidly growing number of 3D models, recent developments in fusing 2D and 3D data, and the invention of commodity depth cameras have made the era of “big 3D data” more promising than ever. At the same time, we expect data-driven approaches to take one of the leading roles in the understanding and reconstruction of acquired 3D data, as well as the synthesis of new shapes. In summary, data-driven geometry processing will close the loop from acquisition, analysis, and processing to the generation of 3D shapes (see Figure 1), and will serve as a key tool for manipulating big visual data.
Recent years have witnessed rapid development of data-driven geometry processing algorithms in both the computer graphics and computer vision communities. Given the research efforts and wide interest in the subject, we believe many researchers would benefit from a comprehensive and systematic survey. We also hope that such a survey can stimulate new theories, problems, and applications.
Organization.
This survey is organized as follows. Section 2
gives a high-level overview of data-driven approaches and classifies data-driven methods with respect to their application domains. This section also provides two representative examples to help the reader understand the general workflow of data-driven geometry processing. The following sections survey various data-driven shape processing problems in detail. Finally, we conclude by listing a set of key challenges and providing a vision of future directions.
Accompanying online resources.
In order to assist readers in learning and leveraging the basic algorithms, we provide an online wiki page [XKHK14], which collects tools, source code, and benchmark data for typical problems and applications of data-driven shape processing. This page also provides links and data mining tools for obtaining large data collections of shapes and scenes. The website is intended to serve as a starting point for those conducting research in this direction; we also expect it to benefit a wide spectrum of researchers from related fields.
2 Overview
In this section, we provide a high-level overview of the main components and steps of data-driven approaches for processing 3D shapes and scenes. Although the pipelines of these methods vary significantly depending on their particular applications and goals, a number of components tend to be common: input data collection and processing, data representations and feature extraction, and learning and inference. Representation, learning, and inference are critical components of machine learning approaches in general [KF09]. In the case of shape and scene processing, each of these components poses several interesting and unique problems when dealing with 3D geometric data. These problems have greatly motivated the research on data-driven geometry processing, and in turn brought new challenges to the computer vision and machine learning communities, as reflected by the increasing interest in 3D visual data from these fields. Below, we discuss particular characteristics and challenges of data-driven 3D shape and scene processing algorithms. Figure 2 provides a schematic overview of the most common components of these algorithms.
2.1 3D data collection
Shape representation. A main component of data-driven approaches for shape and scene processing is data collection, where the goal is to acquire a number of 3D shapes and scenes, depending on the application. When shapes and scenes are captured with scanners or depth sensors, their initial representation is in the form of range data or unorganized point clouds. Several data-driven methods for reconstruction, segmentation, and recognition work directly on these representations and do not require any further processing. On the other hand, online repositories, such as the Trimble 3D Warehouse, contain millions of shapes and scenes that are represented as polygon meshes. A large number of data-driven techniques are designed to handle complete shapes in the form of polygon meshes created by 3D modeling tools or reconstructed from point clouds. Choosing which representation to use depends on the application. For example, data-driven reconstruction techniques aim to generate complete shapes and scenes from noisy point clouds with missing data. The reconstructed shapes can then be processed with other data-driven methods for categorization, segmentation, matching, and so on. Developing methods that can handle any 3D data representation, as well as jointly reconstruct and analyze shapes, is a potential direction for future research, which we discuss in Section 10.
When polygon meshes are used as the input representation, an important aspect to consider is whether and how data-driven methods will deal with possible “defects”, such as non-manifold and non-orientable sets of polygons, inverted faces, isolated elements, self-intersections, holes, and topological noise. The vast majority of meshes available in online repositories have these problems. Although there are a number of mesh repairing tools (see [CAK12] for a survey), they may not handle all different types of “defects”, and can take a significant amount of time to process each shape in large datasets. To avoid the issues caused by these “defects”, some data-driven methods uniformly sample the input meshes and work on the resulting point-based representation instead (e.g., [CKGK11, KLM13]).
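As a rough illustration of this sampling strategy, the following sketch draws points on a triangle mesh with density proportional to triangle area, so the result is insensitive to irregular tessellation and most mesh “defects”. This is a minimal version written for this survey; function and variable names are our own and do not correspond to any of the cited systems.

```python
import numpy as np

def sample_mesh(vertices, faces, n_samples, rng=None):
    """Uniformly sample points on a triangle mesh, area-weighted.

    vertices: (V, 3) float array; faces: (F, 3) int array of vertex indices.
    Returns an (n_samples, 3) array of surface points.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    # Triangle areas via the cross product.
    areas = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1)
    # Pick triangles with probability proportional to area.
    tri = rng.choice(len(faces), size=n_samples, p=areas / areas.sum())
    # Uniform barycentric coordinates inside each chosen triangle.
    r1, r2 = rng.random(n_samples), rng.random(n_samples)
    s = np.sqrt(r1)
    u, v = 1.0 - s, s * (1.0 - r2)
    w = 1.0 - u - v
    return u[:, None] * v0[tri] + v[:, None] * v1[tri] + w[:, None] * v2[tri]
```

The square-root mapping of the barycentric coordinates is what keeps the samples uniform inside each triangle, rather than clustered near one vertex.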
Datasets.
Although it is desirable to develop data-driven methods that can learn from a handful of training shapes or scenes, this is generally a challenging problem in machine learning [FFFP06]. Several data-driven methods in computer vision have been particularly successful due to the use of very large datasets that can reach the size of several millions of images [TFF08]. In contrast, data-driven approaches for 3D shape and scene processing have so far mostly relied on datasets on the order of a few thousand models (e.g., the Princeton Shape Benchmark [SMKF04], or datasets collected from the web [KLM13]). Online repositories contain large numbers of shapes, which can lead to the development of methods that leverage datasets orders of magnitude larger than the ones currently used. Another possibility is to develop synthetic datasets. A notable example is the pose and part recognition algorithm used in Microsoft’s Kinect, which relies on 500K synthesized shapes of human bodies in different poses [SFC11]. In general, large datasets are important for capturing the enormous 3D shape and scene variability, and can significantly increase the predictive performance and usability of learning methods. A more comprehensive summary of existing online data collections can be found on our wiki page [XKHK14].
2.2 3D data processing and feature representation
It is common to perform some additional processing on the input representations of shapes and scenes before executing the main learning step. The reason is that the input representations of 3D shapes and scenes can have different resolutions (e.g., number of points or faces), scale, orientation, and structure. In other words, the input shapes and scenes do not initially have any type of common parameterization or alignment. This is significantly different from other domains, such as natural language processing or vision, where text or image datasets frequently come with a common parameterization beforehand (e.g., images with the same number of pixels and objects of consistent orientation).
To achieve a common parameterization of the input shapes and scenes, one popular approach is to embed them in a common geometric feature space. For this purpose a variety of shape descriptors have been developed. These descriptors can be classified into two main categories: global shape descriptors that convert each shape to a feature vector, and local shape descriptors that convert each point to a feature vector. Examples of global shape descriptors are Extended Gaussian Images [Hor84], 3D shape histograms [AKKS99, CK10a], spherical functions [SV01], light-field descriptors [CTSO03], shape distributions [OFCD02], symmetry descriptors [KFR04], spherical harmonics [KFR03], 3D Zernike moments [NK03], and bags-of-words created out of local descriptors [BBOG11]. Local shape descriptors include surface curvature, PCA descriptors, local shape diameter [SSCO08], shape contexts [BMP02, KHS10, KBLB12], spin images [JH99], geodesic distance features [ZMT05], heat-kernel descriptors [BBOG11], and depth features [SFC11]. Global shape descriptors are particularly useful for shape classification, retrieval, and organization. Local shape descriptors are useful for partial shape matching, segmentation, and point correspondence estimation. Before using any type of global or local descriptor, it is important to consider whether the descriptor should be invariant to different shape orientations, scales, or poses. In the presence of noise and irregular mesh tessellations, it is important to estimate local descriptors robustly, since surface derivatives are particularly susceptible to surface and sampling noise [KSNS07]. It is also common to use several different descriptors and let the learning step decide which ones are more relevant for each class of shapes [KHS10].

A promising future direction is to develop data-driven methods that learn feature representations from raw 3D geometric data, inspired by the recent surge of interest in deep learning [Ben09]. A similar direction is already being explored in computer vision for 2D images [YN10]. In 3D, some works attempt feature learning on volumetric representations of 3D shapes, essentially treating them as 3D images [LBF13]. A more popular approach is to apply deep learning directly to the raw RGB-D data captured by a depth camera [SHB12, BSWR12, BRF14].

Instead of embedding shapes in a common geometric feature space, several methods try to directly align shapes in Euclidean space. We refer the reader to the survey on dynamic geometry processing for a tutorial on rigid and non-rigid registration techniques [CLM12]. An interesting extension of these techniques is to include the alignment process in the learning step of data-driven methods, since it is interdependent with other shape analysis tasks such as shape segmentation and correspondence [KLM13].
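As a concrete illustration of one of the simplest global descriptors mentioned above, the D2 shape distribution [OFCD02] can be sketched as a histogram of distances between random pairs of surface samples. This is a minimal version written for this survey; normalization choices such as the bin range are our own, not prescribed by the original paper.

```python
import numpy as np

def d2_descriptor(points, n_pairs=10000, n_bins=64, rng=None):
    """D2 shape distribution: a global descriptor built from the
    histogram of distances between random point pairs [OFCD02].

    points: (N, 3) array of surface samples. Returns an n_bins vector
    normalized to sum to 1; dividing distances by their mean makes the
    descriptor scale-invariant.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    i = rng.integers(0, len(points), n_pairs)
    j = rng.integers(0, len(points), n_pairs)
    d = np.linalg.norm(points[i] - points[j], axis=1)
    d = d / d.mean()                       # scale invariance
    hist, _ = np.histogram(d, bins=n_bins, range=(0.0, 3.0))
    return hist / hist.sum()
```

Two shapes can then be compared simply by the L1 or L2 distance between their D2 vectors, which is what makes such descriptors convenient for retrieval and classification.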
Some data-driven methods require additional processing steps on the input. For example, learning deformation handles or fully generative models of shapes usually relies on segmenting the input shapes into parts with automatic algorithms [HKG11, SvKK11] and representing these parts with surface abstractions [YK12] or descriptors [KCKK12]. To decrease the amount of computation required during learning, it is also common to represent the shapes as a set of patches (super-faces) [HKG11], inspired by the computation of superpixels in image segmentation.
2.3 Learning and Inference
The processed representations of shapes and scenes are used to perform learning and inference for a variety of applications: shape classification, segmentation, matching, reconstruction, modeling, synthesis, and scene analysis and synthesis. The learning procedures vary significantly depending on the application, so we discuss them individually in each of the following sections on these applications. As a common theme, learning is viewed as an optimization problem over a set of variables representing geometric, structural, semantic, or functional properties of shapes and scenes. There are usually one or more objective (or loss) functions that quantify preferences for different models or patterns governing the 3D data. After learning a model from the training data, inference procedures are used to predict values of variables for new shapes or scenes. Again, the inference procedures vary depending on the application and are discussed separately in the following sections. Inference is commonly an optimization problem itself, and is sometimes part of the learning process when there are latent variables or partially observed input shape or scene data.
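As a toy illustration of this “learning as optimization” view, the following sketch fits the parameters of a linear model by gradient descent on a squared loss. Real data-driven pipelines use application-specific objectives; the loss, learning rate, and names here are purely illustrative.

```python
import numpy as np

def fit_linear(X, y, lr=0.1, steps=500):
    """Fit weights theta by gradient descent on the squared loss
    L(theta) = ||X @ theta - y||^2 / n, a toy stand-in for the
    application-specific objectives discussed in the text.

    X: (n, d) feature matrix; y: (n,) targets. Returns theta (d,).
    """
    theta = np.zeros(X.shape[1])
    for _ in range(steps):
        # Gradient of the mean squared loss with respect to theta.
        grad = (2.0 / len(y)) * (X.T @ (X @ theta - y))
        theta -= lr * grad
    return theta
```

Once `theta` is learned, inference amounts to evaluating `X_new @ theta` on unseen data, mirroring the learn-then-infer split described above.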
A general classification of the different types of algorithms used in data-driven approaches for shape and scene processing can be derived from the type of input information available during learning:

Supervised learning algorithms are trained on a set of shapes or scenes annotated with labeled data. For example, in the case of shape classification, the labeled data can have the form of tags, while in the case of segmentation, the labeled data have the form of segmentation boundaries or part labels. The labeled data can be provided by humans or generated synthetically. After learning, the learned models are applied to new sets of shapes (test shapes) to produce results relevant to the task.

Unsupervised algorithms co-analyze the input shapes or scenes without any additional labeled data, i.e., the desired output is unknown beforehand. The goal of these methods is to discover correlations in the geometry and structure of the input shape or scene data. For example, unsupervised shape segmentation methods usually perform some type of clustering in the feature space of points or patches belonging to the input shapes.

Semi-supervised algorithms make use of shapes (or scenes) both with and without labeled data. Active learning is a special case of semi-supervised learning in which the learning algorithm interactively queries the user to obtain desired outputs for additional data points.
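For instance, the clustering step mentioned for unsupervised segmentation can be illustrated with plain k-means over per-point descriptors. This is a generic sketch, not the algorithm of any particular cited paper; the deterministic farthest-point initialization is our own choice for simplicity.

```python
import numpy as np

def kmeans(features, k, iters=50):
    """Plain k-means over per-point (or per-patch) feature vectors.

    features: (N, D) array of descriptors. Returns (labels, centers),
    where labels[i] is the cluster index of feature i.
    """
    # Deterministic farthest-point initialization.
    centers = [features[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(features - c, axis=1) for c in centers], axis=0)
        centers.append(features[int(d.argmax())])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        # Assignment step: nearest center per feature vector.
        dists = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: recompute each center as the mean of its cluster.
        for c in range(k):
            if np.any(labels == c):
                centers[c] = features[labels == c].mean(axis=0)
    return labels, centers
```

In an unsupervised co-segmentation setting, the resulting clusters would play the role of (unnamed) part categories shared across the input collection.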
In general, supervised methods tend to output results that are closer to what a human would expect given the provided labeled data; however, they may fail to produce desirable results when the training shapes (or scenes) are geometrically and structurally dissimilar from the test shapes (or scenes). They also tend to require a substantial amount of labeled information as input, which can become a significant burden for the user. Unsupervised methods can deal with collections of shapes and scenes of larger variability and require no human supervision. However, they sometimes require parameter tuning to yield the desired results. Semi-supervised methods represent a trade-off between supervised and unsupervised methods: compared to unsupervised methods they give the user more direct control over the desired result, and compared to supervised methods they often produce considerable improvements by making use of both labeled and unlabeled shapes or scenes.
The data-driven loop.
An advantageous feature of data-driven shape processing is that the output data produced by learning and inference typically come with rich semantic information. For example, data-driven shape segmentation produces parts with semantic labels [KHS10]; data-driven reconstruction is commonly coupled with semantic part or shape recognition [SFCH12, NXS12]; and data-driven shape modeling can generate readily usable shapes that inherit the semantic information of the input data [XZZ11]. These processed and generated data can be used to enrich existing shape collections with both training labels and reusable content, which in turn benefits subsequent learning. In a sense, data-driven methods close the loop of data generation and data analysis for 3D shapes and scenes; see Figure 2. This concept has been practiced in several prior works, such as the data-driven shape reconstruction framework proposed in [PMG05] (Figure 11).
Pipeline example.
To help the reader grasp the pipeline of data-driven methods, a schematic overview of its components is given in Figure 2. Depending on the particular application, the pipeline can have several variations, or some components might be skipped. We discuss the main components and steps of the algorithms for each application in more detail in the following sections. A didactic example of the pipeline in the case of supervised shape segmentation is shown in Figure 3. The input shapes are annotated with labeled part information. A geometric descriptor is extracted for each point on the training shapes, and the points are embedded in a common feature space. The learning step uses a classification algorithm that non-linearly separates the input space into a set of regions corresponding to part labels, in order to optimize classification performance (more details are provided in Section 4). Given a test shape, a probabilistic model is used to infer part labels for each point on that shape based on its geometric descriptor in the feature space.
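The labeling step of this pipeline can be caricatured with a nearest-neighbor classifier in feature space. Note that actual supervised segmentation systems (e.g., [KHS10]) use far more powerful classifiers inside probabilistic models; this 1-NN stand-in, with names of our own choosing, only illustrates the data flow from labeled training points to predictions on a test shape.

```python
import numpy as np

def nn_segment(train_feats, train_labels, test_feats):
    """Assign each test point the part label of its nearest training
    point in feature space (a 1-NN stand-in for a learned classifier).

    train_feats: (N, D) descriptors of labeled training points;
    train_labels: (N,) part labels; test_feats: (M, D) descriptors
    of the test shape's points. Returns (M,) predicted labels.
    """
    # (M, N) matrix of pairwise feature-space distances.
    d = np.linalg.norm(test_feats[:, None, :] - train_feats[None, :, :], axis=2)
    return train_labels[d.argmin(axis=1)]
```

In practice the per-point predictions would then be smoothed with the structured models discussed in Section 4, rather than used directly.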
2.4 A comparative overview
Before reviewing the related works in detail under the various applications, we provide a comparative overview of the entire body of work reviewed in this survey (see Table 4), correlating these methods under a set of criteria for data-driven approaches to shape analysis and processing:

Training data. We are concerned with the representation, preprocessing, and scale of the training data. Note that once a model is learned from the training data, it can be used for inference on test data of a different modality. For single shapes, the most commonly adopted representations are mesh models and point clouds. 3D scenes are typically represented as arrangements of individual objects (mesh models). Preprocessing may include pre-segmentation, over-segmentation, pre-alignment, initial correspondence, and labeling.

Feature. Roughly speaking, there are two types of features involved in data-driven shape processing. The most commonly used features are low-level ones, such as local geometric features (e.g., local curvature) and global shape descriptors (e.g., shape distributions [OFCD02]). If the input shapes are pre-segmented into meaningful parts, high-level structural features (spatial relationships) can be derived. Generally, working with high-level features enables the learning of more powerful models for more advanced inference tasks, such as structural analysis [MWZ14], on more complex data such as man-made objects and scenes.

Learning model/approach. The specific choice of learning method is application-dependent. In most cases, machine learning approaches are adapted to geometric data via feature extraction. For some problems, such as shape correspondence, the core problem is to extract geometric correlations between different shapes in an unsupervised manner, which itself can be seen as a learning problem specific to geometry processing.

Learning type. As discussed above, there are three basic types of data-driven approaches, depending on the availability of labeled training data: supervised, semi-supervised, and unsupervised.

Learning outcome.
Learning may produce a parametric or non-parametric model (classifier, clustering, regressor, etc.) used for inference, a learned distance metric that can be utilized for further analysis, and/or feature representations learned from raw data.

Application. The main applications of data-driven shape analysis and processing are: classification, segmentation, correspondence, modeling, synthesis, reconstruction, exploration, and organization.
3 Shape Classification
Data-driven techniques commonly make assumptions about the size and homogeneity of the input data set. In particular, existing analysis techniques often assume that all models belong to the same class of objects [KLM13] or scenes [FSH11], and cannot directly scale to an entire repository such as the Trimble 3D Warehouse [Tri14]. Similarly, techniques for data-driven reconstruction of indoor environments assume that the input data set contains only furniture models [NXS12], while modeling and synthesis interfaces restrict the input data to particular object or scene classes [CKGK11, KCKK12, FRS12]. Thus, as a first step, these methods query a 3D model repository to retrieve a subset of relevant models.
Most public shape repositories, such as the 3D Warehouse [Tri14], rely on users to provide tags and names for the shapes, with few additional quality-control measures. As a result, the shapes are sparsely labeled with inconsistent and noisy tags. This motivates the development of automatic algorithms to infer text associated with models. Existing work focuses on establishing class memberships for an entire shape (e.g., this shape is a chair), as well as inferring finer-scale attributes (e.g., this chair has a rocking leg).
Classification
methods assign a class membership to unlabeled shapes. One approach is to retrieve, for each unlabeled shape, the most similar shape from a database of 3D models with known shape classes. A large number of shape descriptors have been proposed in recent years that can be used in such a retrieval task; we refer to the survey of Tangelder et al. [TV08] for a thorough overview. One can further improve classification results by leveraging machine learning techniques to learn classifiers based on global shape descriptors [FHK04, GKF09]. Barutcuoglu et al. [BD06] demonstrate that Bayesian aggregation can be used to improve classification of shapes that are part of a hierarchical ontology of objects. Bronstein et al. [BBOG11] leverage “bags of features” to learn powerful descriptor-space metrics for non-rigid shapes. These techniques can be further improved by using sparse coding techniques [LBBC14].
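A minimal sketch of the retrieval-based classification idea follows, using cosine similarity over global descriptors and a majority vote among the k nearest database shapes. The descriptor choice, similarity measure, and names are our own illustrative assumptions, not taken from any particular cited method.

```python
import numpy as np

def classify_by_retrieval(db_descs, db_classes, query_desc, k=5):
    """Assign a class to a query shape by majority vote over its k
    most similar database shapes in global-descriptor space.

    db_descs: (N, D) global descriptors of database shapes;
    db_classes: (N,) array of class names; query_desc: (D,) descriptor.
    """
    # Cosine similarity between the query and every database shape.
    sim = (db_descs @ query_desc) / (
        np.linalg.norm(db_descs, axis=1) * np.linalg.norm(query_desc))
    top = np.argsort(-sim)[:k]             # indices of k nearest shapes
    classes, counts = np.unique(db_classes[top], return_counts=True)
    return classes[counts.argmax()]        # majority vote
```

Replacing the descriptor or the vote with a learned classifier and metric is precisely what the machine learning approaches cited above contribute.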
Tag attributes
often capture fine-scale attributes of shapes that belong to the same class. These attributes can include the presence or absence of particular parts, object style, or comparative adjectives. Huang et al. [HSG13] developed a framework for propagating these attributes in a collection of partially annotated 3D models. For example, only the brown models in Figure 4 were labeled, while the blue models were annotated automatically. To achieve automatic labeling, they start by co-aligning all models to a canonical domain and generate a voxel grid around the co-aligned models. For each voxel they compute local shape features, such as spin images, for each shape. Then, they learn a distance metric that best discriminates between the different tags. All shapes are finally embedded in a weighted feature space where nearest neighbors are connected in a graph. Graph-cut clustering is used to assign tags to unlabeled shapes.
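[HSG13] combines metric learning with graph-cut clustering; a much-simplified illustration of the same underlying idea propagates tags over a k-nearest-neighbor graph in feature space by iterative majority voting. This is our own toy variant for exposition, not their algorithm.

```python
import numpy as np

def propagate_tags(feats, tags, k=3, iters=10):
    """Spread tags from labeled to unlabeled shapes over a k-NN graph.

    feats: (N, D) shape descriptors; tags: (N,) int array where -1
    marks unlabeled shapes. Labeled shapes keep their tags; each
    unlabeled shape repeatedly takes the majority tag of its k
    nearest neighbors in feature space. Returns the filled-in tags.
    """
    d = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)            # a shape is not its own neighbor
    nbrs = np.argsort(d, axis=1)[:, :k]
    out = tags.copy()
    for _ in range(iters):
        for i in range(len(feats)):
            if tags[i] == -1:              # labeled shapes stay fixed
                votes = out[nbrs[i]]
                votes = votes[votes >= 0]  # ignore still-unlabeled neighbors
                if votes.size:
                    out[i] = np.bincount(votes).argmax()
    return out
```

The learned distance metric in [HSG13] effectively replaces the plain Euclidean distance used here, so that neighbors agree on the attribute being propagated.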
While the above method works well for discrete tags, it does not capture more continuous relations, such as “animal A is more dangerous than animal B”. Chaudhuri et al. [CKG13] focus on estimating rankings based on comparative adjectives. They ask people to compare pairs of shape parts with respect to different adjectives, and use a support vector machine ranking method to predict attribute strengths from shape features for novel shapes (Figure 5).

While the techniques described above are suitable for retrieving related models, most of the described methods are not designed to understand intra-class variations. Usually a more involved structural analysis is necessary to understand higher-level semantic properties of shapes. Even for inferring tag attributes, existing work relies on shape matching [HSG13] or shape segmentation [CKG13]. The following two sections focus on inferring these higher-level structural properties in collections of shapes.
| Segmentation method | Learning type | Type of manual input | PSB Rand index (# train. shapes if applicable) | LPSB accuracy (# train. shapes if applicable) | COSEG accuracy |
|---|---|---|---|---|---|
| [KHS10] | supervised | labeled shapes | 9.4% (19) / 14.8% (3) | 95.3% (19) / 89.2% (3) | unknown |
| [BLVD11] | supervised | segmented shapes | 8.8% (19) / 9.7% (6) | not applicable | not applicable |
| [HKG11] | unsupervised | none | 10.1% | not applicable | not applicable |
| [SvKK11] | unsupervised | none | unknown | unknown | 87.7% |
| [vKTS11] | supervised | labeled shapes | unknown | ~88.7% (12), see caption | unknown |
| [HFL12] | unsupervised | none | unknown | 88.5% | 91.4% |
| [LCHB12] | semi-supervised | labeled shapes | unknown | 92.3% (3) | unknown |
| [WAvK12] | semi-supervised | link constraints | unknown | unknown | ‘close to error-free’ |
| [WGW13] | supervised | labeled images | unknown | ~88.0% (19), see caption | unknown |
| [KLM13] | semi-/unsupervised | box templates | unknown | unknown | 92.7% (semi-superv.) |
| [HWG14] | unsupervised | none | unknown | unknown | 90.1% |
| [XSX14] | supervised | labeled shapes | 10.0% | 86.0% | unknown |
| [XXLX14] | supervised | labeled shapes | 10.2% (19) | 94.2% (19) / 88.6% (5) | unknown |
4 Data-driven Shape Segmentation
The goal of data-driven shape segmentation is to partition the shapes of an input collection into parts, and also to estimate part correspondences across these shapes. We organize the literature on shape segmentation into the following three categories: supervised segmentation, unsupervised segmentation, and semi-supervised segmentation, following the main classification discussed in Section 2. Table 1 summarizes representative techniques and reports their segmentation and part labeling performance based on established benchmarks. Table 2 reports characteristic running times for the same techniques.
4.1 Supervised shape segmentation
Classification techniques.
Supervised shape segmentation is frequently formulated as a classification problem. Given a training set of shapes containing points, faces or patches that are labeled according to a part category (see Figure 3), the goal of a classifier is to identify which part category other points, faces, or patches from different shapes belong to. Supervised shape segmentation is executed in two steps: during the first step, the parameters of the classifier are learned from the training data. During the second step, the classifier is applied on new shapes. A simple linear classifier has the form:
c = f(\theta \cdot x)    (1)
where x is the geometric feature vector of a point (face, or patch), such as the features discussed in Section 2, and the parameters \theta serve as weights for each geometric feature. The function f is nonlinear and maps to a discrete value (label), which is a part category, or to probabilities per category. In general, choosing a good set of geometric features that help predict part labels, and employing classifiers that can discriminate the input data points correctly, are important design choices. There is no rule of thumb for which classifier is best for a given problem: this depends on the underlying distribution and characteristics of the input geometric features, their dimensionality, the amount of labeled data, the existence of noise in the labeled data or shapes, and training and test time constraints. For a related discussion on how to choose a classifier for a problem, we refer the reader to
[MRS08]. Due to the large dimensionality and complexity of geometric feature spaces, nonlinear classifiers are more commonly used. For example, to segment human bodies into parts and recognize poses, Microsoft's Kinect uses a random forest classifier trained on synthetic depth images of humans of many shapes and sizes in highly varied poses sampled from a large motion capture database [SFC11] (Figure 6).
Structured models.
For computer graphics applications, it is important to segment shapes with accurate and smooth boundaries. For example, to help the user create a new shape by recombining parts from other shapes [FKS04], irregular and noisy segmentation boundaries can cause problems in part attachment. In this respect, applying a classifier per point/face independently is usually not enough. Thus, it is more common to formulate the shape segmentation problem as an energy minimization problem that involves a unary term assessing the consistency of each point/face with each part label, as well as a pairwise term assessing the consistency of neighboring points/faces with pairs of labels. For example, pairs of points that have low curvature (i.e., lie on a flat surface) are more likely to have the same part label. This energy minimization formulation has been used in several single-shape and data-driven segmentations (unsupervised or supervised) [KT03, ATC05, SSSss, KHS10]. In the case of supervised segmentation [KHS10], the energy can be written as:
E(c; \theta) = \sum_i E_{unary}(c_i; x_i, \theta_1) + \sum_{i,j} E_{pairwise}(c_i, c_j; y_{ij}, \theta_2)    (2)
where c is a vector of random variables representing the part label c_i per point (or face) i, x_i is its geometric feature vector, i and j are indices to points (or faces) that are considered neighbors, y_{ij} is a geometric feature vector representing the dihedral angle, angle between normals, or other features, and \theta = \{\theta_1, \theta_2\} are the energy parameters. The important difference of supervised data-driven methods from previous single-shape segmentation methods is that the parameters \theta are automatically learned from the training shapes to capture complex feature space patterns per part [ATC05, KHS10]. We also note that the above energy of Equation 2, when written in an exponentiated form and normalized, can be treated as a probabilistic graphical model [KF09], called a Conditional Random Field [LMP01], that represents the joint probability distribution over part labels conditioned on the input features:
P(c | x, y; \theta) = \frac{1}{Z(x, y; \theta)} \exp(-E(c; \theta))    (3)
where Z(x, y; \theta) is a normalization factor, also known as the partition function. Minimizing the energy of Equation 2, or correspondingly finding the assignment c that maximizes the above probability distribution, is known as a Maximum A Posteriori inference problem that can be solved in various manners, such as graph cuts, belief propagation, variational, or linear programming relaxation techniques [KF09].
The parameters \theta can be jointly learned through maximum likelihood (ML) or maximum a posteriori (MAP) estimates [KF09]. However, due to the high computational complexity of ML or MAP learning and the nonlinearity of classifiers used in shape segmentation, it is common to train the parameters \theta_1 and \theta_2 of the model separately, i.e., to train the classifiers of the unary and pairwise terms separately [SM05]. The exact form of the unary and pairwise terms varies across supervised shape segmentation methods: the unary term can have the form of a log-linear model [ATC05], a cascade of JointBoost classifiers [KHS10], GentleBoost [vKTS11], or feedforward neural networks [XXLX14]. The pairwise term can have the form of a learned log-linear model [ATC05], a label-dependent GentleBoost classifier [KHS10], or a smoothness term based on dihedral angles and edge length tuned by experimentation [SSSss, vKTS11, XXLX14]. Again, the form of the unary and pairwise terms depends on the amount of training data, the dimensionality and underlying distribution of the geometric features used, and computational cost.
Table 2:
segmentation method | reported running times | dataset size for reported running times | reported processor
[KHS10] | 8h train. / 5 min test. | 6 train. shapes / 1 test shape | Intel Xeon E5355 2.66GHz
[BLVD11] | 10 min train. / 1 min test. | unknown for train. / 1 test shape | Intel Core 2 Duo 2.99GHz
[HKG11] | 32h | 380 shapes | unknown, 2.4GHz
[SvKK11] | 10 min | 30 shapes | AMD Opteron 2.4GHz
[vKTS11] | 10h train. / few min test. | 20-30 train. shapes / 1 test shape | AMD Opteron 1GHz
[HFL12] | 8 min (excl. feat. extr.) | 20 shapes | Intel dual-core 2.93GHz
[LCHB12] | 7h train. / few min test. | 20 shapes | Intel i7 2600 3.4GHz
[WAvK12] | 7 min user interaction | 28 shapes | unknown
[WGW13] | 1.5 min (no train. step) | 1 test shape | unknown
[KLM13] | 11h | 7442 shapes | unknown
[HWG14] | 33h | 8401 shapes | unknown, 3.2GHz
[XSX14] | 30 sec (no train. step) | 1 test shape | Intel i5 CPU
[XXLX14] | 15 sec train. (excl. feat. extr.) | 6 train. shapes | Intel Quad-Core 3.2GHz
Joint labeling.
Instead of applying the learned probabilistic model to a single shape, an alternative approach is to find correspondences between faces of pairs of shapes, and to incorporate a third "inter-shape" term in the energy of Equation 2 [vKTS11]. The "inter-shape" term favors assigning the same label to pairs of corresponding faces on different shapes. As a result, the energy can be minimized jointly over a set of shapes to take any additional correspondences into account.
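The energy minimization of Equation 2 can be illustrated with a tiny, self-contained sketch. The unary scores below are invented, the graph is a four-face chain, and the pairwise term is a simple label-disagreement penalty; inference is done by exhaustive enumeration, which is feasible only at toy sizes (real systems use graph cuts or belief propagation).

```python
import itertools
import numpy as np

# Hypothetical unary energies (e.g., -log probabilities from a classifier):
# rows are faces, columns are three candidate part labels.
unary = np.array([
    [0.1, 2.0, 2.0],   # face 0 strongly prefers label 0
    [0.8, 0.9, 2.0],   # face 1 is ambiguous between labels 0 and 1
    [2.0, 0.2, 2.0],   # face 2 prefers label 1
    [2.0, 0.3, 1.5],   # face 3 prefers label 1
])
edges = [(0, 1), (1, 2), (2, 3)]  # neighboring face pairs on the mesh
w = 1.0                            # pairwise smoothness weight

def energy(labels):
    """Unary cost plus a penalty for each neighboring pair with differing labels."""
    e = sum(unary[i, l] for i, l in enumerate(labels))
    e += sum(w for i, j in edges if labels[i] != labels[j])
    return e

# Exhaustive MAP inference over all 3^4 labelings.
best = min(itertools.product(range(3), repeat=4), key=energy)
print(best)  # -> (0, 0, 1, 1): a single smooth boundary between faces 1 and 2
```

The pairwise term is what keeps the ambiguous face from creating a second, spurious boundary.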
Boundary learning.
Instead of applying a classifier per mesh point, face, or patch to predict a part label, a different approach is to predict the probability of each polygon mesh edge to serve as a segmentation boundary [BLVD11]. The problem can be formulated as training a binary classifier (e.g., AdaBoost) from human-marked segmentation boundaries. The input to the classifier is a set of geometric features of edges, such as dihedral angles, curvature, and shape diameter, and the output is a probability for an edge to be a segmentation boundary. Since the predicted probabilities over the mesh do not correspond to closed smooth boundaries, thinning and an active contour model [KWT88] are used as post-processing to produce the final segmentations.
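A minimal sketch of this boundary-learning idea, with plain logistic regression standing in for the AdaBoost classifier of [BLVD11]; the edge features and labels are synthetic stand-ins (feature 0 plays the role of a dihedral angle).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: one row per mesh edge, two geometric features
# (say, dihedral angle and local curvature), and a binary label marking
# whether the edge lay on a human-marked segmentation boundary.
n = 200
X = rng.normal(size=(n, 2))
# Synthetic ground truth: edges with a sharp "dihedral angle" tend to be boundaries.
y = (X[:, 0] + 0.3 * rng.normal(size=n) > 0.5).astype(float)

# Logistic regression trained by gradient descent on the cross-entropy loss.
w = np.zeros(2)
b = 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted boundary probability
    w -= 0.5 * (X.T @ (p - y) / n)
    b -= 0.5 * np.mean(p - y)

acc = np.mean(((1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5) == y)
print(f"training accuracy: {acc:.2f}")
```

The per-edge probabilities produced this way are exactly what the thinning and active-contour post-processing would then consume.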
Transductive segmentation.
Another way to formulate the shape segmentation problem is to group patches on a mesh such that the similarity between the resulting segments and the segments provided in the training database is maximized. The segment similarity can be measured as the reconstruction cost of a resulting segment from the training segments. The grouping of patches can be solved as an integer programming problem [XSX14].
Shape segmentation from labeled images.
Instead of using labeled training shapes for supervised shape segmentation, an alternative source of training data can come in the form of segmented and labeled images, as demonstrated by Wang et al. [WGW13]. Given an input 3D shape, this method first renders 2D binary images of it from different viewpoints. Each binary image is used to retrieve multiple segmented and labeled training images from an input database based on a bi-class Hausdorff distance measure. Each retrieved image is used to perform label transfer to the 2D shape projections. All labeled projections are then back-projected onto the input 3D model to compute a labeling probability map. The energy function for segmentation is formulated by using this probability map in the unary term, expressed per face or point, while dihedral angles and Euclidean distances are used in the pairwise term.
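The voting step that aggregates per-view label transfers into a per-face probability map can be sketched as follows; the view observations, faces, and confidences below are all hypothetical.

```python
import numpy as np

# Each rendered view contributes (face id, label id, confidence) votes for
# the faces visible in it -- a stand-in for the 2D label transfer step.
num_faces, num_labels = 5, 3
votes = np.zeros((num_faces, num_labels))

view_observations = [
    [(0, 0, 0.9), (1, 0, 0.8), (2, 1, 0.7)],
    [(1, 0, 0.6), (2, 1, 0.9), (3, 1, 0.8), (4, 2, 0.5)],
    [(0, 0, 0.7), (3, 2, 0.4), (4, 2, 0.9)],
]
for view in view_observations:
    for face, label, conf in view:
        votes[face, label] += conf

# Normalize votes into the per-face labeling probability map used in the
# unary term of the segmentation energy.
prob = votes / votes.sum(axis=1, keepdims=True)
print(prob.argmax(axis=1))  # -> [0 0 1 1 2]: most likely label per face
```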
4.2 Semi-supervised shape segmentation
Entropy regularization.
The parameters \theta of Equation 2 can be learned not only from the labeled training shapes, but also from unlabeled shapes [LCHB12]. The idea is that learning should maximize the likelihood function of the parameters over the labeled shapes, and also minimize the entropy (uncertainty) of the classifier over the unlabeled shapes (or, correspondingly, maximize the negative entropy). Minimizing the entropy over unlabeled shapes encourages the algorithm to find putative labelings for the unlabeled data [JWL06]. However, it is generally hard to strike a balance between the likelihood and entropy terms.
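The trade-off between the likelihood and entropy terms can be made concrete with a small numeric sketch; the predicted probabilities and the weight `lam` below are invented for illustration.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of each row of a probability table."""
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(p * np.log(p), axis=-1)

# Hypothetical classifier outputs over two labels.
labeled_probs = np.array([[0.9, 0.1], [0.2, 0.8]])      # on labeled shapes
labeled_truth = np.array([0, 1])                         # their ground-truth labels
unlabeled_probs = np.array([[0.6, 0.4], [0.55, 0.45]])   # on unlabeled shapes

# Entropy-regularized semi-supervised objective (to be maximized):
# log-likelihood on labeled data minus a weighted entropy on unlabeled data.
lam = 0.5
log_lik = np.sum(np.log(labeled_probs[np.arange(2), labeled_truth]))
objective = log_lik - lam * np.sum(entropy(unlabeled_probs))
print(round(objective, 3))
```

Pushing `lam` up rewards confident (low-entropy) predictions on the unlabeled shapes, which is precisely where the balancing difficulty mentioned above arises.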
Metric embedding and active learning.
A more general formulation for semi-supervised segmentation was presented in [WAvK12]. Starting from a set of shapes that are co-segmented in an unsupervised manner [SvKK11], the user interactively adds two types of constraints: "must-link" constraints, which specify that two patches (superfaces) should belong to the same cluster, and "cannot-link" constraints, which specify that two patches must be in different clusters. These constraints are used to perform constrained clustering in an embedded feature space of superfaces coming from all the shapes of the input dataset. The key idea is to transform the original feature space such that superfaces with "must-link" constraints come closer together to form a cluster in the embedded feature space, while superfaces with "cannot-link" constraints move away from each other. To minimize the effort required from the user, the method suggests pairs of points in feature space that, when constrained, are likely to improve the co-segmentation. The suggestions involve points that are far from their cluster centers and have a low confidence of belonging to their clusters.
Template fitting.
A different form of partial supervision can come in the form of part-based templates. Kim et al.'s method [KLM13] allows users to specify or refine a few templates made out of boxes representing the expected parts in an input database. The boxes are iteratively fitted to the shapes of a collection through simultaneous alignment, surface segmentation, and point-to-point correspondences estimated between each template and each input shape. Alternatively, the templates can be inferred automatically from the shapes of the input collection, without human supervision, based on single-shape segmentation heuristics; optionally, the user can refine and improve these estimated templates. In this respect, Kim et al.'s method can run in either a semi-supervised or an unsupervised mode. It was also the first method to handle segmentation and correspondences in collections with sizes in the order of thousands of shapes.
4.3 Unsupervised segmentation
Unsupervised data-driven shape segmentation techniques fall into two categories: clustering-based techniques and matching-based techniques. In the following, we highlight the key idea of each type of approach.
Clustering-based techniques are adapted from supervised techniques. They compute feature descriptors on points or faces, and clustering is performed over all points/faces of all shapes. Each resulting cluster indicates a consistent segment across the input shapes. The promise of the clustering-based approach is that when the number of shapes becomes large, the sampling density in the clustering space becomes dense enough that certain statistical assumptions are satisfied, e.g., diffusion distances between points from different clusters are significantly larger than those between points within each cluster. When these assumptions are satisfied, the clustering-based approach can produce results that are comparable to supervised techniques (c.f. [HFL12]). In addition, the clustering method employed plays an important role in the segmentation results. In [SvKK11], the authors utilize spectral clustering; in [HFL12], the authors employ subspace clustering, a more advanced clustering method, to obtain improved results.
Another line of unsupervised methods pursues clustering of parts. In [XLZ10], the authors perform co-analysis over a set of shapes by factoring out the part scale variation, grouping the shapes into different styles, where style is defined by the anisotropic part scales of the shapes. In [vKXZ13], the authors introduce unsupervised co-hierarchical analysis of a set of shapes. They propose a novel cluster-and-select scheme for selecting representative part hierarchies for all shapes and grouping the shapes according to the hierarchies. The method can be used to compute consistent hierarchical segmentation for the input set.
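A minimal sketch of the clustering phase, using a plain spectral bi-partition over pooled point descriptors; the synthetic 2D features below stand in for the geometric descriptors computed on real shapes, and two well-separated blobs stand in for two consistent part clusters.

```python
import numpy as np

rng = np.random.default_rng(1)

# Per-point feature descriptors pooled from all shapes (synthetic data).
pts = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])

# Gaussian affinity matrix and symmetric normalized graph Laplacian.
d2 = np.sum((pts[:, None] - pts[None, :]) ** 2, axis=-1)
W = np.exp(-d2 / 1.0)
deg = W.sum(axis=1)
L = np.eye(len(pts)) - W / np.sqrt(deg[:, None] * deg[None, :])

# Spectral embedding: the eigenvector of the second-smallest eigenvalue
# (the Fiedler vector); its sign pattern splits the points into two clusters.
vals, vecs = np.linalg.eigh(L)
labels = (vecs[:, 1] > 0).astype(int)
print(labels)
```

Each sign group here corresponds to one "consistent segment across the input shapes"; methods such as [HFL12] replace this basic step with subspace clustering.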
Matching-based methods [GF09, HKG11, WHG13, HWG14] build maps across shapes and utilize these maps to achieve consistency of segmentations. As shown in Figure 7, this strategy allows us to identify meaningful parts despite the lack of strong geometric cues on a particular shape. Likewise, the approach is able to identify coherent single parts even when the geometry of the individual shape suggests the presence of multiple segments. A challenge here is to find a suitable shape representation so that maps across diverse shapes are well-defined. In [HKG11], Huang et al. introduce an optimization strategy that jointly optimizes shape segmentations and maps between the optimized segmentations. Since the maps are defined at the part level, this technique is suitable for heterogeneous shape collections. Experimentally, it generates results comparable to the supervised method of [KHS10] on the Princeton segmentation benchmark. Recently, Huang et al. [HWG14] formulated the same idea under the framework of functional maps [OBCS12] and gained improved segmentation quality and computational efficiency.
5 Joint Shape Matching
Another fundamental problem in shape analysis is shape matching, which finds relations or maps between shapes. These maps allow us to transfer information across shapes and to aggregate information from a collection of shapes for a better understanding of individual shapes (e.g., detecting shared structures such as skeletons or shape parts). They also provide a powerful platform for comparing shapes (i.e., with respect to different measures and at different places). As we can see from other sections, shape maps are widely applied in shape classification and shape exploration as well.
So far, most existing research in shape matching has focused on matching pairs of shapes in isolation. We refer to [vKZHCO11] for a survey and to [LH05, LF09, vKZHCO11, OMMG10, KLF11, OBCS12] for recent advances. Although significant progress has been made, state-of-the-art techniques are limited to shapes that are similar to each other; they tend to be insufficient for shapes that undergo large geometric and topological variations.
The availability of large shape collections offers opportunities to address this issue. Intuitively, when matching two dissimilar shapes, we may utilize intermediate shapes to transfer maps. In other words, we can build maps between similar shapes, and use the composite maps to obtain maps between less similar shapes. As we will see shortly, this intuition can be generalized to enforcing a cycle-consistency constraint, namely that composite maps along cycles should be the identity map, or equivalently, that the composite map between two shapes is path-independent. In this section, we discuss joint shape matching techniques that take a shape collection and initial noisy maps computed between pairs of shapes as input, and output improved maps across the shape collection.
5.1 Model Graph and Cycle-Consistency
To formulate the joint matching problem, we consider a model graph G = (S, E) (c.f. [Hub02]). The vertex set S consists of the input shapes. The edge set E characterizes the pairs of shapes that are selected for performing pairwise matching. For small-scale datasets, we typically match all pairs of shapes. For large-scale datasets, the edge set usually connects shapes that are similar according to a predefined shape descriptor [KLM12, HSG13], thus generating a sparse shape graph.
The key component of a joint matching algorithm is the so-called cycle-consistency constraint. Specifically, if all the maps in the edge set are correct, then composite maps along any loops should be identity maps. This is true for maps that are represented as transformations (e.g., rotations and rigid/affine transformations) or as full point-wise maps that can be described as permutation matrices. We can easily modify the constraint to handle partial maps, namely that each point, when transformed along a loop, either disappears or goes back to the original point (see [HWG14] for details).
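For permutation maps, the cycle-consistency check is a few lines of matrix algebra. The sketch below composes hypothetical maps along a 3-cycle of shapes A -> B -> C -> A and tests whether the loop closes to the identity; the permutations are invented for illustration.

```python
import numpy as np

def perm_matrix(p):
    """Permutation map as a 0/1 matrix: column j maps point j to point p[j]."""
    m = np.zeros((len(p), len(p)))
    m[p, np.arange(len(p))] = 1.0
    return m

# Hypothetical correspondences along the cycle A -> B -> C -> A.
X_ab = perm_matrix([1, 2, 0])
X_bc = perm_matrix([2, 0, 1])
X_ca_good = perm_matrix([0, 1, 2])   # consistent: closes the loop
X_ca_bad = perm_matrix([1, 0, 2])    # one wrong map breaks the cycle

def cycle_consistent(*maps):
    """Compose the maps in order and test whether the result is the identity."""
    comp = np.eye(maps[0].shape[0])
    for m in maps:
        comp = m @ comp
    return bool(np.allclose(comp, np.eye(comp.shape[0])))

print(cycle_consistent(X_ab, X_bc, X_ca_good))  # True
print(cycle_consistent(X_ab, X_bc, X_ca_bad))   # False
```

This is exactly the observation exploited below: an inconsistent cycle reveals that at least one participating map is wrong, without saying which one.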
The cycle-consistency constraint is useful because the initial maps, which are computed between pairs of shapes in isolation, are not expected to satisfy it. On the other hand, although we do not know which maps or correspondences are incorrect, we can detect inconsistent cycles. These inconsistent cycles provide useful information for detecting incorrect correspondences or maps, i.e., an inconsistent cycle indicates that at least one of the participating maps or correspondences is incorrect. To turn this observation into algorithms, one has to formulate the cycle-consistency constraint properly. Existing works in data-driven shape matching fall into two categories: combinatorial techniques and matrix recovery based techniques. The remainder of this section provides the details.
5.2 Combinatorial Techniques
Spanning tree optimization. Earlier works in joint matching aim at finding a spanning tree in the model graph. In [GMB04, HFG06], the authors propose to use the maximum spanning tree (MST) of the model graph. However, this strategy can easily fail, since a single incorrect edge in the MST may break the entire matching result. In the seminal work [Hub02], Huber showed that finding the best spanning tree, i.e., the one maximizing the number of consistent edges, is NP-hard. Although finding the best spanning tree is not tractable, Huber introduced several local operations for improving the score of spanning trees. However, these approaches are generally limited to small-scale problems, where the search space can be sufficiently explored.
Inconsistent cycle detection. Another line of approaches [ZKP10, RSSS11, NBCW11] applies global optimization to select cycle-consistent maps. These approaches are typically formulated as constrained optimization problems, where objective functions encode the scores of selected maps, and constraints enforce the consistency of selected maps along cycles. The major advantage of these approaches is that the correct maps are determined globally. However, as the cycle-consistency constraint needs to apportion blame along many edges of a cycle, the success of these approaches relies on the assumption that correct maps are dominant in the model graph, so that the small number of bad maps can be identified through their participation in many bad cycles.
MRF formulation. Joint matching may also be formulated as solving a second-order Markov Random Field (MRF) [CAF10b, CAF10a, COSH11, HZG12]. The basic idea is to sample the transformation/deformation space of each shape to obtain a candidate set of transformation/deformation samples per shape. Joint matching is then formulated as optimizing the best sample for each shape. The objective function considers the initial maps. Specifically, each pair of samples from two different shapes generates a candidate map between them. The objective function then formulates second-order potentials, where each term characterizes the alignment score between these candidate maps and the initial maps [HSG13, HZG12].
The key challenge in the MRF formulation is generating the candidate samples for each shape. The most popular strategy is to perform uniform sampling [COSH11, HSG13], which works well when the transformation space is low-dimensional. To apply the MRF formulation to high-dimensional problems, Huang et al. [HZG12] introduce a diffusion-and-sharpening strategy. The idea is to diffuse the maps along the model graph to obtain rich samples of candidate transformations or correspondences, and then to perform clustering to reduce the number of candidate samples.
5.3 Matrix-Based Techniques
A recent trend in map computation is to formulate joint map computation as inferring matrices [SW11, KLM12, HZG12, WS13, HG13, CGH14, HWG14]. The basic idea is to consider a big map collection matrix X = (X_{ij}), where each block X_{ij} encodes the map from shape i to shape j. In this matrix representation, the cycle-consistency constraint can be equivalently described through simple properties of X, i.e., depending on the types of maps, X is either positive semidefinite or low-rank (c.f. [HG13, HWG14]). In addition, we may view the initial pairwise maps as noisy measurements of the entries of X. Based on this perspective, we can formulate joint matching as recovering X from noisy measurements of its entries.
Spectral techniques. The initial attempts at matrix recovery are spectral techniques and their variants [SW11, KLM12, WHG13]. The basic idea is to consider the map collection matrix X_{input} that encodes the initial maps in its blocks. The recovered matrix is then given by the rank-k approximation X = U_k \Sigma_k V_k^T, where U \Sigma V^T is the singular value decomposition (SVD) of X_{input} and k is the prescribed rank. Various methods have added heuristics on top of this basic procedure. For example, Kim et al. [KLM12] use the optimized maps to recompute the initial maps. This SVD strategy can be viewed as matrix recovery, because X is the optimal low-rank approximation of X_{input} (with given rank) under the matrix Frobenius norm. However, as the input maps may contain outliers, employing the Frobenius norm for matrix recovery is suboptimal. Moreover, it is hard to analyze these techniques, even in the very basic setting where maps are given by permutation matrices [PKS13].
Point-based maps. In a series of works, Huang and coworkers [HG13, CGH14, HCG14] consider the case of point-based maps and develop joint matching algorithms that admit theoretical guarantees. The work of [HG13] considers the basic setting of permutation matrix maps and proves the equivalence between cycle-consistent maps and the low-rank or positive semidefinite property of the map collection matrix. This leads to a semidefinite programming formulation for joint matching. In particular, the L1 norm is used to measure the distance between the recovered maps and the initial maps. The authors provide exact recovery conditions, which state that the ground-truth maps can be recovered if the percentage of incorrect correspondences in the input maps is below a constant. In a follow-up work, Chen et al. [CGH14] extend it to partial maps and provide a better analysis in the case where incorrect correspondences in the input maps are random. The computational issue is addressed in [HCG14], which employs the alternating direction method of multipliers for optimization.
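The SVD-based recovery can be sketched on a synthetic cycle-consistent map collection matrix corrupted with noise; the sizes, permutations, and noise level below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Cycle-consistent map collection matrix X for 4 shapes with 5 points each:
# block (i, j) = P_i P_j^T for per-shape permutations P_i, which makes
# X = A A^T (with A stacking the P_i), hence rank 5 and positive semidefinite.
n, k = 4, 5
perms = [np.eye(k)[rng.permutation(k)] for _ in range(n)]
X = np.block([[P @ Q.T for Q in perms] for P in perms])

# Corrupt the observed pairwise maps with noise, then recover by keeping the
# top-k singular vectors, i.e., the optimal rank-k Frobenius approximation.
X_noisy = X + 0.1 * rng.normal(size=X.shape)
U, s, Vt = np.linalg.svd(X_noisy)
X_rec = (U[:, :k] * s[:k]) @ Vt[:k]

err_noisy = np.linalg.norm(X_noisy - X) / np.linalg.norm(X)
err_rec = np.linalg.norm(X_rec - X) / np.linalg.norm(X)
print(err_rec < err_noisy)  # the low-rank projection removes much of the noise
```

With Gaussian noise the Frobenius-optimal truncation works well; the outlier sensitivity discussed above appears when some blocks are grossly wrong rather than mildly perturbed.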
Rotations and functional maps. Maps that are represented by general matrices (e.g., rotations or functional maps) can also be handled in a similar fashion. In [WS13], Wang and Singer consider the case of rotations between objects. Their formulation is similar to [HG13] but utilizes an L1 Frobenius norm for measuring the distance between the initial rotations and the recovered rotations. Recently, Huang et al. [HWG14] extended the idea to functional maps. The major difference between functional maps and point-based maps or rotations is that the map collection matrix is no longer symmetric. Thus, their method is formulated to recover low-rank matrices.
5.4 Discussion and Future Directions
The key to a joint shape matching algorithm is a proper formulation of the cycle-consistency constraint. We have witnessed the evolution from earlier works on combinatorial search and detecting inconsistent cycles to more recent works on spectral techniques, MRF based methods, and matrix recovery techniques. In particular, matrix recovery techniques admit theoretical guarantees, and they provide a fundamental understanding of why joint shape matching can improve upon isolated pairwise matching.
One future direction is to integrate pairwise matching and joint matching into one optimization problem. Since the major role of joint matching is to remove the noise present in pairwise matching, it makes sense to perform them together. Such unified approaches have the potential to improve upon decomposed approaches (i.e., from pairwise to joint). The technical challenge is to find map representations in which pairwise matching and map consistency can be formulated within the same framework.
6 Data-Driven Shape Reconstruction
Reconstructing geometric shapes from physical objects is a fundamental problem in geometry processing. The input to this problem is usually a point cloud produced by aligned range scans, which provides an observation of an object. The goal of a shape reconstruction algorithm is to convert this point cloud into a high-quality geometric model. In practice, the input point cloud data is noisy and incomplete, so the key to a successful shape reconstruction algorithm is formulating appropriate shape priors. Traditional shape reconstruction algorithms usually utilize generic priors, such as surface smoothness [DTB06], and typically assume that the input data captures most of the object's surface. To handle a higher degree of noise and partiality in the input data, it is important to build structural shape priors.
Data-driven techniques tackle this challenge by leveraging shape collections to learn strong structural priors from similar objects, and use them to reconstruct high-quality 3D models. Existing approaches fall into two categories, based on how they represent the shape priors: parametric and non-parametric. The former usually builds a low-dimensional parametric representation of the underlying shape space, learning the representation from exemplars and enforcing the parameterization when reconstructing new models. Parametric methods typically require building correspondences across the exemplar shapes. In contrast, non-parametric methods directly operate on the input shapes by copying and deforming existing shapes or shape parts, which makes them suitable for shapes with large variations, such as man-made objects.
6.1 Parametric Methods
Morphable face. A representative work in parametric data-driven shape reconstruction is the morphable face model [BV99], which is designed for reconstructing 3D textured faces from photos and scans. The model is learned from a dataset of prototypical 3D face shapes, and it can then be used to derive a 3D face model from a novel image and to modify shape and texture in a natural way (see Figure 9).
In particular, the morphable face model represents the geometry of a face with a shape vector S that contains the 3D coordinates of its vertices. Similarly, it encodes the texture of a face by a texture vector T that contains the RGB color values of the corresponding vertices. A morphable face model is then constructed using a database of exemplar faces, each represented by its shape vector S_i and texture vector T_i. In [BV99] the exemplar faces are constructed by matching a template to scanned human faces.
The morphable face model uses Principal Component Analysis (PCA) to characterize the shape space. A new shape S_{new} and its associated texture T_{new} are given by
S_{new} = \bar{S} + \sum_i \alpha_i s_i,    T_{new} = \bar{T} + \sum_i \beta_i t_i,
where \bar{S} and \bar{T} are the mean shape and mean texture, respectively, s_i and t_i are eigenvectors of the shape and texture covariance matrices, and \alpha_i and \beta_i are coefficients. PCA also gives probability distributions over the coefficients. The probability for the shape coefficients \alpha is given by
p(\alpha) \propto \exp( -\frac{1}{2} \sum_i \alpha_i^2 / \sigma_i^2 ),
with \sigma_i^2 being the eigenvalues of the shape covariance matrix (the probability p(\beta) is computed in a similar way).
With this morphable face model, reconstruction of textured models can be posed as a small-scale nonlinear optimization problem. For example, given a 2D image of a human face, one can reconstruct the underlying textured 3D model by searching for a similar rendered face, parameterized by the shape and texture coefficients \alpha and \beta and the rendering parameters (e.g., camera configuration, lighting parameters). The optimization problem is formulated as minimizing a data term, which measures the distance between the input image and the rendered image, plus regularization terms that are learned from the exemplar faces. The success of the morphable model relies on the low dimensionality of the solution space; thus, the method has also been applied to other data sets where this assumption holds, such as human bodies and poses.
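The PCA construction behind morphable models can be sketched compactly; all "shape vectors" below are synthetic (real models are built from registered scans), and the number of modes is an assumption of the toy setup.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical training set: 50 "faces", each a flattened vector of vertex
# coordinates, generated from 2 underlying modes of variation plus noise.
num_faces, dim, modes = 50, 30, 2
basis_true = rng.normal(size=(modes, dim))
coeffs_true = rng.normal(size=(num_faces, modes))
shapes = coeffs_true @ basis_true + rng.normal(0, 0.01, (num_faces, dim))

# PCA as in the morphable model: mean shape plus eigenvectors of the
# covariance matrix (computed here via SVD of the centered data matrix).
mean_shape = shapes.mean(axis=0)
U, s, Vt = np.linalg.svd(shapes - mean_shape, full_matrices=False)
eigvecs = Vt[:modes]                   # the s_i of the model
eigvals = s[:modes] ** 2 / num_faces   # the sigma_i^2 used in the coefficient prior

# A new shape is synthesized from low-dimensional coefficients alpha.
alpha = np.array([1.0, -0.5])
new_shape = mean_shape + alpha @ eigvecs

# Any training shape is well approximated in this low-dimensional space.
proj = mean_shape + (shapes[0] - mean_shape) @ eigvecs.T @ eigvecs
err = np.linalg.norm(proj - shapes[0]) / np.linalg.norm(shapes[0])
print(f"reconstruction error of a training shape: {err:.3f}")
```

The small reconstruction error is the "low dimensionality of the solution space" that the fitting optimization exploits.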
Morphable human bodies. Allen et al. [ACP03] generalize the morphable model to characterize human bodies (Figure 10). Given a set of scanned human bodies, the method first performs non-rigid registration to fit a hole-free, artist-generated mesh (template) to each of these scans. The result is a set of mutually consistent parameterized shapes based on the corresponding vertex positions originating from the template. Similar to [BV99], the method employs PCA to characterize the shape space, which enables applications in shape exploration, synthesis, and reconstruction.
In addition to variations in body shapes, human models exhibit variations in poses. The SCAPE model (Shape Completion and Animation for PEople) [ASK05] addresses this challenge by learning separate models of body deformation: one accounting for variations in poses and one accounting for differences in body shapes among humans. The pose deformation component is acquired from a set of dense 3D scans of a single person in multiple poses. A key aspect of the pose model is that it decomposes deformation into a rigid and a non-rigid component. The rigid component is modeled using a standard skeleton system. The non-rigid component, which captures remaining deformations such as flexing of the muscles, associates each triangle with a local affine transformation matrix. These transformation matrices are learned from exemplars using a joint regression model. In [HSS09], Hasler et al. introduce a unified model for parameterizing both shapes and poses. The basic idea is to consider the relative transformations between all pairs of neighboring triangles. These transformation matrices allow reconstructing the original shape by solving a least-squares problem. In this regard, each shape is encoded as a set of edge-wise transformation matrices, which are fit into the PCA framework to obtain a statistical model of human shapes. The model is further used to estimate the shapes of dressed humans from range scans [HSR09].
Recent works on statistical human shape analysis focus on combining learned shape priors with sparse observations and special effects. In [TMB14], the authors introduce an approach that reconstructs high-quality shapes and poses from a sparse set of markers. The success of this approach relies on learning meaningful shape priors from a database consisting of thousands of shapes. In [LMB14], the authors study how to understand human breathing from acquired data.
Data-driven tracking. Another problem in shape reconstruction is object tracking, which aims at creating and analyzing dynamic shapes and/or poses of physical objects. Successful tracking techniques (e.g., [WLVGP09, WBLP11, LYYB13, CWLZ13, CHZ14]) typically utilize parametric shape spaces. These reduced shape spaces provide shape priors that improve both the efficiency and robustness of the tracking process. The ways to construct and utilize shape spaces vary across settings, and are typically tailored to the specific problem. Weise et al. [WLVGP09] utilize a linear PCA subspace trained with a very large set of preprocessed facial expressions. This method requires an extended training session with a careful choice of facial action units, and the learned face model is actor-specific. These restrictions are partially resolved in [LWP10], which introduces an example-based blendshape optimization technique involving only a limited number of random facial expressions. In [WBLP11], the authors combine both blendshapes and data-driven animation priors to improve the tracking performance. In a recent work, Li et al. [LYYB13] employ adaptive PCA to further improve the tracking performance on nuanced emotions and micro-expressions. The key idea is to combine a general blendshape PCA model with a corrective PCA model that is updated on the fly. This corrective PCA model captures the details of the specific actor and deformations missing from the initial blendshape model.
6.2 Non-Parametric Methods
Parametric methods require canonical domains to characterize the shape space, and have so far been demonstrated on organic shapes, such as bodies or faces. In this section, we discuss another category of methods that have shown the potential to handle more diverse shape collections.
Generally speaking, a non-parametric data-driven shape reconstruction method utilizes a collection of relevant shapes and combines three phases: a query phase, a transformation phase, and an assembly phase. Existing methods differ in how the input shape collection is preprocessed and how these phases are performed.
Example-based scan completion.
Pauly et al. [PMG05] introduce one of the first non-parametric systems. The method takes an input point cloud and a collection of complete objects as input. The reconstruction procedure follows all three phases described above. The first phase determines a set of similar objects; this retrieval phase combines text-based search with PCA signatures and is refined by rigid alignment. The second phase performs non-rigid alignment between the retrieved shapes and the input point cloud. This step partitions the input point cloud into a set of patches, where each patch is associated with one retrieved shape (via the corresponding region). The final phase merges the corresponding regions into a unified shape.
Nan et al. [NXS12] introduce a similar system for indoor scene reconstruction. Given an input point cloud of an indoor scene that consists of a set of objects with known categories, the method searches a database of 3D models for matching objects and then deforms them in a non-rigid manner to fit the input point cloud. Note that this method treats complete 3D objects as building blocks, so the final reconstruction does not necessarily reflect the original scene.
In contrast to considering entire 3D shapes, Gal et al. [GSH07] utilize a dictionary of local shape priors (defined as patches) for shape reconstruction. The method is mainly designed for enhancing shape features: each region of an input point cloud is matched to a shape patch in the database, and the matched patch is then used to enhance and rectify the local region. Recently, Mattausch et al. [MPM14] introduce a patch-based reconstruction system for indoor scenes. Their method recognizes and fits planar patches from point-cloud data.
Shen and coworkers [SFCH12] extend this idea to single-object reconstruction by assembling object parts. Their method uses consistently segmented 3D shapes as the database. Given a scan of an object, it recursively searches the database for parts to assemble into the original object. The retrieval phase considers both the geometric similarity between the input and the retrieved parts and the part compatibility learned from the input shapes.
Data-driven SLAM.
Non-parametric methods have also found applications in reconstructing temporal geometric data (e.g., the output of the Kinect scanner). A notable technique is the simultaneous localization and mapping (SLAM) method, which jointly estimates the trajectory of the scanning device and the geometry of the environment. In this case, shape collections serve as priors for the objects in the environment and can be used to train object detectors. For example, the SLAM++ system proposed by Salas-Moreno et al. [SMNS13] trains domain-specific object detectors from shape collections. The learned detectors are integrated into the SLAM framework to recognize and track those objects. Similarly, Kim et al. [KMYG12] use learned object models to reconstruct dense 3D models from a single scan of an indoor scene. More recently, Sun et al. [SX14] introduce a 3D sliding-window object detector with improved performance and a broader range of objects.
Shape-driven reconstruction from images. Recently, there has been growing interest in reconstructing 3D objects directly from images (e.g., [XZZ11, KSES14, AME14, SHM14]). This problem introduces fundamental challenges in both querying similar objects and deforming objects/parts to fit the input object. To search for similar objects, successful methods typically render the database objects from a dense set of viewpoints and pick those objects for which one view is similar to the input image. Since depth information is missing from the image, it is important to properly regularize the 3D object transformations; otherwise a 3D object may be deformed arbitrarily even though its projection onto the image domain matches the imaged object. Most existing techniques consider rigid transformations or user-specified deformations [XZZ11]. In a recent work, Su et al. [SHM14] propose to learn meaningful deformations of each shape from its optimal deformations to similar shapes.
7 Data-driven Shape Modeling and Synthesis
So far, the creation of detailed three-dimensional content remains a tedious task confined to skilled artists. 3D content creation has been a major bottleneck hindering the development of ubiquitous 3D graphics. Thus, providing easy-to-use tools for casual and novice users to design and create 3D models has been a key challenge in computer graphics. To address this challenge, the current literature focuses on two main directions: intelligent interfaces for interactive shape modeling, and smart models for automated shape synthesis. The former strives to endow modeling interfaces with a higher-level understanding of the structure and semantics of 3D shapes, allowing the interface to reason about the incomplete shape being modeled. The latter direction focuses on developing data-driven models to synthesize new shapes automatically. The core problem is to learn generative shape models from a set of exemplars (e.g., probability distributions, fitness functions, functional constraints, etc.) so that the synthesized shapes are plausible and novel. Both paradigms depend on data-driven modeling of shape structures and semantics. With the availability of large 3D shape collections, the data-driven approach appears to be a promising breakthrough for the content creation bottleneck.
7.1 Interactive Shape Modeling and Editing
Interactive 3D modeling software (3DS Max, Maya, etc.) provides artists with a large set of powerful tools for creating and editing very detailed 3D models, which are, however, often onerous for non-professional users to harness. For casual users, more intuitive modeling interfaces with a certain degree of intelligence are preferable. Below we discuss such methods for assembly-based modeling and guided shape editing.
Data-driven part assembly.
Early works on 3D modeling with shape sets are primarily driven by content reuse in part-assembly-based modeling approaches. The seminal work on modeling by example [FKS04] presents a pioneering system for shape modeling by searching a shape database for parts to reuse in the construction of new shapes. Kraevoy et al. [Kreavoy:2007:MIC] describe a system for shape creation via interchanging parts between a small set of compatible shapes. Guo et al. [GLXJ14] propose assembly-based creature modeling guided by a shape grammar.
Beyond content reuse through database queries or hand-crafted rules, Chaudhuri and Koltun [Chaudhuri:2010:ddsc] propose a data-driven technique for suggesting to the modeler shape parts that can potentially augment the current shape being built. Such part suggestions are generated by querying a shape database based on partial shape matching. Although this is a purely geometric method that does not account for the semantics of shape parts, it represents the first attempt at utilizing a shape database to augment the modeling interface. Later, Chaudhuri et al. [Chaudhuri:2011:prabm] show that incorporating semantic relationships increases the relevance of the presented parts. Given a repository of 3D shapes, the method learns a probabilistic graphical model encoding semantic and geometric relationships among shape parts. During modeling, inference in the learned Bayesian network is performed to produce a relevance ranking of the parts.
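The flavor of such a relevance ranking can be conveyed with a deliberately crude toy: rank candidate parts by their smoothed co-occurrence frequency with the parts already placed. This is a hypothetical stand-in, not the actual Bayesian-network inference of the cited work; all names and counts below are invented.

```python
def rank_parts(cooccurrence, current_parts, candidates):
    """Rank candidate parts by Laplace-smoothed co-occurrence with the
    parts already in the model (toy scoring, hypothetical data)."""
    def score(c):
        s = 1.0
        for p in current_parts:
            pair = cooccurrence.get((p, c), 0)
            total = sum(v for (a, _), v in cooccurrence.items() if a == p)
            s *= (pair + 1) / (total + len(candidates))  # P(c | p), smoothed
        return s
    return sorted(candidates, key=score, reverse=True)
```

A real system would additionally condition on geometry and part style; the point here is only that semantic co-occurrence statistics yield a relevance ordering.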
A common limitation of the above techniques is that they do not provide a way to directly express a high-level design goal (e.g., “create a cute toy”). Chaudhuri et al. [Chaudhuri:2013:ACC] propose a method that learns semantic attributes for shape parts that reflect the high-level intent people may have for creating content in a domain (e.g., adjectives such as “dangerous”, “scary” or “strong”) and ranks them according to the strength of each learned attribute (Figure 5). During an interactive session, the user explores and modifies the strengths of semantic attributes to generate new part assemblies.
3D shape collections can supply other useful information, such as contextual and spatial relationships between shape parts, to enhance a variety of modeling interfaces. Xie et al. [XXM13] propose a data-driven sketch-based 3D modeling system. In the offline learning stage, a shape database is pre-analyzed to extract the contextual information among parts. During the online stage, the user designs a 3D model by progressively sketching its parts and retrieving and assembling shape parts from the database. Both retrieval and assembly are assisted by the precomputed contextual information, so that more relevant parts can be returned and selected parts can be placed automatically. Inspired by the ShadowDraw system [LZC11], Fan et al. [FWX13] propose 3D modeling by drawing with data-driven shadow guidance. The user’s strokes are used to query a 3D shape database to generate a shadow image, which in turn guides the user’s drawing. Along the way, candidate 3D parts are retrieved for assembly-based modeling.
Data-driven editing and variation.
The general idea of data-driven shape editing is to learn, from a collection of closely related shapes, a model that characterizes the plausible variations or deformations of the shapes, and then use the learned model to constrain the user’s edits so as to maintain plausibility. For organic shapes, such as human faces [BV99, CWZ14] or bodies [ACP03], parametric models characterizing the shape space can be learned from a shape set. Such parametric models can be used to edit the shapes by exploring the shape space through the set of parameters.
An alternative approach is the analyze-and-edit paradigm, widely adopted to first extract the structure from the input shape and then preserve that structure by constraining the editing [GSMCO09]. Instead of learning structure from a single shape, which usually relies on prior knowledge, Fish et al. [FAvK14] learn it from a set of shapes belonging to the same family, resulting in a set of geometric distributions characterizing the part arrangements. These distributions can be used to guide structure-preserving editing, where models can be edited while maintaining their familial traits. Yumer et al. [YK14] extract co-constrained handles from a set of shapes for shape deformation. The handles are generated based on the co-abstraction [YK12] of the set of shapes, and the deformation co-constraints are learned statistically from the set. Based on structure learned from a database of 3D models, Xu et al. [XZZ11] propose photo-inspired 3D object modeling. Guided by the object in a photograph, the method creates a 3D model as a geometric variation of a candidate model retrieved from the database. Thanks to the pre-analyzed structural information, the method addresses the ill-posed problem of 3D modeling from a single 2D image via structure-preserving 3D warping. The final result is structurally plausible and readily usable for subsequent editing. Moreover, the resulting 3D model, although built from a single view, is structurally coherent from all views.
7.2 Automated Synthesis of Shapes
Many applications, such as 3D games and films, require large collections of 3D shapes to populate their environments. Modeling each shape individually can be tedious even with the best interactive tools. The goal of data-driven shape synthesis algorithms is to generate many shapes automatically with little or no user supervision: the user may only provide some preferences or high-level specifications to control the shape synthesis. Existing methods achieve this using probabilistic generative models of shapes, evolutionary methods, or learned probabilistic grammars.
Statistical models of shapes.
The basic idea of these methods is to define a parametric shape space and then fit a probability distribution to the data points that represent the input exemplar shapes. Since the input shapes are assumed to be plausible and desired representatives of the shape space, high-probability areas of the shape space tend to be associated with new, plausible shape variants. This idea was first explored in the context of the parametric models [BV99, ACP03] discussed in Section 6. By associating each principal component of the shape space defined by these methods with a Gaussian distribution, this distribution can be sampled to generate new human faces or bodies (Figure 10). Since the probability distribution of plausible shapes tends to be highly non-uniform in several shape classes, Talton et al. [TGY09] use kernel density estimation with Gaussian kernels to represent plausible shape variability. The method is demonstrated to generate new shapes based on tree and human body parametric spaces.
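The per-component Gaussian sampling scheme can be sketched as follows (toy coefficients, not an actual morphable-model implementation): fit an independent Gaussian to each principal-component coefficient across the training shapes, then draw new coefficient vectors.

```python
import random
import statistics

def fit_gaussians(coeff_vectors):
    """Fit an independent Gaussian per PCA coefficient from the
    training shapes' coefficient vectors (toy stand-in)."""
    per_dim = list(zip(*coeff_vectors))
    return [(statistics.mean(d), statistics.stdev(d)) for d in per_dim]

def sample_coeffs(model, rng):
    """Draw a new, plausible coefficient vector from the fitted model."""
    return [rng.gauss(mu, sigma) for mu, sigma in model]
```

Feeding a sampled coefficient vector back through the PCA basis (mean plus weighted components) then yields a new shape; kernel density estimation replaces the single Gaussian per dimension when the training distribution is multi-modal.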
Shapes have structure: shapes vary in type and style, different shape styles have different numbers and types of parts, parts have various sub-parts that can be made of patches, and so on. Thus, to generate shapes in complex domains, it is important to define shape spaces over structural as well as geometric parameters, and to capture hierarchical relationships between these parameters at different levels. Kalogerakis et al. [KCKK12] (Figure 13) propose a probabilistic model that represents the variation and relationships of geometric descriptors and adjacency features for different part styles, as well as the variation and relationships of part styles and repetitions for different shape styles. The method learns the model from a set of consistently segmented shapes. Part and shape styles are discovered based on latent variables that capture the underlying modes of shape variability. Instead of sampling, the method uses a search procedure to assemble new shapes from parts of the input shapes according to the learned probability distribution. Users can also set preferences for generating shapes from a particular shape style, with given part styles or specific parts.
Set evolution.
Xu et al. [XZCOC12] develop a method for generating shapes inspired by the theory of evolution in biology. The basic idea of set evolution is to define crossover and mutation operators on shapes that perform part warping and part replacement. Starting from an initial generation of shapes with part correspondences and built-in structural information such as inter-part symmetries, these operators are applied to create a new generation of shapes. A subset of the generation is presented via a gallery to the user, who provides feedback to the system by rating the shapes. The ratings are used to define the fitness function for the evolution. Through the evolution, the set is personalized and populated with shapes that better fit the user. At the same time, the system explicitly maintains the diversity of the population to prevent it from converging to an “elite” set.
Learned Shape Grammars.
Talton et al. [TYK12] leverage techniques from natural language processing to learn probabilistic generative grammars of shapes. The method takes as input a set of exemplar shapes, each represented by a scene graph specifying parent/child relationships and relative transformations between labeled shape components. They use Bayesian inference to learn a probabilistic formal grammar that can be used to synthesize novel shapes.
8 Data-driven Scene Analysis and Synthesis
Analyzing and modeling indoor and outdoor environments has important applications in various domains. For example, in robotics it is essential for an autonomous agent to understand semantics of 3D environments to be able to interact with them. In urban planning and architecture, professionals build digital models of cities and buildings to validate and improve their designs. In computer graphics, artists create novel 3D scenes for movies and video games.
Growing numbers of 3D scenes in digital repositories provide new opportunities for data-driven scene analysis, editing, and synthesis. Emerging collections of 3D scenes pose novel research challenges that cannot be easily addressed with existing tools. In particular, representations created for analyzing collections of single models mostly focus on the arrangement of and relations between shape parts [MWZ14], which usually exhibit less variation than objects in scenes. Capturing scene structure poses a greater challenge, due to looser spatial relations and a more diverse mixture of functional substructures.
Inferring scene semantics is a long-standing problem in image understanding, with many methods developed for object recognition [QT09], classification [SW10], inferring spatial layout [CCPS13], and other 3D information [FGH13] from a single image. Previous work demonstrates that one can leverage collections of 3D models to facilitate scene understanding in images [SLH12]. In addition, RGB-D scans that include depth information can be used as training data for establishing the link between 2D and 3D for model-driven scene understanding [SKHF12]. Unfortunately, semantic annotations of images are not immediately useful for modeling and synthesizing 3D scenes, where priors have to be learned from 3D data.
In this section, we cover data-driven techniques that leverage collections of 3D scenes for modeling, editing, and synthesizing novel scenes.
Context-based retrieval.
To address the large variance in the arrangement and geometry of objects in scenes, Fisher et al. [FH10, FSH11] suggest taking advantage of local context. One of the key insights of their work is that collections of 3D scenes provide rich information about the contexts in which objects appear. They show that capturing these contextual priors can help in scene retrieval and editing. Their system takes an annotated collection of 3D scenes as input, where each object in a scene is classified. They represent each scene as a graph, where nodes represent objects and edges represent relations between objects, such as support and surface contact. In order to compare scenes, they define kernel functions for pairs of nodes, measuring similarity in the objects’ geometry, and for pairs of edges, measuring similarity in the relations of two pairs of objects. They further define a graph kernel to compare pairs of scenes. In particular, they compare all walks of fixed length originating at all pairs of objects in both scene graphs, which loosely captures the similarity of all contexts in which objects appear [FSH11]. They show that this similarity metric can be used to retrieve scenes. By comparing only walks originating at a particular object, they can retrieve objects for interactive scene editing.
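The walk-based comparison can be illustrated with a toy kernel on labeled graphs (a simplified sketch; the actual kernel of [FSH11] also scores node geometry and edge relations along each walk, rather than requiring exact label matches):

```python
from collections import Counter

def label_walks(adj, labels, length):
    """Collect the label sequences of all walks with `length` edges."""
    seqs = []
    def extend(node, seq, steps):
        if steps == 0:
            seqs.append(tuple(seq))
            return
        for nxt in adj[node]:
            extend(nxt, seq + [labels[nxt]], steps - 1)
    for n in adj:
        extend(n, [labels[n]], length)
    return Counter(seqs)

def walk_kernel(adj1, lab1, adj2, lab2, length=2):
    """Count pairs of identically labeled walks in two scene graphs."""
    c1 = label_walks(adj1, lab1, length)
    c2 = label_walks(adj2, lab2, length)
    return sum(c1[s] * c2[s] for s in c1)
```

Two scenes whose objects appear in similar local contexts share many labeled walks and thus score high; scenes with disjoint object vocabularies score zero.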
Focal Points.
Measuring the similarity of complex hybrid scenes, such as studios composed of a bedroom, living room, and dining room, poses a challenge to graph kernel techniques, since they only measure global scene similarity. Thus, Xu et al. [Xu:2014:OHSC] advocate analyzing salient sub-scenes, which they call focal points, to compare hybrid scenes, i.e., scenes containing multiple salient sub-scenes. Figure 14 shows an example of comparing complex scenes, where the middle scene is a hybrid one encompassing two semantically salient sub-scenes: bed-nightstands and TV-table-sofa. The middle scene is closer to the left one when the bed and nightstands are focused on, and closer to the right one when the TV-table-sofa combo is the focal point. Therefore, scene comparison may yield different similarity distances depending on the focal points.
Formally, a focal point is defined as a representative substructure of a scene that characterizes a semantic scene category; that is, the substructure should reoccur frequently only within that category. Focal point detection is therefore naturally coupled with the identification of scene categories via scene clustering. This poses coupled problems: detecting focal points based on scene groups, and grouping scenes based on focal points. The two problems are solved via interleaved optimization, which alternates between focal point detection and focal-based scene clustering. The former is achieved by mining frequent substructures, and the latter uses subspace clustering, where scene distances are defined in a focal-centric manner. Inspired by the work of Fisher et al. [FSH11], scene distances are computed using focal-centric graph kernels, which are estimated from walks originating at representative focal points.
The detected focal points can be used to organize the scene collection and to support efficient exploration of the collection (see Section 9). Focal-based scene similarity enables novel applications such as multi-query scene retrieval, where one may issue a query consisting of multiple semantically related scenes and retrieve more scenes “of the same kind”.
Synthesis.
Given an annotated scene collection, one can also synthesize new scenes that have a similar distribution of objects. The scene synthesis technique of Fisher et al. [Fisher:2012:CSR] learns two probabilistic models from the training dataset: (1) object occurrence, indicating which objects should be placed in the scene, and (2) layout optimization, indicating where to place them. It then takes an example scene and synthesizes similar scenes using the learned priors: it replaces or adds new objects using context-based retrieval techniques, and then optimizes object placement based on learned object-to-object spatial relations. Creating example scenes might itself be a challenging task, so Xu et al. [Xu:2013:S2S] propose modeling 3D indoor scenes from 2D sketches by leveraging a database of 3D scenes. Their system jointly optimizes sketch-guided co-retrieval and co-placement of all objects.
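The layout term can be sketched as a penalty on deviations from learned mean pairwise offsets between object categories (a toy stand-in for the actual layout objective; the category names and learned offsets below are hypothetical):

```python
def layout_cost(positions, learned_offsets):
    """Sum of squared deviations of object-pair offsets from their
    learned mean offsets (2D toy; hypothetical training statistics)."""
    cost = 0.0
    for (a, b), (mx, my) in learned_offsets.items():
        dx = positions[b][0] - positions[a][0]
        dy = positions[b][1] - positions[a][1]
        cost += (dx - mx) ** 2 + (dy - my) ** 2
    return cost
```

A synthesis system would minimize such a cost over object positions (e.g., by local search), so that generated layouts reproduce the spatial relations observed in the training scenes.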
Hierarchical scene annotation.
All the aforementioned applications take an annotated collection of 3D scenes as input. Unfortunately, most scenes in public repositories are not annotated and thus require additional manual labeling [FRS12]. Liu et al. [Liu:2014:CCS] address the challenge of annotating novel scenes. The key observation of their work is that understanding the hierarchical structure of a scene enables efficient encoding of functional scene substructures, which significantly simplifies detecting objects and representing their relationships. Thus, they propose a supervised learning approach to estimate the hierarchical structure of novel scenes. Given a collection of scene graphs with consistent hierarchies and labels, they train a probabilistic hierarchical grammar encoding the distributions of shapes, cardinalities, and spatial relationships between objects. The grammar can then be used to parse new scenes: to find segmentations, object labels, and hierarchical organizations of objects consistent with the annotated collection (see Figure 15).
Challenges and opportunities.
The topic of 3D scene analysis is quite new, and there are many open problems and research opportunities. The first problem is to efficiently characterize spatial relationships between objects and object groups. Most existing methods work with bounding-box representations, which are efficient to process but not sufficiently informative to characterize object-to-object relationships; for example, one cannot reliably determine object enclosure from bounding boxes. Recently, Zhao et al. [Zhao:2014:ISU] proposed using the biologically inspired bisector surface to characterize the geometric interaction between adjacent objects and to index 3D scenes (Figure 16). Second, most existing techniques rely heavily on expert user supervision for scene understanding. Unfortunately, online repositories rarely have models with reliable object tags, so there is a need for methods that can leverage scenes with partial and noisy annotations. Finally, the popularity of commodity RGB-D cameras has significantly simplified the acquisition of indoor scenes. This emerging scanning technique opens space for new applications, such as online scene analysis with high-fidelity scanning and reconstruction. The image data that accompany RGB-D scans also enable enhancing geometric representations with appearance information.
9 Exploration and Organization
The rapidly growing number and diversity of digital 3D models in large online collections (e.g., TurboSquid, Trimble 3D Warehouse, etc.) have created an emerging need for algorithms and techniques that effectively organize these large collections and allow users to explore them interactively. For example, an architect can furnish a digital building by searching databases organized by furniture type, region of interest, and design style, and an industrial designer can explore shape variations among existing products when creating a new object. Most existing repositories only support text-based search, relying on user-entered tags and titles. This approach suffers from inaccurate and ambiguous tags, often entered in different languages. While one could try using shape analysis to infer consistent tags, as discussed in Section 3, it is often hard to convey stylistic and geometric variations using text alone. An alternative approach is to perform shape-, sketch-, or image-based queries; however, to formulate such queries the user needs a clear mental model of the shape to be retrieved. Thus, some researchers focus on providing tools for exploring shape collections. Unlike search, exploration techniques do not assume a priori knowledge of the repository content, and help the user understand geometric, topological, and semantic variations within the collection.
Problem statement and method categorization.
Data exploration and organization is a classical problem in data analysis and visualization [PEP11]. Given a data collection, research focuses on grouping and relating data points, learning the variations in the collection, and organizing the collection into a structured form, in order to facilitate retrieval, browsing, summarization, and visualization of the data through efficient interfaces or metaphors.
The first step in organizing model collections is to devise appropriate metrics to relate different data points. Various similarity metrics have been proposed in the past to relate entire shapes as well as local regions on shapes. In particular, previous sections of this document cover algorithms for computing global shape similarities (Section 3), part-wise correspondences (Section 4), and point-wise correspondences (Section 5). In this section, we focus on techniques that take advantage of these correlations to provide different interfaces for exploring and understanding geometric variability in collections of 3D shapes. We categorize the existing exploration approaches based on four aspects:

Metaphor: a user interface for exploring shape variations. We will discuss five basic exploration interfaces: those that use proxy shapes (templates), regions of interest, probability plots, query shapes, or continuous attributes.

Shape comparison: the technique used to relate different shapes. We will discuss techniques that use global shape similarities, and part or point correspondences.

Variability: the shape variations captured by the system. Most methods we will discuss rely on the geometric variability of shapes or parts. Some techniques also take advantage of topological variability, that is, variance in the number of parts or in how they are connected (or variance in the number of objects and their arrangements in scenes).

Organization form: the method used to group shapes. We will discuss methods that group similar shapes to facilitate exploring intra-group similarities and inter-group variations, typically via clustering and hierarchical clustering.
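The grouping step used by many of these organization schemes can be sketched as single-linkage agglomerative clustering over a pairwise shape distance (a minimal illustration; real systems use richer distances and often learn the number of groups):

```python
def agglomerative(dist, n, k):
    """Single-linkage agglomerative clustering of n items into k groups,
    given a pairwise distance function (toy sketch)."""
    clusters = [{i} for i in range(n)]
    while len(clusters) > k:
        best = None
        # Find the closest pair of clusters under single linkage.
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = min(dist(i, j) for i in clusters[a] for j in clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] |= clusters[b]
        del clusters[b]
    return clusters
```

Stopping at different values of k, or recording the full merge tree, yields the flat and hierarchical organization forms listed in Table 3.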
Table 3 summarizes several representative works in terms of these aspects. In the remainder of this section, we describe several recent techniques, grouped by exploration metaphor.
Method     Metaphor   Comparison   Variability   Organization
[OLGM11]   template   similarity   geometric     n/a
[KLM13]    template   part         both          clustering
[AKZM14]   template   part         both          clustering
[KLM12]    ROI        point        both          n/a
[ROA13]    ROI        point        geometric     n/a
[HWG14]    ROI        point        both          clustering
[XMZ14]    ROI        similarity   topological   clustering
[FAvK14]   plot       part         geometric     clustering
[HSS13]    query      similarity   both          hierarchy
Template-based exploration.
Component-wise variability in the positions and scales of parts reveals useful information about a model collection. Several techniques use box-like templates to show variations among models of the same class. Ovsjanikov et al. [OLGM11] describe a technique for learning these part-wise variations without solving the challenging problem of consistent segmentation. First, they use a segmentation of a single shape to construct the initial template. This is the only step that needs to be verified, and potentially fixed, by the user. The next goal is to automatically infer deformations of the template that capture the most important geometric variations of the models in the collection. They hypothesize that all shapes can be projected onto a low-dimensional manifold based on their global shape descriptors. Finally, they reveal the manifold structure by deforming the template to fit the sample points. Directions of interesting variation are depicted by arrows on the template, and the shapes that correspond to the current template configuration are presented to the user.
The descriptor-based approach described above assumes that all shapes share the same parts and that there exists a low-dimensional manifold that can be captured by deforming a single template. These assumptions do not hold for large and diverse collections of 3D models. To tackle this challenge, Kim et al. [KLM13] propose an algorithm for learning several part-based templates capturing multi-modal variability in collections of shapes. They start with an initial template that includes a superset of all parts that might occur in the dataset, and jointly learn part segmentations, point-to-point surface correspondences, and a compact deformation model. The output is a set of templates that groups the input models into clusters capturing their styles and variations.
ROI-based exploration.
Not all interesting variations occur at the scale of parts: they may occur at a sub-part scale, or span multiple sub-regions of multiple parts. In these cases, the user may prefer to select an arbitrary region on a 3D model and look for more models sharing similar regions of interest. Such detailed and flexible queries require a finer understanding of correspondences between different shapes. Kim et al. [KLM12] propose fuzzy point correspondences to encode the inherent ambiguity in relating diverse shapes. Fuzzy point correspondences are represented by real values specified for all pairs of points, indicating how well the points correspond. They leverage transitivity in correspondence relationships to compute this representation from a sparse set of pairwise point correspondences. The proposed interface allows painting regions of interest directly on a surface, and the system retrieves similar regions or shows geometric variations in the selected region (see Figure 17).
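The transitivity idea can be sketched as composing fuzzy correspondence matrices through an intermediate shape (a toy max-product rule, not the actual computation of [KLM12]): a point on shape A relates to a point on shape C as strongly as its best path through some point on shape B.

```python
def compose_fuzzy(f_ab, f_bc):
    """Propagate fuzzy correspondences through an intermediate shape via
    max-product transitivity (toy sketch). Entry [i][k] scores how well
    point i on shape A matches point k on shape C."""
    rows, mid, cols = len(f_ab), len(f_bc), len(f_bc[0])
    return [[max(f_ab[i][j] * f_bc[j][k] for j in range(mid))
             for k in range(cols)] for i in range(rows)]
```

Repeatedly composing such matrices over many shape pairs densifies an initially sparse set of correspondences across the whole collection.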
One limitation of correspondence-based techniques is that they typically do not consider the entire collection when estimating shape differences. Rustamov et al. [ROA13] focus on an intrinsic representation of shape differences. Starting with a functional map between two shapes, that is, a map that describes a change of functional basis, they derive a shape difference operator revealing detailed information about the location, type, and magnitude of the distortion induced by the map. This makes shape difference a quantifiable object that can be co-analyzed within the context of the entire collection. They show that this deeper understanding of shape differences can help in exploration. For example, one can embed shapes in a low-dimensional space based on shape differences, or use shape differences to interpolate variations by showing “intermediate” shapes between two regions of interest. To extend these techniques to man-made objects, Huang et al. [HWG14] construct a consistent functional basis for shape collections that exhibit large geometric and topological variability. They show that the resulting consistent maps can capture discrete topological variability, such as variance in the number of bars in the back of a chair.
ROI-based scene exploration.
Recent works on organizing and exploring 3D visual data mostly focus on object collections. Exploring 3D scenes poses additional challenges, since scenes typically exhibit more structural variance. Unlike man-made objects, which usually consist of a handful of parts, a scene often includes tens to hundreds of objects, and most objects do not have a prescribed rigid arrangement. Thus, global scene similarity metrics, such as the graph kernel based technique of [FRS12], are limited to organizing datasets based on very high-level features, such as scene type. Xu et al. [XMZ14] advocate that 3D scenes should be compared from the perspective of a particular focal point, which is a representative sub-structure of a specific scene category. Focal points are detected through contextual analysis of a collection of scenes, resulting in a clustering of the scene collection where each cluster is characterized by its representative focal points (see Section 8). Consequently, the focal points extracted from a scene collection can be used to organize the collection into an interlinked and well-connected cluster formation, which facilitates scene exploration. Figure 18 illustrates such a cluster-based organization and an exploratory path transitioning between two scene clusters/categories.
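To see why a global, relationship-counting similarity captures only coarse scene type, consider a toy kernel over scenes represented as lists of (object category, supporting surface) pairs. The `scene_kernel` helper is a hypothetical stand-in for the graph kernel of [FRS12], which additionally walks the scene graph and compares object geometry.

```python
from collections import Counter

def scene_kernel(scene_a, scene_b):
    """Minimal stand-in for graph-kernel scene comparison: a scene is a
    list of (object_category, supporting_surface) relationships, and the
    similarity is the fraction of relationships the two scenes share. Such
    global measures separate scene types but miss fine-grained structure."""
    ca, cb = Counter(scene_a), Counter(scene_b)
    shared = sum((ca & cb).values())  # multiset intersection of relationships
    return shared / max(1, max(sum(ca.values()), sum(cb.values())))
```

Two bedrooms share relationships like ("bed", "floor") and therefore score higher against each other than against an office, which is exactly the scene-type level of organization such metrics support.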
Plot-based exploration.
All the aforementioned exploration techniques typically do not visualize the probabilistic nature of shape variations. Fish et al. [FAvK14] study the configurations of shape parts from a probabilistic perspective, indicating which shape variations are more likely to occur. To learn the distributions of part arrangements, all shapes in the family are pre-segmented consistently. The resulting set of probability density functions (PDFs) characterizes the variability of relations and arrangements across different parts. A peak in a PDF curve represents a configuration of the related parts that appears frequently among several shapes in the family. The multiple PDFs can be used as interfaces to interactively explore the shape family from various perspectives. Averkiou et al. [AKZM14] use the part structure inferred by this method to produce a low-dimensional part-aware embedding of all models. The user can explore interesting variations in part arrangements simply by moving the mouse over the 2D embedding. In addition, their technique allows synthesizing novel shapes by clicking on empty spaces in the embedded space, upon which the system deforms parts from neighboring shapes to synthesize a novel part arrangement.
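A toy version of one such per-relation PDF, assuming a single scalar arrangement parameter and synthetic measurements; the helper `part_relation_pdf` is illustrative (the paper estimates densities for many relation types over consistently segmented families).

```python
import numpy as np

def part_relation_pdf(samples, bandwidth=0.05):
    """Gaussian kernel density estimate over a scalar part-arrangement
    parameter -- a toy stand-in for the per-relation PDFs of [FAvK14]."""
    s = np.asarray(samples, dtype=float)
    def pdf(xs):
        xs = np.atleast_1d(np.asarray(xs, dtype=float))
        z = (xs[None, :] - s[:, None]) / bandwidth
        return np.exp(-0.5 * z ** 2).mean(axis=0) / (bandwidth * np.sqrt(2 * np.pi))
    return pdf

# Hypothetical measurements: e.g. the ratio of chair-back height to seat
# width across a family, with a dominant mode near 0.8 and a rarer
# configuration near 1.4.
rng = np.random.default_rng(1)
ratios = np.concatenate([rng.normal(0.8, 0.05, 40), rng.normal(1.4, 0.05, 10)])
pdf = part_relation_pdf(ratios)
xs = np.linspace(0.5, 1.7, 400)
peak = xs[np.argmax(pdf(xs))]  # most frequent configuration of this relation
```

The location of the peak is the kind of frequently occurring configuration a user would see highlighted when exploring the family through these PDF plots.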
Query-based exploration.
For a heterogeneous shape collection encompassing diverse object classes, it is typically not possible to capture part structure and correspondence. Even global shape similarity is not a very reliable feature, which makes organizing and exploring heterogeneous collections especially difficult. To address this challenge, Huang et al. [HSS13] introduce qualitative analysis techniques from the bioinformatics field. Instead of relying on quantitative distances, which may be unreliable between dissimilar shapes, the method considers the more reliable qualitative similarity derived from quartets composed of two pairs of objects. The shapes that are paired in a quartet are close to each other and far from the shapes in the other pair, where distances are estimated from multiple shape descriptors. They aggregate this topological information from many quartets computed across the entire shape collection, and construct a hierarchical categorization tree (see Figure 19). Analogous to a phylogenetic tree of species, the categorization tree of a shape collection provides an overview of the mutual distances and hierarchical relations of the shapes. Based on this organization, they also define a degree-of-separation chart for every shape in the collection and apply it to interactive shape exploration.
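The quartet cue can be sketched as follows; `quartet_topology` is a hypothetical helper that extracts, from any (possibly noisy) distance function, only the qualitative pairing of four shapes, which is the robust signal the method aggregates.

```python
def quartet_topology(dist, a, b, c, e):
    """Return the pairing {{x, y}, {z, w}} of four shapes that minimizes
    within-pair distance -- the qualitative quartet cue of [HSS13]. Only
    the relative ordering of distances within the quartet matters, which
    is what makes the cue qualitative rather than quantitative."""
    best = None
    for (x, y), (z, w) in (((a, b), (c, e)), ((a, c), (b, e)), ((a, e), (b, c))):
        cost = dist(x, y) + dist(z, w)
        if best is None or cost < best[0]:
            best = (cost, frozenset([frozenset([x, y]), frozenset([z, w])]))
    return best[1]
```

Even if absolute distances are unreliable, the recovered topology (which two shapes group together) tends to be stable, and many such quartets vote for the hierarchical categorization tree.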
Attribute-based exploration.
An alternative approach is to allow users to interactively explore shapes with continuously valued semantic attributes. Blanz and Vetter [BV99] provide an interface to explore faces based on continuous facial attributes, such as “smile” or “frown”, built upon their parametric face model (Section 6). Similarly, Allen et al. [ACP03] allow users to explore the range of human bodies with features such as height, weight, and age. Chaudhuri et al.'s [CKG13] interface enables exploration of shape parts according to learned strengths of semantic attributes (Figure 5).
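Such attribute sliders are typically built on a PCA parametric model over pre-aligned shapes; a minimal sketch (function names are illustrative, and the real systems additionally regress attribute directions in the coefficient space):

```python
import numpy as np

def fit_morphable_model(shapes, n_components=2):
    """PCA parametric model in the spirit of [BV99] and [ACP03]:
    pre-aligned shapes are flattened into vectors, and a low-dimensional
    coefficient space spans the family. Sliding a coefficient and
    reconstructing yields the continuous exploration described above."""
    X = np.asarray(shapes, dtype=float)
    mean = X.mean(axis=0)
    # principal directions via SVD of the centered data matrix
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def reconstruct(mean, basis, coeffs):
    """Map low-dimensional coefficients back to a full shape vector."""
    return mean + np.asarray(coeffs) @ basis
```

Projecting a shape into the coefficient space and reconstructing it recovers the shape, and interpolating coefficients between two shapes traverses plausible in-between members of the family.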
10 Conclusion
In this survey, we discussed the state of the art in data-driven methods for 3D shape analysis and processing. We also presented the main concepts and methodologies used to develop such methods. We hope that this survey will act as a tutorial that helps researchers develop new data-driven algorithms related to shape processing. Below we discuss several exciting research directions that have not yet been sufficiently explored in our community:
Joint analysis of 2D and 3D data.
Generating 3D content from images requires building mappings from 2D to 3D space. The problem is largely ill-posed; however, with the help of the vast amount of 2D images available on the web, effective priors can be developed to map 2D visual elements or features to 3D shape and scene representations. Initial attempts to build alignments between 2D and 3D data are the recent works by Su et al. [SHM14] and Aubry et al. [AME14], which can inspire further work on this topic. Another possibility is to jointly analyze shape and texture data; the work on co-segmenting textured 3D shapes by Yumer et al. [YCM14] is one such example. Following this line, it would be interesting to jointly analyze and process multi-modal visual data, including depth scans and videos. The key challenge is how to integrate the heterogeneous information in a unified learning framework.
Better and scalable shape analysis techniques.
Many data-driven applications rely on high-quality shape analysis results, particularly segmentations and correspondences. We believe it is important to further advance research in both directions. This includes designing shape analysis techniques for specific data and/or making them scalable to gigantic datasets.
From geometry to semantics and vice versa.
Several data-driven methods have tried to map 2D and 3D geometric data to high-level concepts, such as shape categories, semantic attributes, or part labels. Existing methods deal with cases where only a handful of different entities are predicted for input shapes or scenes. Scaling these methods to handle thousands or more categories, part labels, and other such entities, as well as approaching human performance, is an open problem. The opposite direction is also interesting and insufficiently explored: generating or editing shapes and scenes based on high-level specifications, such as shape styles, attributes, or even natural language, potentially combined with other input such as sketches and interactive handles. WordsEye [CS01] was an early attempt to bridge this gap, yet it requires largely manual mappings. The more recent work of [CKG13] handles only shape part replacements driven by linguistic attributes.
Understanding function from geometry.
The geometry of a shape is strongly related to its functionality, including its relationship to human activity. Thus, analyzing shapes and scenes requires some understanding of their function. The recent works by Laga et al. [LMS13] and Kim et al. [KCGF14] are important examples of data-driven approaches that take functional aspects into account in shape analysis. In addition, data-driven methods can guide the synthesis of shapes that can be manufactured or 3D printed based on given functional specifications; an example of such an attempt is the work by Schulz et al. [SSL14].
Data-driven shape abstractions.
It is relatively easy for humans to communicate the essence of shapes with a few lines, sketches, and abstract forms. Developing methods that can build such abstractions automatically has significant applications in shape and scene visualization, artistic rendering, and shape analysis. There are a few data-driven approaches to line drawing [CGL08, KNS09, KNBH12], saliency analysis [CSPF12], surface abstraction [YK12], and viewpoint preferences [SLF11] related to this goal. Matching human performance in these tasks is still a largely open question, while synthesizing and editing shapes using shape abstractions as input remains a significant challenge.
Feature learning.
Several shape and scene processing tasks depend on designing geometric descriptors for points and shapes, as we show in Section 3. In general, it seems that some descriptors work well for specific classes, but fail in several others. A main issue is that there are no geometric features that can serve as reliable mid- or high-level representations of shapes. Recent work in computer vision shows that features can be learned from data in the case of 2D and 3D images [YN10, LBF13]; a promising direction is thus to extend this work to learning feature representations from raw 3D geometric data.
Work  Training data (Rep.  Preproc.  Scale)  Feature (Type  Sel.)  Learning model/approach  Learning type  Learning outcome  Application
[FKMS05]  Point  No  Thousands  Local  No  SVM classifier  Supervised  Object classifier  Classification 
[BBOG11]  Mesh  No  Thousands  Local  No  Similarity Sensitive Hashing  Supervised  Distance metric  Classification 
[HSG13]  Mesh  Prealign.  Thousands  Local  No  Maxmarginal distance learning  Semisupervised  Distance metric  Classification 
[KHS10]  Mesh  No  Tens  Local  Yes  Jointboost classifier  Supervised  Face classifier  Segmentation 
[vKTS11]  Mesh  Yes  Tens  Local  Yes  Gentleboost classifier  Supervised  Face classifier  Segmentation 
[BLVD11]  Mesh  No  Tens  L.&G.  Yes  Adaboost classifier  Supervised  Boundary classifier  Segmentation 
[XXLX14]  Mesh  No  Hundreds  Local  Yes  Feedforward neural networks  Supervised  Face/patch classifier  Segmentation 
[XSX14]  Mesh  Preseg.  Tens  Local  No  Sparse model selection  Supervised  Segment similarity  Segmentation 
[LCHB12]  Mesh  No  Tens  Local  Yes  Entropy regularization  Semisupervised  Face classifier  Segmentation 
[WAvK12]  Mesh  Preseg.  Hundreds  Local  No  Active learning  Semisupervised  Segment classifier  Segmentation 
[WGW13]  Image  Labeled parts  Hundreds  Local  No  2D shape matching  Supervised  2D shape similarity  Segmentation 
[HFL12]  Mesh  Overseg.  Tens  Local  Yes  Subspace clustering  Unsupervised  Patch similarity  Seg. / Corr. 
[SvKK11]  Mesh  Preseg.  Tens  Local  No  Spectral clustering  Unsupervised  Seg. simi./classifier  Seg. / Corr. 
[XLZ10]  Mesh  Part  Tens  Struct.  No  Spectral clustering  Unsupervised  Part proportion simi.  Seg. / Corr. 
[vKXZ13]  Mesh  Part  Tens  Struct.  No  Multiinstance clustering  Unsupervised  Seg. hier. simi.  Seg. / Corr. 
[GF09]  Mesh  No  Tens  Global  No  Global shape alignment  Unsupervised  Face similarity  Seg. / Corr. 
[HKG11]  Mesh  Preseg.  Tens  Local  No  Joint part matching  Unsupervised  Segment similarity  Seg. / Corr. 
[HWG14]  Mesh  Init. corr.  Tens  Global  No  Consistent func. map networks  Unsupervised  Segment similarity  Seg. / Corr. 
[KLM13]  Mesh  Template  Thousands  Local  No  Shape alignment  Semisupervised  Templates  Seg. / Corr. 
[MPM14]  Mesh  Overseg.  Hundreds  Local  No  Densitybased clustering  Unsupervised  Patch similarity  Recognition 
[NBCW11]  Mesh  Init. corr.  Tens  L.&G.  No  Inconsistent map detection  Unsupervised  Point similarity  Corr. / Expl. 
[HZG12]  Mesh  Init. corr.  Tens  L.&G.  No  MRF joint matching  Unsupervised  Point similarity  Corr. / Expl. 
[KLM12]  Mesh  Prealign.  Tens  Global  No  Spectral matrix recovery  Unsupervised  Point similarity  Corr. / Expl. 
[HG13]  Mesh  Init. corr.  Tens  Global  No  Lowrank matrix recovery  Unsupervised  Point similarity  Corr. / Expl. 
[OLGM11]  Mesh  Part  Hundreds  Global  No  Manifold learning  Unsupervised  Parametric model  Exploration 
[ROA13]  Mesh  Map  Tens  None  N/A  Functional map analysis  Unsupervised  Difference operator  Exploration 
[FAvK14]  Mesh  Labeled parts  Hundreds  Struct.  No  Kernel Density Estimation  Supervised  Prob. distributions  Expl. / Synth. 
[AKZM14]  Mesh  [KLM13]  Thousands  Struct.  No  Manifold learning  Unsupervised  Parametric models  Expl. / Synth. 
[HSS13]  Mesh  No  Hundreds  Global  No  Quartet analysis and clustering  Unsupervised  Distance measure  Organization 
[BV99]  Mesh  Prealign.  Hundreds  Local  No  Principal Component Analysis  Unsupervised  Parametric model  Recon. / Expl. 
[ACP03]  Point  Prealign.  Hundreds  Local  No  Principal Component Analysis  Unsupervised  Parametric model  Recon. / Expl. 
[HSS09]  Point  Prealign.  Hundreds  Local  No  PCA & linear regression  Unsupervised  Parametric model  Recon. / Expl. 
[PMG05]  Mesh  Prealign.  Hundreds  Global  No  Global shape alignment  Unsupervised  Shape similarity  Reconstruction 
[NXS12]  Point  Labeled parts  Hundreds  Struct.  No  Random Forest Classifier  Supervised  Object classifier  Reconstruction 
[SFCH12]  Mesh  Labeled parts  Tens  Global  No  Part matching  Unsupervised  Part detector  Reconstruction 
[KMYG12]  Point  Labeled parts  Tens  Local  No  Joint part fitting and matching  Unsupervised  Object detector  Reconstruction 
[SMNS13]  Mesh  No  Tens  L.&G.  No  Shape matching  Unsupervised  Object detector  Reconstruction 
[XZZ11]  Mesh  Labeled parts  Tens  Struct.  No  Structural shape matching  Unsupervised  Part detector  Modeling 
[AME14]  Mesh  Projected  Thousands  Visual  No  Linear Discriminant Analysis  Supervised  Object detector  Recognition 
[SHM14]  Mesh  Projected  Tens  Visual  No  Shape matching  Unsupervised  2D3D correlation  Reconstruction 
[CK10b]  Mesh  No  Thousands  Global  No  Shape matching  Unsupervised  Part detector  Modeling 
[CKGK11]  Mesh  [KHS10]  Hundreds  Local  No  Bayesian Network  Unsupervised  Part reasoning model  Modeling 
[XXM13]  Mesh  Labeled parts  Tens  Struct.  No  Contextual part matching  Unsupervised  Part detector  Modeling 
[KCKK12]  Mesh  [KHS10]  Hundreds  L.&G.  No  Bayesian Network  Unsupervised  Shape reasoning model  Synthesis 
[XZCOC12]  Mesh  Part  Tens  Struct.  No  Part matching  Unsupervised  Part similarity  Synthesis 
[TYK12]  Mesh  Labeled parts  Tens  Struct.  No  Structured concept learning  Unsupervised  Probabilistic grammar  Synthesis 
[YK12]  Mesh  No  Tens  Global  No  Shape matching  Unsupervised  Shape abs. similarity  Modeling 
[YK14]  Mesh  Preseg.  Tens  Local  No  Segment matching  Unsupervised  Segment abs. simi.  Modeling 
[CKG13]  Mesh  [KHS10]  Hundreds  L.&G.  No  SVM ranking  Supervised  Ranking metric  Model. / Expl. 
[FSH11]  Scene  Labeled obj.  Tens  Struct.  No  Relevance feedback  Supervised  Contextual obj. simi.  Classification 
[FRS12]  Scene  Labeled obj.  Hundreds  Struct.  No  Bayesian Network  Supervised  Mixture models  Synthesis 
[XCF13]  Scene  Labeled obj.  Hundreds  Struct.  No  Frequent subgraph mining  Unsupervised  Frequent obj. groups  Modeling 
[XMZ14]  Scene  Labeled obj.  Hundreds  Struct.  No  Weighted subgraph mining  Unsupervised  Distinct obj. groups  Org. / Expl. 
[LCK14]  Scene  Labeled hier.  Tens  Struct.  No  Probabilistic learning  Supervised  Probabilistic grammar  Seg. / Corr. 
Comparison of various works on data-driven shape analysis and processing. For each work, we summarize the criteria defined for data-driven methods: training data (encompassing data representation, preprocessing, and scale), feature (including feature type and whether feature selection is involved), learning model or approach, learning type (supervised, semi-supervised, or unsupervised), learning outcome (e.g., a classifier or a distance metric), as well as its typical application scenario. See the text for a detailed explanation of the criteria. Some works employ another work as a preprocessing stage (e.g., [CKG13] requires the labeled segmentation produced by [KHS10]). The feature types include local geometric features (Local), global shape descriptors (Global), both local and global shape features (L.&G.), structural features (Struct.), and 2D visual features (Visual).
Biographical sketches
Kai Xu
received his PhD in Computer Science at the National University of Defense Technology (NUDT). He is currently a postdoctoral researcher at Shenzhen Institutes of Advanced Technology and also holds a faculty position at NUDT. From 2009 to 2010, he visited Simon Fraser University, supported by the Chinese government. His research interests include geometry processing and geometric modeling, especially methods that utilize large collections of 3D shapes. He served on the program committees for SGP, PG, and GMP.
Vladimir G. Kim
received his PhD in the Computer Science Department at Princeton University and is currently a postdoctoral scholar at Stanford University. His research interests include geometry processing and analysis of shapes and collections of 3D models. He received his B.A. degree in Mathematics and Computer Science from Simon Fraser University in 2008. Vladimir is a recipient of the Siebel Scholarship and the NSERC Postgraduate Scholarship. He was also on the International Program Committee for SGP 2013 and SGP 2014.
Qixing Huang
is a research assistant professor at TTI Chicago. He earned his PhD from the Department of Computer Science at Stanford University in 2012. He obtained both MS and BS degrees in Computer Science from Tsinghua University in 2005 and 2002, respectively. His research interests include datadriven geometry processing and coanalysis of shapes and collections of 3D models using convex optimization techniques. He was a winner of the Best Paper Award from SGP 2013 and the Most Cited Paper Award for the journal ComputerAided Geometric Design in 2011 and 2012. He served on program committees for SGP, PG and GMP.
Evangelos Kalogerakis
is an assistant professor in computer science at the University of Massachusetts Amherst. His research deals with automated analysis and synthesis of 3D visual content, with particular emphasis on machine learning techniques that learn to perform these tasks by combining data, probabilistic models, and prior knowledge. He obtained his PhD from the University of Toronto in 2010 and BEng from the Technical University of Crete in 2005. He was a postdoctoral researcher at Stanford University from 2010 to 2012. He served on program committees for EG 2014 and 2015, SGP 2012, 2014 and 2015. His research is supported by NSF (CHS1422441).
References
 [ACP03] Allen B., Curless B., Popović Z.: The space of human body shapes: Reconstruction and parameterization from range scans. ACM Trans. Graph. 22, 3 (2003).
 [AKKS99] Ankerst M., Kastenmüller G., Kriegel H.P., Seidl T.: 3D shape histograms for similarity search and classification in spatial databases. In SSD’99 (1999), Springer, pp. 207–226.
 [AKZM14] Averkiou M., Kim V. G., Zheng Y., Mitra N. J.: ShapeSynth: Parameterizing Model Collections for Coupled Shape Exploration and Synthesis. Computer Graphics Forum 33, 2 (2014).
 [AME14] Aubry M., Maturana D., Efros A. A., Russell B. C., Sivic J.: Seeing 3d chairs: Exemplar partbased 2d3d alignment using a large dataset of cad models. In Proc. CVPR (2014).
 [ASK05] Anguelov D., Srinivasan P., Koller D., Thrun S., Rodgers J., Davis J.: Scape: shape completion and animation of people. In Proc. of SIGGRAPH (2005), pp. 408–416.
 [ATC05] Anguelov D., Taskar B., Chatalbashev V., Koller D., Gupta D., Heitz G., Ng A.: Discriminative learning of markov random fields for segmentation of 3D scan data. In CVPR (2005).
 [BB01] Banko M., Brill E.: Mitigating the paucityofdata problem: exploring the effect of training corpus size on classifier performance for natural language processing. In Proc. Int. Conf. on Human Lang. Tech. Research (2001), pp. 1–5.
 [BBOG11] Bronstein A. M., Bronstein M. M., Ovsjanikov M., Guibas L. J.: Shape google: geometric words and expressions for invariant shape retrieval. ACM Trans. Graphics 30, 1 (January 2011), 1–20.
 [BD06] Barutcuoglu Z., DeCoro C.: Hierarchical shape classification using bayesian aggregation. Shape Modeling International (June 2006).
 [Ben09] Bengio Y.: Learning deep architectures for ai. Foundations and Trends in Machine Learning 2, 1 (2009), 1–127.
 [BLVD11] Benhabiles H., Lavoué G., Vandeborre J.P., Daoudi M.: Learning boundary edges for 3dmesh segmentation. Computer Graphics Forum 30, 8 (2011).
 [BMP02] Belongie S., Malik J., Puzicha J.: Shape Matching and Object Recognition Using Shape Contexts. IEEE Trans. Pattern Anal. Mach. Intell. 24, 4 (2002).
 [BRF14] Bo L., Ren X., Fox D.: Learning hierarchical sparse features for RGB(D) object recognition. International Journal of Robotics Research (2014), to appear.
 [BSWR12] Blum M., Springenberg J. T., Wulfing J., Riedmiller M.: A learned feature descriptor for object recognition in RGBD data. In Proc. IEEE Int. Conf. on Rob. and Auto. (2012), pp. 1298–1303.
 [BV99] Blanz V., Vetter T.: A morphable model for the synthesis of 3D faces. In Proc. of SIGGRAPH (1999), pp. 187–194.
 [CAF10a] Cho T. S., Avidan S., Freeman W. T.: The patch transform. IEEE Trans. Pattern Anal. Mach. Intell. 32, 8 (2010), 1489–1501.
 [CAF10b] Cho T. S., Avidan S., Freeman W. T.: A probabilistic image jigsaw puzzle solver. In CVPR (2010), pp. 183–190.
 [CAK12] Campen M., Attene M., Kobbelt L.: A Practical Guide to Polygon Mesh Repairing. In Eurographics tutorials (2012).
 [CCPS13] Choi W., Chao Y.W., Pantofaru C., Savarese S.: Understanding indoor scenes using 3d geometric phrases. In CVPR (2013).
 [CGF09] Chen X., Golovinskiy A., Funkhouser T.: A benchmark for 3D mesh segmentation. ACM Trans. Graph. 28, 3 (2009), 73:1–73:12.
 [CGH14] Chen Y., Guibas L. J., Huang Q.X.: Nearoptimal joint object matching via convex relaxation. CoRR abs/1402.1473 (2014).
 [CGL08] Cole F., Golovinskiy A., Limpaecher A., Barros H. S., Finkelstein A., Funkhouser T., Rusinkiewicz S.: Where do people draw lines? ACM Trans. Graph. (2008).
 [CHZ14] Cao C., Hou Q., Zhou K.: Displaced dynamic expression regression for realtime facial tracking and animation. ACM Trans. Graph. 33, 4 (2014), 43:1–43:10.
 [CK10a] Chaudhuri S., Koltun V.: Datadriven suggestions for creativity support in 3d modeling. ACM Trans. Graph. 29, 6 (2010).
 [CK10b] Chaudhuri S., Koltun V.: Datadriven suggestions for creativity support in 3d modeling. ACM Trans. Graph. 29, 6 (Dec. 2010), 183:1–183:10.
 [CKG13] Chaudhuri S., Kalogerakis E., Giguere S., Funkhouser T.: AttribIt: Content creation with semantic attributes. UIST (Oct. 2013).
 [CKGK11] Chaudhuri S., Kalogerakis E., Guibas L., Koltun V.: Probabilistic reasoning for assemblybased 3d modeling. ACM Trans. Graph. 30, 4 (2011), 35:1–35:10.
 [CLM12] Chang W., Li H., Mitra N., Pauly M., Wand M.: Dynamic Geometry Processing. In Eurographics tutorials (2012).
 [COSH11] Crandall D., Owens A., Snavely N., Huttenlocher D.: Discretecontinuous optimization for largescale structure from motion. CVPR ’11, pp. 3001–3008.
 [CS01] Coyne B., Sproat R.: Wordseye: An automatic texttoscene conversion system. In Proc. of SIGGRAPH (2001).
 [CSPF12] Chen X., Saparov A., Pang B., Funkhouser T.: Schelling points on 3D surface meshes. ACM Trans. Graph. (2012).

 [CTSO03] Chen D.Y., Tian X.P., Shen Y.T., Ouhyoung M.: On visual similarity based 3D model retrieval. Computer Graphics Forum 22, 3 (2003), 223–232.
 [CWLZ13] Cao C., Weng Y., Lin S., Zhou K.: 3d shape regression for realtime facial animation. ACM Trans. Graph. 32, 4 (2013), 41:1–41:10.
 [CWZ14] Chen C., Weng Y., Zhou S., Tong Y., Zhou K.: Facewarehouse: a 3d facial expression database for visual computing. IEEE Trans. Vis. & Comp. Graphics 20, 3 (2014).
 [DTB06] Diebel J., Thrun S., Bruening M.: A bayesian method for probable surface reconstruction and decimation. ACM Transactions on Graphics 25, 1 (2006).
 [FAvK14] Fish N., Averkiou M., van Kaick O., SorkineHornung O., CohenOr D., Mitra N. J.: Metarepresentation of shape families. ACM Trans. Graph. 33, 4 (2014), 34:1–34:11.
 [FFFP06] FeiFei L., Fergus R., Perona P.: Oneshot learning of object categories. IEEE Trans. Pat. Ana. & Mach. Int. 28, 4 (April 2006).
 [FGG13] Fossati A., Gall J., Grabner H., Ren X., Konolige K.: Consumer Depth Cameras for Computer Vision, Chapter 12. Springer, 2013.
 [FGH13] Fouhey D. F., Gupta A., Hebert M.: Datadriven 3d primitives for single image understanding. In Proc. ICCV (2013).
 [FH10] Fisher M., Hanrahan P.: Contextbased search for 3d models. ACM Trans. Graph. 29, 6 (Dec. 2010), 182:1–182:10.
 [FHK04] Frome A., Huber D., Kolluri R., Bulow T., Malik J.: Recognizing objects in range data using regional point descriptors. In Proc. ECCV. 2004, pp. 224–237.
 [FKMS05] Funkhouser T. A., Kazhdan M. M., Min P., Shilane P.: Shapebased retrieval and analysis of 3d models. Commun. ACM 48, 6 (2005), 58–64.
 [FKS04] Funkhouser T., Kazhdan M., Shilane P., Min P., Kiefer W., Tal A., Rusinkiewicz S., Dobkin D.: Modeling by example. ACM Trans. Graph. 23, 3 (Aug. 2004), 652–663.
 [FRS12] Fisher M., Ritchie D., Savva M., Funkhouser T., Hanrahan P.: Examplebased synthesis of 3d object arrangements. ACM Trans. Graph. 31, 6 (2012), 135:1–135:12.
 [FSH11] Fisher M., Savva M., Hanrahan P.: Characterizing structural relationships in scenes using graph kernels. ACM Trans. Graph. 30, 4 (2011), 34:1–34:12.
 [FWX13] Fan L., Wang R., Xu L., Deng J., Liu L.: Modeling by drawing with shadow guidance. 157—166.
 [GF09] Golovinskiy A., Funkhouser T. A.: Consistent segmentation of 3d models. Computers & Graphics 33, 3 (2009), 262–269.
 [GKF09] Golovinskiy A., Kim V. G., Funkhouser T.: Shapebased Recognition of 3D Point Clouds in Urban Environments. ICCV (2009).
 [GLXJ14] Guo X., Lin J., Xu K., Jin X.: Creature grammar for creative modeling of 3d monsters. Graphical Models (Special Issue of GMP) 76, 5 (2014), 376–389.
 [GMB04] Goldberg D., Malon C., Bern M.: A global approach to automatic solution of jigsaw puzzles. Comput. Geom. Theory Appl. 28 (2004), 165–174.
 [GSH07] Gal R., Shamir A., Hassner T., Pauly M., CohenOr D.: Surface reconstruction using local shape priors. In Symp. on Geom. Proc. (2007), SGP ’07, pp. 253–262.
 [GSMCO09] Gal R., Sorkine O., Mitra N. J., CohenOr D.: iwires: an analyzeandedit approach to shape manipulation. ACM Trans. Graph. (2009), 33:1–33:10.
 [HCG14] Huang Q., Chen Y., Guibas L. J.: Scalable semidefinite relaxation for maximum a posterior estimation, 2014.
 [HFG06] Huang Q., Flöry S., Gelfand N., Hofer M., Pottmann H.: Reassembling fractured objects by geometric matching. ACM Trans. Graph. 25, 3 (2006), 569–578.
 [HFL12] Hu R., Fan L., Liu L.: Cosegmentation of 3d shapes via subspace clustering. Comp. Graph. Forum 31, 5 (2012).
 [HG13] Huang Q., Guibas L.: Consistent shape maps via semidefinite programming. Computer Graphics Forum (SGP) 32, 5 (2013), 177–186.
 [HKG11] Huang Q., Koltun V., Guibas L.: Joint shape segmentation using linear programming. ACM Trans. Graph. 30, 6 (2011), 125:1–125:12.
 [Hor84] Horn B. K. P.: Extended Gaussian Images. Proceedings of the IEEE 72, 12 (Dec 1984), 1671–1686.
 [HSG13] Huang Q., Su H., Guibas L.: Finegrained semisupervised labeling of large shape collections. ACM Trans. Graph. 32, 6 (2013), 190:1–190:10.
 [HSR09] Hasler N., Stoll C., Rosenhahn B., Thormählen T., Seidel H.P.: Technical section: Estimating body shape of dressed humans. Comput. Graph. 33, 3 (2009), 211–216.
 [HSS09] Hasler N., Stoll C., Sunkel M., Rosenhahn B., Seidel H.P.: A statistical model of human pose and body shape. Comput. Graph. Forum 28, 2 (2009), 337–346.
 [HSS13] Huang S.S., Shamir A., Shen C.H., Zhang H., Sheffer A., Hu S.M., CohenOr D.: Qualitative organization of collections of shapes via quartet analysis. ACM Trans. Graph. 32, 4 (2013), 71:1–71:10.
 [Hub02] Huber D.: Automatic Threedimensional Modeling from Reality. PhD thesis, Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, 2002.
 [HWG14] Huang Q., Wang F., Guibas L.: Functional map networks for analyzing and exploring large shape collections. ACM Trans. Graph. 33, 4 (2014).
 [HZG12] Huang Q., Zhang G.X., Gao L., Hu S.M., Butscher A., Guibas L.: An optimization approach for extracting and encoding consistent maps in a shape collection. ACM Trans. Graph. 31, 6 (2012), 167:1–167:11.
 [JH99] Johnson A. E., Hebert M.: Using spin images for efficient object recognition in cluttered 3d scenes. IEEE Trans. Pattern Anal. Mach. Intell. 21, 5 (May 1999), 433–449.
 [JWL06] Jiao F., Wang S., Lee C.H., Greiner R., Schuurmans D.: Semisupervised conditional random fields for improved sequence segmentation and labeling. In 21st International Conference on Computational Linguistics (2006).
 [KBLB12] Kokkinos I., Bronstein M., Litman R., Bronstein A.: Intrinsic shape context descriptors for deformable shapes. In Proc. CVPR (2012).
 [KCGF14] Kim V. G., Chaudhuri S., Guibas L., Funkhouser T.: Shape2Pose: HumanCentric Shape Analysis. ACM Trans. Graph. 33, 4 (2014).
 [KCKK12] Kalogerakis E., Chaudhuri S., Koller D., Koltun V.: A probabilistic model for componentbased shape synthesis. ACM Trans. Graph. 31, 4 (2012).
 [KF09] Koller D., Friedman N.: Probabilistic Graphical Models: Principles and Techniques. MIT Press, 2009.
 [KFR03] Kazhdan M., Funkhouser T., Rusinkiewicz S.: Rotation invariant spherical harmonic representation of 3d shape descriptors. Symp. on Geom. Proc. (June 2003).
 [KFR04] Kazhdan M., Funkhouser T., Rusinkiewicz S.: Symmetry descriptors and 3d shape matching. Symp. on Geom. Proc. (2004).
 [KHS10] Kalogerakis E., Hertzmann A., Singh K.: Learning 3d mesh segmentation and labeling. ACM Trans. Graph. 29 (2010), 102:1–102:12.
 [KJS07] Kreavoy V., Julius D., Sheffer A.: Model composition from interchangeable components. In Proc. of Pacific Graphics (2007), IEEE Computer Society, pp. 129–138.
 [KLF11] Kim V. G., Lipman Y., Funkhouser T.: Blended intrinsic maps. ACM Trans. Graph. 30, 4 (2011), 79:1–79:12.
 [KLM12] Kim V. G., Li W., Mitra N. J., DiVerdi S., Funkhouser T.: Exploring collections of 3D models using fuzzy correspondences. ACM Trans. Graph. 31, 4 (2012), 54:1–54:11.
 [KLM13] Kim V. G., Li W., Mitra N. J., Chaudhuri S., DiVerdi S., Funkhouser T.: Learning partbased templates from large collections of 3D shapes. ACM Trans. Graph. 32, 4 (2013), 70:1–70:12.
 [KMYG12] Kim Y. M., Mitra N. J., Yan D.M., Guibas L.: Acquiring 3d indoor environments with variability and repetition. ACM Trans. Graph. 31, 6 (Nov. 2012), 138:1–138:11.
 [KNBH12] Kalogerakis E., Nowrouzezahrai D., Breslav S., Hertzmann A.: Learning Hatching for PenandInk Illustration of Surfaces. ACM Trans. Graph. 31, 1 (2012).
 [KNS09] Kalogerakis E., Nowrouzezahrai D., Simari P., McCrae J., Hertzmann A., Singh K.: Datadriven curvature for realtime line drawing of dynamic scenes. ACM Trans. Graph. 28, 1 (2009), 1–13.
 [KSES14] Kholgade N., Simon T., Efros A., Sheikh Y.: 3d object manipulation in a single photograph using stock 3d models. ACM Trans. Graph. 33, 4 (2014), 127:1–127:12.
 [KSNS07] Kalogerakis E., Simari P., Nowrouzezahrai D., Singh K.: Robust statistical estimation of curvature on discretized surfaces. In Symp. on Geom. Proc. (2007).
 [KT03] Katz S., Tal A.: Hierarchical mesh decomposition using fuzzy clustering and cuts. SIGGRAPH ’03, pp. 954–961.
 [KWT88] Kass M., Witkin A., Terzopoulos D.: Snakes: Active contour models. International Journal of Computer Vision 1, 4 (1988), 321–331.
 [LBBC14] Litman R., Bronstein A. M., Bronstein M. M., Castellani U.: Supervised learning of bagoffeatures shape descriptors using sparse coding. SGP (2014).
 [LBF13] Lai K., Bo L., Fox D.: Unsupervised feature learning for 3d scene labeling. In Proc. IEEE Int. Conf. on Rob. and Auto. (2013).
 [LCHB12] Lv J., Chen X., Huang J., Bao H.: Semisupervised mesh segmentation and labeling. Comp. Graph. Forum 31, 72 (2012).
 [LCK14] Liu T., Chaudhuri S., Kim V., Huang Q.X., Mitra N. J., Funkhouser T.: Creating consistent scene graphs using a probabilistic grammar. ACM Trans. Graph. 33, 6 (2014), to appear.
 [LF09] Lipman Y., Funkhouser T.: Mobius voting for surface correspondence. ACM Trans. Graph. 28, 3 (2009).
 [LH05] Leordeanu M., Hebert M.: A spectral technique for correspondence problems using pairwise constraints. ICCV ’05, pp. 1482–1489.
 [LMB14] Loper M. M., Mahmood N., Black M. J.: MoSh: Motion and shape capture from sparse markers. ACM Transactions on Graphics, (Proc. SIGGRAPH Asia) 33, 6 (Nov. 2014), 220:1–220:13.
 [LMP01] Lafferty J. D., McCallum A., Pereira F. C. N.: Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proc. ICML (2001), pp. 282–289.
 [LMS13] Laga H., Mortara M., Spagnuolo M.: Geometry and context for semantic correspondences and functionality recognition in man-made 3D shapes. ACM Trans. Graph. 32, 5 (2013).
 [LWP10] Li H., Weise T., Pauly M.: Example-based facial rigging. ACM Trans. Graph. 29, 4 (2010), 32:1–32:6.
 [LYYB13] Li H., Yu J., Ye Y., Bregler C.: Realtime facial animation with on-the-fly correctives. ACM Trans. Graph. 32, 4 (2013), 42:1–42:10.
 [LZC11] Lee Y. J., Zitnick L., Cohen M.: ShadowDraw: Real-time user guidance for freehand drawing. ACM Trans. Graph. 30, 4 (2011), 27:1–27:9.
 [Mer07] Merrell P.: Example-based model synthesis. In Proc. I3D (2007), pp. 105–112.
 [MHS14] Ma C., Huang H., Sheffer A., Kalogerakis E., Wang R.: Analogy-driven 3D style transfer. Computer Graphics Forum 33, 2 (2014), 175–184.
 [MPM14] Mattausch O., Panozzo D., Mura C., Sorkine-Hornung O., Pajarola R.: Object detection and classification from large-scale cluttered indoor scans. Computer Graphics Forum 33, 2 (2014).
 [MRS08] Manning C. D., Raghavan P., Schütze H.: Introduction to Information Retrieval. Cambridge University Press, New York, NY, USA, 2008.
 [MWH06] Müller P., Wonka P., Haegler S., Ulmer A., Van Gool L.: Procedural modeling of buildings. ACM Trans. Graph. 25, 3 (2006), 614–623.
 [MWZ14] Mitra N., Wand M., Zhang H., Cohen-Or D., Kim V., Huang Q.-X.: Structure-aware shape processing. SIGGRAPH Course (2014).
 [NBCW11] Nguyen A., Ben-Chen M., Welnicka K., Ye Y., Guibas L.: An optimization approach to improving collections of shape maps. Computer Graphics Forum 30, 5 (2011), 1481–1491.
 [NK03] Novotni M., Klein R.: 3D Zernike descriptors for content based shape retrieval. In Proc. ACM Symp. on Solid Modeling and Applications (2003).
 [NXS12] Nan L., Xie K., Sharf A.: A search-classify approach for cluttered indoor scene understanding. ACM Trans. Graph. 31, 6 (Nov. 2012), 137:1–137:10.
 [OBCS12] Ovsjanikov M., Ben-Chen M., Solomon J., Butscher A., Guibas L.: Functional maps: A flexible representation of maps between shapes. ACM Trans. Graph. 31, 4 (2012), 30:1–30:11.
 [OFCD02] Osada R., Funkhouser T., Chazelle B., Dobkin D.: Shape distributions. ACM Trans. Graph. 21 (October 2002), 807–832.
 [OLGM11] Ovsjanikov M., Li W., Guibas L., Mitra N. J.: Exploration of continuous variability in collections of 3D shapes. ACM Trans. Graph. 30, 4 (2011), 33:1–33:10.
 [OMMG10] Ovsjanikov M., Mérigot Q., Mémoli F., Guibas L. J.: One point isometric matching with the heat kernel. Computer Graphics Forum 29, 5 (2010), 1555–1564.
 [PEP11] Paulovich F., Eler D., Poco J., Botha C., Minghim R., Nonato L.: Piecewise Laplacian-based projection for interactive data exploration and organization. Computer Graphics Forum 30, 3 (2011), 1091–1100.
 [PKS13] Pachauri D., Kondor R., Singh V.: Solving the multiway matching problem by permutation synchronization. In Proc. NIPS (2013), pp. 1860–1868.
 [PMG05] Pauly M., Mitra N. J., Giesen J., Gross M., Guibas L. J.: Example-based 3D scan completion. In Symp. on Geom. Proc. (2005), SGP ’05.
 [QT09] Quattoni A., Torralba A.: Recognizing indoor scenes. In Proc. CVPR (2009), pp. 413–420.
 [ROA13] Rustamov R. M., Ovsjanikov M., Azencot O., BenChen M., Chazal F., Guibas L.: Mapbased exploration of intrinsic shape differences and variability. ACM Trans. Graph. 32, 4 (2013), 72:1–72:12.
 [RR13] Ren X., Ramanan D.: Histograms of sparse codes for object detection. In Proc. CVPR (2013), pp. 3246–3253.
 [RSSS11] Roberts R., Sinha S. N., Szeliski R., Steedly D.: Structure from motion for scenes with large duplicate structures. In CVPR (2011), pp. 3137–3144.
 [SFC11] Shotton J., Fitzgibbon A., Cook M., Sharp T., Finocchio M., Moore R., Kipman A., Blake A.: Real-time human pose recognition in parts from single depth images. In Proc. CVPR (2011).
 [SFCH12] Shen C.-H., Fu H., Chen K., Hu S.-M.: Structure recovery by part assembly. ACM Trans. Graph. 31, 6 (Nov. 2012), 180:1–180:11.
 [Sha08] Shamir A.: A survey on mesh segmentation techniques. Comput. Graph. Forum 27, 6 (2008), 1539–1556.
 [SHB12] Socher R., Huval B., Bhat B., Manning C. D., Ng A. Y.: Convolutional-recursive deep learning for 3D object classification. In Proc. NIPS (2012), pp. 665–673.
 [SHM14] Su H., Huang Q.-X., Mitra N. J., Li Y., Guibas L. J.: Estimating image depth using shape collections. ACM Trans. Graph. 33, 4 (2014), 37:1–37:11.
 [SKHF12] Silberman N., Kohli P., Hoiem D., Fergus R.: Indoor segmentation and support inference from RGB-D images. In Proc. ECCV (2012).
 [SLF11] Secord A., Lu J., Finkelstein A., Singh M., Nealen A.: Perceptual models of viewpoint preference. ACM Trans. Graph. 30, 5 (2011).
 [SLH12] Satkin S., Lin J., Hebert M.: Data-driven scene understanding from 3D models. In BMVC (2012), pp. 128:1–128:11.
 [SM05] Sutton C., McCallum A.: Piecewise training of undirected models. In Proc. of UAI (2005).
 [SMKF04] Shilane P., Min P., Kazhdan M., Funkhouser T.: The Princeton shape benchmark. In Shape Modeling International (2004).
 [SMNS13] Salas-Moreno R. F., Newcombe R. A., Strasdat H., Kelly P. H. J., Davison A. J.: SLAM++: Simultaneous localisation and mapping at the level of objects. In Proc. CVPR (2013), IEEE, pp. 1352–1359.
 [SSCO08] Shapira L., Shamir A., Cohen-Or D.: Consistent mesh partitioning and skeletonisation using the shape diameter function. Vis. Comput. 24 (2008), 249–259.
 [SSL14] Schulz A., Shamir A., Levin D. I. W., Sitthi-Amorn P., Matusik W.: Design and fabrication by example. ACM Trans. Graph. 33, 4 (2014).
 [SSSss] Shapira L., Shalom S., Shamir A., Zhang R. H., Cohen-Or D.: Contextual part analogies in 3D objects. International Journal of Computer Vision (in press).
 [SV01] Saupe D., Vranic D. V.: 3D model retrieval with spherical harmonics and moments. In Proceedings of the 23rd DAGM-Symposium on Pattern Recognition (2001), pp. 392–397.
 [SvKK11] Sidi O., van Kaick O., Kleiman Y., Zhang H., Cohen-Or D.: Unsupervised co-segmentation of a set of shapes via descriptor-space spectral clustering. ACM Trans. Graph. 30, 6 (2011), 126:1–126:10.
 [SW10] Swadzba A., Wachsmuth S.: Indoor scene classification using combined 3D and gist features. In Proc. ACCV (2010), pp. 201–215.
 [SW11] Singer A., Wu H.-T.: Vector diffusion maps and the connection Laplacian, 2011.
 [SX14] Song S., Xiao J.: Sliding shapes for 3D object detection in depth images. In Proc. ECCV (2014).
 [SY07] Schaefer S., Yuksel C.: Example-based skeleton extraction. In Symp. on Geom. Proc. (2007), pp. 153–162.
 [TFF08] Torralba A., Fergus R., Freeman W. T.: 80 million tiny images: a large database for nonparametric object and scene recognition. IEEE Trans. Pat. Ana. & Mach. Int. 30, 11 (2008), 1958–1970.
 [TGY09] Talton J. O., Gibson D., Yang L., Hanrahan P., Koltun V.: Exploratory modeling with collaborative design spaces. ACM Trans. Graph. 28, 5 (2009).
 [TMB14] Tsoli A., Mahmood N., Black M. J.: Breathing life into shape: Capturing, modeling and animating 3D human breathing. ACM Trans. Graph. 33, 4 (July 2014), 52:1–52:11.
 [Tri14] Trimble: Trimble 3D Warehouse. http://sketchup.google.com/3Dwarehouse/, 2014.
 [TV08] Tangelder J. W., Veltkamp R. C.: A survey of content based 3D shape retrieval methods. Multimedia Tools and Applications 39, 3 (2008), 441–471.
 [TYK12] Talton J., Yang L., Kumar R., Lim M., Goodman N., Měch R.: Learning design patterns with Bayesian grammar induction. In Proc. UIST (2012), pp. 63–74.
 [vKTS11] van Kaick O., Tagliasacchi A., Sidi O., Zhang H., Cohen-Or D., Wolf L., Hamarneh G.: Prior knowledge for part correspondence. Computer Graphics Forum 30, 2 (2011), 553–562.
 [vKXZ13] van Kaick O., Xu K., Zhang H., Wang Y., Sun S., Shamir A., Cohen-Or D.: Co-hierarchical analysis of shape structures. ACM Trans. Graph. 32, 4 (2013), 69:1–69:10.
 [vKZHCO11] van Kaick O., Zhang H., Hamarneh G., Cohen-Or D.: A survey on shape correspondence. Comput. Graph. Forum 30, 6 (2011), 1681–1707.
 [WAvK12] Wang Y., Asafi S., van Kaick O., Zhang H., Cohen-Or D., Chen B.: Active co-analysis of a set of shapes. ACM Trans. Graph. 31, 6 (2012), 165:1–165:10.
 [WBLP11] Weise T., Bouaziz S., Li H., Pauly M.: Realtime performance-based facial animation. ACM Trans. Graph. 30, 4 (2011), 77:1–77:10.
 [WGW13] Wang Y., Gong M., Wang T., Cohen-Or D., Zhang H., Chen B.: Projective analysis for 3D shape segmentation. ACM Trans. Graph. 32, 6 (2013), 192:1–192:12.
 [WHG13] Wang F., Huang Q., Guibas L.: Image co-segmentation via consistent functional maps. In Proc. ICCV (2013).
 [WLVGP09] Weise T., Li H., Van Gool L., Pauly M.: Face/off: Live facial puppetry. In Proc. of Symp. on Comp. Anim. (2009), pp. 7–16.
 [WS13] Wang L., Singer A.: Exact and stable recovery of rotations for robust synchronization. Information and Inference 2, 2 (2013), 145–193.
 [XCF13] Xu K., Chen K., Fu H., Sun W.-L., Hu S.-M.: Sketch2Scene: Sketch-based co-retrieval and co-placement of 3D models. ACM Trans. Graph. 32, 4 (2013), 123:1–123:12.
 [XKHK14] Xu K., Kim V., Huang Q., Kalogerakis E.: Online resources: Data-driven shape analysis and processing. (Website under construction), 2014.
 [XLZ10] Xu K., Li H., Zhang H., Cohen-Or D., Xiong Y., Cheng Z.-Q.: Style-content separation by anisotropic part scales. ACM Trans. Graph. 29, 5 (2010), 184:1–184:10.
 [XMZ14] Xu K., Ma R., Zhang H., Zhu C., Shamir A., Cohen-Or D., Huang H.: Organizing heterogeneous scene collections through contextual focal points. ACM Trans. Graph. 33, 4 (2014), 35:1–35:12.
 [XSX14] Xu W., Shi Z., Xu M., Zhou K., Wang J., Zhou B., Wang J., Yuan Z.: Transductive 3D shape segmentation using sparse reconstruction. Comp. Graph. Forum 33, 5 (2014).
 [XXLX14] Xie Z., Xu K., Liu L., Xiong Y.: 3D shape segmentation and labeling via extreme learning machine. Computer Graphics Forum 33, 5 (2014).
 [XXM13] Xie X., Xu K., Mitra N. J., Cohen-Or D., Gong W., Su Q., Chen B.: Sketch-to-design: Context-based part assembly. Comp. Graph. Forum 32, 8 (2013), 233–245.
 [XZCOC12] Xu K., Zhang H., Cohen-Or D., Chen B.: Fit and diverse: Set evolution for inspiring 3D shape galleries. ACM Trans. Graph. 31, 4 (2012), 57:1–57:10.
 [XZZ11] Xu K., Zheng H., Zhang H., Cohen-Or D., Liu L., Xiong Y.: Photo-inspired model-driven 3D object modeling. ACM Trans. Graph. 30, 4 (2011), 80:1–80:10.
 [YCM14] Yumer M. E., Chun W., Makadia A.: Co-segmentation of textured 3D shapes with sparse annotations. In Proc. CVPR (2014).
 [YK12] Yumer M. E., Kara L. B.: Co-abstraction of shape collections. ACM Trans. Graph. 31, 6 (2012), 166:1–166:11.
 [YK14] Yumer M. E., Kara L. B.: Co-constrained handles for deformation in shape collections. ACM Trans. Graph. 33, 6 (2014), to appear.
 [YN10] Yu K., Ng A.: Feature learning for image classification. In ECCV tutorials (2010).
 [ZKP10] Zach C., Klopschitz M., Pollefeys M.: Disambiguating visual relations using loop constraints. In Proc. CVPR (2010).
 [ZMT05] Zhang E., Mischaikow K., Turk G.: Feature-based surface parameterization and texture mapping. ACM Trans. Graph. 24, 1 (2005).
 [ZSSS13] Zia M. Z., Stark M., Schiele B., Schindler K.: Detailed 3D representations for object recognition and modeling. IEEE Trans. Pat. Ana. & Mach. Int. 35, 11 (2013), 2608–2623.
 [ZWK14] Zhao X., Wang H., Komura T.: Indexing 3D scenes using the interaction bisector surface. ACM Trans. Graph. 33, 3 (2014), 22:1–22:15.