The recent surge of interest in the spectral analysis of the Laplace-Beltrami operator (LBO) [Rosenberg:97] has resulted in a wealth of spectral shape signatures that have been successfully applied to a broad range of areas, including object recognition and deformable shape analysis [Levy:06, Reuter:06, Rustamov:07, Bronstein:11, Chunyuan:13b, Biasotti:SHREC17, Rodola:SHREC17], multimedia protection [Tarmissi:09], and shape classification [Gao:14]. The diversified nature of these applications is a powerful testimony to the practical value of spectral shape signatures, which are usually defined as feature vectors representing local and/or global characteristics of a shape and may be broadly classified into two main categories: local and global descriptors. Local descriptors (also called point signatures) are defined at each point of the shape and usually capture the local structure of the shape around that point, while global descriptors are defined on the entire shape.
Most point signatures may easily be aggregated to form global descriptors by integrating over the entire shape. Rustamov [Rustamov:07] proposed a local feature descriptor referred to as the global point signature (GPS), which is a vector whose components are scaled eigenfunctions of the LBO evaluated at each surface point. The GPS signature is invariant under isometric deformations of the shape, but it suffers from the problem of eigenfunction switching whenever the associated eigenvalues are close to each other. This problem was later addressed by the heat kernel signature (HKS) [Sun:09], which is a temporal descriptor defined as an exponentially-weighted combination of the LBO eigenfunctions. HKS is a local shape descriptor with a number of desirable properties, including robustness to small perturbations of the shape, efficiency, and invariance to isometric transformations. The idea of HKS was also independently proposed by Gȩbal et al. [Gebal:09] for 3D shape skeletonization and segmentation under the name of auto diffusion function. The wave kernel signature (WKS) [Aubry:11] was proposed as an alternative that allows access to high-frequency information, giving rise to substantially more accurate matching than HKS. Using the Fourier transform's magnitude, Bronstein and Kokkinos [Kokkinos:10] introduced the scale invariant heat kernel signature (SIHKS), which is constructed based on a logarithmically sampled scale-space.
One of the simplest spectral shape signatures is Shape-DNA [Reuter:06], which is an isometry-invariant global descriptor defined as a truncated sequence of the LBO eigenvalues arranged in increasing order of magnitude. Gao et al. [Gao:14] developed a variant of Shape-DNA, referred to as compact Shape-DNA (cShape-DNA), which is an isometry-invariant signature resulting from applying the discrete Fourier transform to the area-normalized eigenvalues of the LBO. Chaudhari et al. [Chaudhari:14] presented a slightly modified version of the GPS signature by setting the LBO eigenfunctions to unity. This signature, called GPS embedding, is defined as a truncated sequence of inverse square roots of the area-normalized eigenvalues of the LBO. A comprehensive list of spectral descriptors can be found in [Lian:13, Chunyuan:14b].
From the graph Fourier perspective, it can be seen that the HKS is dominated by information from low frequencies, which correspond to macroscopic properties of a shape. Wavelet analysis has some major advantages over the Fourier transform, which makes it an interesting alternative for many applications. In particular, unlike the Fourier transform, wavelet analysis is able to perform both local and multiresolution analysis. Classical wavelets are constructed by translating and scaling a mother wavelet, which generates a set of functions through the scaling and translation operations. The wavelet transform coefficients are then obtained by taking the inner product of the input function with the translated and scaled waveforms. The application of wavelets to graphs (or triangle meshes) is, however, problematic and not straightforward, due in part to the fact that it is unclear how to apply the scaling operation to a signal (or function) defined on the mesh vertices. To tackle this problem, Coifman and Lafon [Coifman:06] introduced diffusion wavelets, which generalize classical wavelets by allowing for multiscale analysis on graphs. The construction of diffusion wavelets interacts with the underlying graph through repeated applications of a diffusion operator, which induces a scaling process. Hammond et al. [Hammond:11] showed that the wavelet transform can be performed in the graph Fourier domain, and proposed a spectral graph wavelet transform that is defined in terms of the eigensystem of the graph Laplacian matrix. More recently, a spectral graph wavelet signature (SGWS) was introduced in [Chunyuan:13b, Chunyuan:13c, Masoumi:16]. SGWS is a multiresolution local descriptor that is not only invariant to isometries, but also compact and easy to compute, and it combines the advantages of both band-pass and low-pass filters.
A popular approach for transforming local descriptors into global representations that can be used for 3D shape recognition and classification is the bag-of-features (BoF) model [Bronstein:11]. The task in the shape classification problem is to assign a shape to a class chosen from a predefined set of classes. The BoF model represents each shape in the training dataset as a collection of unordered feature descriptors extracted from local areas of the shape, just as words are local features of a document. A baseline BoF approach quantizes each local descriptor to its nearest cluster center using K-means clustering and then encodes each shape as a histogram over cluster centers by counting the number of assignments per cluster. These cluster centers form a visual vocabulary or codebook whose elements are often referred to as visual words or codewords. Although the BoF paradigm has been shown to provide significant levels of performance, it does not, however, take into consideration the spatial relations between features, which may have an adverse effect not only on its descriptive ability but also on its discriminative power. To account for the spatial relations between features, Bronstein et al. introduced a generalization of the bag of features, called spatially sensitive bags of features (SS-BoF) [Bronstein:11]. The SS-BoF is a global descriptor defined in terms of mid-level features and the heat kernel, and can be represented by a square matrix whose elements represent the frequency of appearance of nearby codewords in the vocabulary. In the same spirit, Bu et al. [Bu:14] recently proposed the geodesic-aware bags of features (GA-BoF) for 3D shape classification by replacing the heat kernel in SS-BoF with a geodesic exponential kernel.
In this paper, we propose a 3D shape classification approach, called SGWC-BoF, which employs spectral graph wavelet codes (SGWC) obtained from spectral graph wavelet signatures (i.e. local descriptors) via the soft-assignment coding step of the BoF model in conjunction with a geodesic exponential kernel for capturing the spatial relations between features. Shape classification [Masoumi:17] is the process of organizing a dataset of shapes into a known number of classes, and the task is to assign new shapes to one of these classes. In addition to taking into consideration the spatial relations between features via a geodesic exponential kernel, the proposed approach performs classification on spectral graph wavelet codes, thereby seamlessly capturing the similarity between these mid-level features. We not only show that our formulation allows us to take into account the spatial layout of features, but we also demonstrate that the proposed framework yields better classification accuracy results compared to state-of-the-art methods, while remaining computationally attractive. The main contributions of this paper may be summarized as follows:
We present local shape descriptors using multiresolution analysis of spectral graph wavelets.
We construct mid-level features by embedding the local shape descriptors into the visual vocabulary space using the soft assignment coding step of the bag-of-features paradigm.
We introduce a global descriptor, which is constructed by aggregating mid-level features weighted by a geodesic exponential kernel.
The remainder of this paper is organized as follows. In Section 2, we briefly overview the Laplace-Beltrami operator and spectral signatures. In Section 3, we introduce a three-step feature description framework for 3D shape classification, and we discuss in detail its main algorithmic steps. Experimental results are presented in Section 4. Finally, we conclude in Section 5 and point out some future work directions.
A 3D shape is usually modeled as a triangle mesh whose vertices are sampled from a Riemannian manifold. A triangle mesh $\mathcal{M}$ may be defined as a graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$ or $\mathcal{M} = (\mathcal{V}, \mathcal{E}, \mathcal{T})$, where $\mathcal{V} = \{v_1, \dots, v_n\}$ is the set of vertices, $\mathcal{E} = \{e_{ij}\}$ is the set of edges, and $\mathcal{T}$ is the set of triangles. Each edge $e_{ij}$ connects a pair of vertices $\{v_i, v_j\}$. Two distinct vertices $v_i$ and $v_j$ are adjacent (denoted by $v_i \sim v_j$ or simply $i \sim j$) if they are connected by an edge, i.e. $e_{ij} \in \mathcal{E}$.
2.1 Laplace-Beltrami Operator
Given a compact Riemannian manifold $\mathcal{M}$, the space $L^2(\mathcal{M})$ of all smooth, square-integrable functions on $\mathcal{M}$ is a Hilbert space endowed with inner product $\langle f_1, f_2 \rangle = \int_{\mathcal{M}} f_1(x)\, f_2(x)\, da(x)$, for all $f_1, f_2 \in L^2(\mathcal{M})$, where $da(x)$ (or simply $da$) denotes the measure from the area element of a Riemannian metric on $\mathcal{M}$. Given a twice-differentiable, real-valued function $f: \mathcal{M} \to \mathbb{R}$, the Laplace-Beltrami operator (LBO) is defined as $\Delta_{\mathcal{M}} f = -\operatorname{div}(\nabla_{\mathcal{M}} f)$, where $\nabla_{\mathcal{M}} f$ is the intrinsic gradient vector field and $\operatorname{div}$ is the divergence operator [Rosenberg:97]. The LBO is a linear, positive semi-definite operator acting on the space of real-valued functions defined on $\mathcal{M}$, and it is a generalization of the Laplace operator to non-Euclidean spaces.
Discretization. A real-valued function $f$ defined on the mesh vertex set may be represented as an $n$-dimensional vector $\mathbf{f} = (f(v_1), \dots, f(v_n))^{T}$, where the $i$th component $f(v_i)$ denotes the function value at the $i$th vertex in $\mathcal{V}$. Using a mixed finite element/finite volume method on triangle meshes [Meyer:03], the value of $\Delta_{\mathcal{M}} f$ at a vertex $v_i$ (or simply $\Delta f(v_i)$) can be approximated using the cotangent weight scheme as follows:
$$ \Delta f(v_i) = \frac{1}{a_i} \sum_{v_j \sim v_i} \frac{\cot\alpha_{ij} + \cot\beta_{ij}}{2}\,\bigl(f(v_i) - f(v_j)\bigr), $$
where $\alpha_{ij}$ and $\beta_{ij}$ are the angles opposite the edge $[v_i, v_j]$ in the two triangles that are adjacent to it, and $a_i$ is the area of the Voronoi cell at vertex $v_i$. It should be noted that the cotangent weight scheme is numerically consistent and preserves several important properties of the continuous LBO, including symmetry and positive semi-definiteness [Wardetzky:07].
Spectral Analysis. The matrix associated with the discrete approximation of the LBO is given by $\mathbf{L} = \mathbf{A}^{-1}\mathbf{W}$, where $\mathbf{A} = \operatorname{diag}(a_1, \dots, a_n)$ is a positive definite diagonal matrix (mass matrix), and $\mathbf{W} = (w_{ij})$ is a sparse symmetric matrix (stiffness matrix). Each diagonal element $a_i$ is the area of the Voronoi cell at vertex $v_i$, and the weights $w_{ij}$ are given by
$$ w_{ij} = \begin{cases} -\dfrac{\cot\alpha_{ij} + \cot\beta_{ij}}{2} & \text{if } v_i \sim v_j, \\[1ex] \displaystyle\sum_{v_k \sim v_i} \dfrac{\cot\alpha_{ik} + \cot\beta_{ik}}{2} & \text{if } i = j, \\[1ex] 0 & \text{otherwise}, \end{cases} $$
where $\alpha_{ij}$ and $\beta_{ij}$ are the opposite angles of the two triangles that are adjacent to the edge $[v_i, v_j]$.
The eigenvalues and eigenvectors of $\mathbf{L}$ can be found by solving the generalized eigenvalue problem $\mathbf{W}\boldsymbol{\varphi} = \lambda\mathbf{A}\boldsymbol{\varphi}$ using, for instance, the Arnoldi method of ARPACK¹, where $\lambda$ are the eigenvalues and $\boldsymbol{\varphi}$ are the unknown associated eigenfunctions (i.e. eigenvectors, which can be thought of as functions on the mesh vertices). We may sort the eigenvalues in ascending order as $0 = \lambda_1 \le \lambda_2 \le \dots \le \lambda_n$ with associated orthonormal eigenfunctions $\boldsymbol{\varphi}_1, \boldsymbol{\varphi}_2, \dots, \boldsymbol{\varphi}_n$, where the orthogonality of the eigenfunctions is defined in terms of the $\mathbf{A}$-inner product, i.e.
$$ \langle \boldsymbol{\varphi}_i, \boldsymbol{\varphi}_j \rangle_{\mathbf{A}} = \boldsymbol{\varphi}_i^{T}\mathbf{A}\,\boldsymbol{\varphi}_j = \delta_{ij} $$
for all $i, j = 1, \dots, n$. We may rewrite the generalized eigenvalue problem in matrix form as $\mathbf{W}\boldsymbol{\Phi} = \mathbf{A}\boldsymbol{\Phi}\boldsymbol{\Lambda}$, where $\boldsymbol{\Lambda} = \operatorname{diag}(\lambda_1, \dots, \lambda_n)$ is an $n \times n$ diagonal matrix with the eigenvalues on the diagonal, and $\boldsymbol{\Phi} = (\boldsymbol{\varphi}_1, \dots, \boldsymbol{\varphi}_n)$ is an $n \times n$ matrix whose $i$th column is the $\mathbf{A}$-orthonormal eigenvector $\boldsymbol{\varphi}_i$.

¹ARPACK (ARnoldi PACKage) is a collection of Fortran subroutines for computing the eigenvalues and eigenvectors of large matrices; MATLAB's eigs function provides an interface to it.
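To make the discretization concrete, the following Python sketch (the paper's implementation is in MATLAB; the function names and the barycentric one-third area simplification of the Voronoi cells are our assumptions) assembles the stiffness and mass matrices of a tiny closed mesh and solves the generalized eigenvalue problem:

```python
import numpy as np
from scipy.linalg import eigh

def cotangent_laplacian(verts, faces):
    """Stiffness matrix W (cotangent weights) and lumped mass matrix A.

    Note: barycentric vertex areas (one third of the incident triangle
    areas) are used here as a common simplification of the Voronoi cells.
    """
    n = len(verts)
    W = np.zeros((n, n))
    A = np.zeros(n)
    for tri in faces:
        i, j, k = tri
        area = 0.5 * np.linalg.norm(np.cross(verts[j] - verts[i], verts[k] - verts[i]))
        for a in range(3):
            p, q, r = tri[a], tri[(a + 1) % 3], tri[(a + 2) % 3]
            # cotangent of the angle at p, which faces the edge (q, r)
            u, v = verts[q] - verts[p], verts[r] - verts[p]
            cot = np.dot(u, v) / np.linalg.norm(np.cross(u, v))
            W[q, r] -= 0.5 * cot          # off-diagonal: -(cot alpha + cot beta)/2
            W[r, q] -= 0.5 * cot
            A[p] += area / 3.0            # lumped vertex area
    W[np.diag_indices(n)] = -W.sum(axis=1)  # rows sum to zero -> constant null vector
    return W, np.diag(A)

# regular tetrahedron: the smallest closed triangle mesh
verts = np.array([[1., 1., 1.], [1., -1., -1.], [-1., 1., -1.], [-1., -1., 1.]])
faces = [(0, 1, 2), (0, 3, 1), (0, 2, 3), (1, 3, 2)]
W, A = cotangent_laplacian(verts, faces)

# generalized eigenvalue problem  W phi = lambda A phi
eigvals, Phi = eigh(W, A)   # ascending eigenvalues, A-orthonormal eigenvectors
```

For this tetrahedron the smallest eigenvalue is zero with a constant eigenvector, and the columns returned by `eigh` satisfy the $\mathbf{A}$-orthonormality condition above.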
2.2 Spectral Shape Signatures
In recent years, several local descriptors based on the eigensystem of the LBO have been proposed in the 3D shape analysis literature, including the heat kernel signature (HKS) and the wave kernel signature (WKS) [Sun:09, Aubry:11]. Both HKS and WKS have an elegant physical interpretation: the HKS describes the amount of heat remaining at a mesh vertex after a certain time, whereas the WKS is the probability of measuring a quantum particle with a certain energy distribution at that vertex. The HKS at a vertex $v_j$ and time $t$ is defined as
$$ h_t(v_j) = \sum_{k=1}^{n} e^{-t\lambda_k}\,\varphi_k^2(v_j), $$
where $\lambda_k$ and $\varphi_k$ are the eigenvalues and associated eigenfunctions of the LBO.
The HKS contains information mainly from low frequencies, which correspond to macroscopic features of the shape, and thus exhibits good discrimination ability in shape retrieval tasks. With multiple scaling factors $t$, a collection of low-pass filters is established. The larger $t$ is, the more high frequencies are suppressed. However, different frequencies are always mixed in the HKS, and high-precision localization tasks may fail due in part to the suppression of the high-frequency information, which corresponds to microscopic features. To circumvent these disadvantages, Aubry et al. [Aubry:11] introduced the WKS, which is defined at a vertex $v_j$ and energy level $e$ as follows:
$$ w_e(v_j) = C_e \sum_{k=1}^{n} \varphi_k^2(v_j)\, \exp\!\left(-\frac{(e - \log\lambda_k)^2}{2\sigma^2}\right), $$
where $C_e$ is a normalization constant. The WKS explicitly separates the influences of different frequencies, treating all frequencies equally. Thus, different spatial scales are naturally separated, making high-precision feature localization possible.
Given a range of discrete scales (or energy levels), a bank of filters is constructed for each signature, and thus a vertex $v_j$ on the mesh surface can be described by a $p$-dimensional point signature vector $\mathbf{s}(v_j) = \bigl(s_{t_1}(v_j), \dots, s_{t_p}(v_j)\bigr)$, where $s_t$ denotes the signature at scale $t$.
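A minimal sketch of how HKS and WKS values follow from an eigensystem, using a synthetic orthogonal matrix in place of actual LBO eigenfunctions (all names and values are illustrative, and Python stands in for the paper's MATLAB implementation):

```python
import numpy as np

def hks(eigvals, Phi, t):
    """Heat kernel signature at every vertex for diffusion time t:
    h_t(v) = sum_k exp(-t * lambda_k) * phi_k(v)^2."""
    return (np.exp(-t * eigvals) * Phi**2).sum(axis=1)

def wks(eigvals, Phi, e, sigma):
    """Wave kernel signature at energy level e, using log-scaled eigenvalues
    and skipping the zero eigenvalue (whose logarithm is undefined)."""
    lam, U = eigvals[1:], Phi[:, 1:]
    w = np.exp(-(e - np.log(lam))**2 / (2 * sigma**2))
    return (w * U**2).sum(axis=1) / w.sum()   # C_e normalizes the weights

# toy eigensystem: orthogonal columns stand in for LBO eigenfunctions
rng = np.random.default_rng(0)
Phi, _ = np.linalg.qr(rng.standard_normal((6, 6)))
lam = np.array([0.0, 0.5, 1.0, 2.0, 3.5, 5.0])

h = hks(lam, Phi, t=0.1)
w = wks(lam, Phi, e=0.0, sigma=1.0)
```

Evaluating either signature over a range of scales or energy levels yields the $p$-dimensional point signature vector described above.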
In this section, we provide a detailed description of our proposed 3D shape classification method that utilizes spectral graph wavelets in conjunction with the BoF paradigm. Shape classification is the process of organizing a dataset of shapes into a known number of classes, and the task is to assign new shapes to one of these classes. It is common practice in classification to randomly split the available data into training and test sets. Classification aims to learn a classifier (also called predictor or classification model) from labeled training data. The training data consist of a set of training examples or instances that are labeled with predefined classes. The resulting, trained model is subsequently applied to the test data to classify future (unseen) data instances into these classes. The test data, which consists of data instances with unknown class labels, is used to evaluate the performance of the classification model and determine its accuracy in terms of the number of test instances correctly or incorrectly predicted by the model. A good classifier should result in high accuracy, or equivalently, in few misclassifications.
In our proposed framework, each 3D shape in the dataset is first represented by local descriptors, which are arranged into a spectral graph wavelet signature matrix. Then, we perform soft-assignment coding by embedding local descriptors into the visual vocabulary space, resulting in mid-level features which we refer to as spectral graph wavelet codes (SGWC). It is important to point out that the vocabulary is computed offline by concatenating all the spectral graph wavelet signature matrices into a data matrix, followed by applying the K-means algorithm to find the data cluster centers.
In a bid to capture the spatial relations between features, we compute a global descriptor of each shape in terms of a geodesic exponential kernel and mid-level features, resulting in a SGWC-BoF matrix which is then transformed into a SGWC-BoF vector by stacking its columns one underneath the other. The last stage of the proposed approach is to perform classification on the SGWC-BoF vectors using a classification algorithm. The flowchart of the proposed framework is depicted in Figure 1. Multiclass support vector machines (SVMs) are widely used supervised learning methods for classification. Supervised learning algorithms consist of two main steps: a training step and a test step. In the training step, a classification model (classifier) is learned from the training data by a learning algorithm (e.g., SVMs). In the test step, the learned model is evaluated on a set of test data to predict the class labels of unseen instances and hence assess the classification accuracy.
3.1 Spectral Graph Wavelet Transform
For any graph signal $f$ defined on the mesh vertices, the forward and inverse graph Fourier transforms (also called manifold harmonic and inverse manifold harmonic transforms) are defined as
$$ \hat{f}(\lambda_k) = \langle f, \varphi_k \rangle = \sum_{i=1}^{n} f(i)\,\varphi_k(i) \quad\text{and}\quad f(i) = \sum_{k=1}^{n} \hat{f}(\lambda_k)\,\varphi_k(i), $$
respectively, where $\hat{f}(k)$ is the value of $\hat{f}$ at eigenvalue $\lambda_k$ (i.e. $\hat{f}(k) = \hat{f}(\lambda_k)$). In particular, the graph Fourier transform of a delta function $\delta_j$ centered at vertex $j$ is given by
$$ \hat{\delta}_j(\lambda_k) = \varphi_k(j). $$
The forward and inverse graph Fourier transforms may be expressed in matrix-vector multiplication form as
$$ \hat{\mathbf{f}} = \boldsymbol{\Phi}^{T}\mathbf{f} \quad\text{and}\quad \mathbf{f} = \boldsymbol{\Phi}\hat{\mathbf{f}}. $$
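The transform pair can be checked numerically. The sketch below assumes unit vertex areas (so that $\boldsymbol{\Phi}$ is orthogonal) and uses a random orthogonal matrix in place of actual eigenfunctions; everything here is an illustrative stand-in:

```python
import numpy as np

def gft(f, Phi):
    """Forward graph Fourier transform: f_hat(k) = <f, phi_k>."""
    return Phi.T @ f

def igft(f_hat, Phi):
    """Inverse graph Fourier transform: f = sum_k f_hat(k) * phi_k."""
    return Phi @ f_hat

# orthonormal "eigenfunctions" on a 5-vertex toy graph (unit vertex areas)
rng = np.random.default_rng(1)
Phi, _ = np.linalg.qr(rng.standard_normal((5, 5)))

f = rng.standard_normal(5)
f_hat = gft(f, Phi)

# transform of a delta centred at vertex 3: its k-th coefficient is phi_k(3)
delta3_hat = gft(np.eye(5)[:, 3], Phi)
```

Round-tripping through `gft` and `igft` recovers the signal exactly, and the delta function's coefficients are the eigenfunction values at its vertex, as stated above.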
Wavelet Function. The spectral graph wavelet transform is determined by the choice of a spectral graph wavelet generating kernel $g: \mathbb{R}^{+} \to \mathbb{R}^{+}$, which is analogous to the Fourier domain wavelet. To act as a band-pass filter, the kernel $g$ should satisfy $g(0) = 0$ and $\lim_{x\to\infty} g(x) = 0$.
Let $g$ be a given kernel function and denote by $T_g^t = g(t\Delta)$ the wavelet operator at scale $t$. Similar to the Fourier domain, the graph Fourier transform of $T_g^t f$ is given by
$$ \widehat{T_g^t f}(\lambda_k) = g(t\lambda_k)\,\hat{f}(\lambda_k), $$
where $g(t\lambda_k)$ acts as a scaled band-pass filter. Thus, the inverse graph Fourier transform gives
$$ (T_g^t f)(i) = \sum_{k=1}^{n} g(t\lambda_k)\,\hat{f}(\lambda_k)\,\varphi_k(i). $$
Applying the wavelet operator to a delta function $\delta_j$ centered at vertex $j$ (i.e. $\psi_{t,j} = T_g^t\,\delta_j$), the spectral graph wavelet localized at vertex $j$ and scale $t$ is then given by
$$ \psi_{t,j}(i) = \sum_{k=1}^{n} g(t\lambda_k)\,\varphi_k(j)\,\varphi_k(i). $$
This indicates that shifting the wavelet to vertex $j$ corresponds to a multiplication by $\varphi_k(j)$ in the spectral domain. It should be noted that $g$ is able to modulate the spectral wavelets only for $t\lambda$ within the domain of the spectrum of the LBO. Thus, an upper bound $\lambda_{\max}$ on the largest eigenvalue is required to provide knowledge on the spectrum in practical applications.
Hence, the spectral graph wavelet coefficients of a given function $f$ can be generated from its inner product with the spectral graph wavelets:
$$ W_f(t, j) = \langle f, \psi_{t,j} \rangle = \sum_{k=1}^{n} g(t\lambda_k)\,\hat{f}(\lambda_k)\,\varphi_k(j). $$
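A small numerical sketch of these formulas, with a synthetic eigensystem (unit vertex areas) and the common band-pass choice $g(x) = x e^{-x}$; the kernel and all names are assumptions for illustration, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(2)
Phi, _ = np.linalg.qr(rng.standard_normal((6, 6)))   # toy eigenfunctions
lam = np.array([0.0, 0.4, 1.1, 2.0, 3.2, 4.5])       # toy eigenvalues

def g(x):
    """Band-pass generating kernel: g(0) = 0 and g(x) -> 0 as x -> infinity."""
    return x * np.exp(-x)

def spectral_wavelet(t, j):
    """psi_{t,j}(i) = sum_k g(t * lambda_k) phi_k(j) phi_k(i)."""
    return Phi @ (g(t * lam) * Phi[j, :])

def wavelet_coeffs(f, t):
    """W_f(t, j) = <f, psi_{t,j}> for every vertex j at scale t."""
    return Phi @ (g(t * lam) * (Phi.T @ f))

f = rng.standard_normal(6)
Wf = wavelet_coeffs(f, t=1.0)
```

Because $g(0) = 0$, the coefficients contain no contribution from the constant eigenfunction, which is exactly the band-pass behavior required of the generating kernel.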
Scaling Function. Similar to the low-pass scaling functions in classical wavelet analysis, a second class of waveforms is used as low-pass filters to better encode the low-frequency content of a function defined on the mesh vertices. To act as a low-pass filter, the scaling function $h: \mathbb{R}^{+} \to \mathbb{R}$ should satisfy $h(0) > 0$ and $h(x) \to 0$ as $x \to \infty$. Similar to the wavelet kernels, the scaling functions are given by
$$ \phi_{j}(i) = \sum_{k=1}^{n} h(\lambda_k)\,\varphi_k(j)\,\varphi_k(i), $$
and their spectral coefficients are
$$ S_f(j) = \langle f, \phi_j \rangle = \sum_{k=1}^{n} h(\lambda_k)\,\hat{f}(\lambda_k)\,\varphi_k(j). $$
A major advantage of using the scaling function is to ensure that the original signal $f$ can be stably recovered when sampling the scale parameter $t$ with a discrete number of values $t_1, \dots, t_L$. As demonstrated in [Hammond:11], given a set of scales $\{t_\ell\}_{\ell=1}^{L}$, the set $\{\phi_j\} \cup \{\psi_{t_\ell, j} : \ell = 1, \dots, L\}$ over all vertices $j$ forms a spectral graph wavelet frame with bounds
$$ A = \min_{\lambda \in [0, \lambda_{\max}]} G(\lambda), \qquad B = \max_{\lambda \in [0, \lambda_{\max}]} G(\lambda), \qquad \text{where } G(\lambda) = h(\lambda)^2 + \sum_{\ell=1}^{L} g(t_\ell\,\lambda)^2. $$
The stable recovery of $f$ is ensured when $A$ and $B$ are bounded away from zero. Additionally, the crux of the scaling function is to smoothly represent the low-frequency content of the signal on the mesh. Thus, the design of the scaling function $h$ is uncoupled from the choice of the wavelet generating kernel $g$.
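The frame bounds can be estimated numerically by sweeping $G(\lambda)$ over the spectrum of interest. The kernel and parameter choices below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def g(x):
    return x * np.exp(-x)                  # band-pass wavelet kernel (illustrative)

def h(x):
    return 1.3 * np.exp(-(x / 0.6)**4)     # low-pass scaling kernel (illustrative)

scales = np.geomspace(0.5, 4.0, 4)         # logarithmically spaced wavelet scales
lam = np.linspace(1e-3, 10.0, 2000)        # sweep over the spectrum of interest

# G(lambda) = h(lambda)^2 + sum_l g(t_l * lambda)^2
G = h(lam)**2 + sum(g(t * lam)**2 for t in scales)
A_bound, B_bound = G.min(), G.max()        # frame bounds over the sampled spectrum
```

With the scaling function included, the minimum of $G$ stays strictly positive near $\lambda = 0$, which is precisely what guarantees stable recovery.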
3.2 Local Descriptors
Wavelets are useful in describing functions at different levels of resolution. To characterize the localized context around a mesh vertex $j$, we assume that the signal on the mesh is a unit impulse function, that is, $f = \delta_j$ at each mesh vertex $j$. Thus, it follows from (12) that the spectral graph wavelet coefficients are
$$ W_{\delta_j}(t, j) = \sum_{k=1}^{n} g(t\lambda_k)\,\varphi_k^2(j), $$
and that the coefficients of the scaling function are
$$ S_{\delta_j}(j) = \sum_{k=1}^{n} h(\lambda_k)\,\varphi_k^2(j). $$
Following the multiresolution analysis, the spectral graph wavelet and scaling function coefficients are collected to form the spectral graph wavelet signature at vertex $j$, where $R$ is the resolution parameter and the shape signature at resolution level $R$ consists of the wavelet coefficients $W_{\delta_j}(t_\ell, j)$ at the scales associated with that level together with the scaling function coefficient $S_{\delta_j}(j)$.
The wavelet scales are selected to be logarithmically equispaced between maximum and minimum scales $t_{\max}$ and $t_{\min}$, respectively. Thus, the resolution level $R$ determines how finely the scales sample the spectrum. At resolution $R = 1$, the spectral graph wavelet signature is a 2-dimensional vector consisting of two elements: one spectral graph wavelet coefficient and one scaling function coefficient. At resolution $R = 2$, the spectral graph wavelet signature is a 5-dimensional vector consisting of five elements (four spectral graph wavelet coefficients and one scaling function coefficient). In general, the dimension of a spectral graph wavelet signature at vertex $j$ can be expressed in terms of the resolution as $p = R^2 + 1$.
Hence, for a $p$-dimensional signature, we define a spectral graph wavelet signature matrix as $\mathbf{S} = (\mathbf{s}(1), \dots, \mathbf{s}(n))$, where $\mathbf{s}(j)$ is the signature at vertex $j$ and $n$ is the number of mesh vertices. In our implementation, we used the Mexican hat wavelet as the kernel generating function $g$. In addition, we used a scaling function of the form $h(x) = \gamma \exp(-(x/x_0)^4)$, where the parameter $\gamma$ is set such that $h(0)$ has the same value as the maximum value of $g$. The maximum and minimum scales $t_{\max}$ and $t_{\min}$ are set from the bounds of the spectrum.
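Putting the pieces together, the following sketch assembles a signature matrix for a delta input at a single resolution setting. The kernels, scales, and eigensystem are illustrative stand-ins for the paper's MATLAB implementation and multiresolution scheme:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8
Phi, _ = np.linalg.qr(rng.standard_normal((n, n)))   # toy eigenfunctions
lam = np.concatenate(([0.0], np.sort(rng.uniform(0.2, 5.0, n - 1))))

g = lambda x: x * np.exp(-x)                 # band-pass generating kernel (illustrative)
h = lambda x: 1.3 * np.exp(-(x / 2.0)**4)    # low-pass scaling kernel (illustrative)
scales = np.geomspace(0.5, 4.0, 4)           # scales for one resolution setting

def sgws(j):
    """Signature of vertex j for a delta input: one wavelet coefficient
    per scale, W(t, j) = sum_k g(t lam_k) phi_k(j)^2, plus one
    scaling-function coefficient S(j) = sum_k h(lam_k) phi_k(j)^2."""
    phi2 = Phi[j, :]**2
    return np.array([np.sum(g(t * lam) * phi2) for t in scales]
                    + [np.sum(h(lam) * phi2)])

S = np.column_stack([sgws(j) for j in range(n)])   # p x n signature matrix
```

Each column of `S` is the local descriptor of one vertex; the last row, produced by the low-pass scaling function, is strictly positive by construction.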
The geometry captured at each resolution of the spectral graph wavelet signature can be viewed as the area under the curves shown in Figure 2. For a given resolution $R$, the information from a specific range of the spectrum corresponds to the associated areas under the kernel curves. As the resolution increases, the partition of the spectrum becomes finer, and thus a larger portion of the spectrum is highly weighted.
[Figure 2: (a) heat kernel; (b) wave kernel; (c)-(h) Mexican hat kernel at increasing resolutions R.]
3.3 Mid-Level Features
The BoF model aggregates local descriptors of a shape in an effort to provide a simple representation that may be used to facilitate comparison between shapes. We model each 3D shape as a triangle mesh with $n$ vertices. The BoF model consists of four main steps: feature extraction and description, codebook design, feature coding, and feature pooling. The idea of the BoF paradigm on 3D shapes is illustrated in Figure 3.
[Figure 3: feature extraction, feature description, vector quantization, bag of features.]
Feature Extraction and Description. In the BoF paradigm, a 3D shape is represented as a collection of local descriptors of the same dimension $p$, where the order of the different feature vectors is of no importance. Local descriptors may be classified into two main categories: dense and sparse. Dense descriptors are computed at each point (vertex) of the shape, while sparse descriptors are computed by identifying a set of salient points using a feature detection algorithm. In our proposed framework, we represent each shape by a matrix $\mathbf{S}$ of spectral graph wavelet signatures, where each $p$-dimensional feature vector $\mathbf{s}(j)$ is a dense, local descriptor that encodes the local structure around the $j$th vertex of the mesh.
Codebook Design. A codebook (or visual vocabulary) is constructed via clustering by quantizing the local descriptors (i.e. spectral graph wavelet signatures) into a certain number of codewords. These codewords are usually defined as the centers of the $K$ clusters obtained by performing an unsupervised learning algorithm (e.g., vector quantization via K-means clustering) on the signature matrix. The codebook is the set $\{\mathbf{d}_1, \dots, \mathbf{d}_K\}$ of size $K$, which may be represented by a vocabulary matrix $\mathbf{D} = (\mathbf{d}_1, \dots, \mathbf{d}_K)$.
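A minimal Lloyd-iteration sketch of the offline codebook construction (a stand-in for a full K-means implementation; the descriptors here are synthetic 2D points, and the evenly strided initialization is only to keep the toy example deterministic):

```python
import numpy as np

def kmeans_codebook(X, k, iters=50):
    """Build a codebook of k codewords from the rows of X with plain
    Lloyd iterations. Evenly strided initial centers keep this toy
    deterministic; K-means++ or random restarts would be used in practice."""
    D = X[::max(len(X) // k, 1)][:k].copy()
    for _ in range(iters):
        d2 = ((X[:, None, :] - D[None, :, :])**2).sum(-1)   # squared distances
        labels = d2.argmin(axis=1)                          # nearest codeword
        for c in range(k):
            if np.any(labels == c):
                D[c] = X[labels == c].mean(axis=0)          # recenter
    return D

# toy local descriptors drawn around three well-separated centers
rng = np.random.default_rng(4)
X = np.concatenate([rng.normal(m, 0.1, size=(30, 2)) for m in (0.0, 5.0, 10.0)])
D = kmeans_codebook(X, k=3)
```

On this well-separated toy data the recovered codewords coincide with the cluster means, which is exactly the behavior the codebook-design step relies on.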
Feature Coding. The goal of feature coding is to embed the local descriptors in the vocabulary space. Each spectral graph wavelet signature $\mathbf{s}(j)$ is mapped to a codeword in the codebook via the cluster soft-assignment matrix $\mathbf{M} = (m_{jk})$ whose elements are given by
$$ m_{jk} = \frac{\exp\bigl(-\|\mathbf{s}(j) - \mathbf{d}_k\|^2 / (2\sigma^2)\bigr)}{\sum_{\ell=1}^{K} \exp\bigl(-\|\mathbf{s}(j) - \mathbf{d}_\ell\|^2 / (2\sigma^2)\bigr)}, $$
where $\|\cdot\|$ denotes the $\ell_2$-norm, and $\sigma$ is a smoothing parameter that controls the softness of the assignment. Unlike hard-assignment coding, in which a local descriptor is assigned to the nearest cluster, soft-assignment coding assigns descriptors to every cluster center with different probabilities in an effort to improve the quantization properties of the coding step. We refer to the coefficient vector $\mathbf{m}_j = (m_{j1}, \dots, m_{jK})$ as the spectral graph wavelet code (SGWC) of the descriptor $\mathbf{s}(j)$, with $m_{jk}$ being the coefficient with respect to the codeword $\mathbf{d}_k$.
Histogram Representation (Feature Pooling). Each spectral graph wavelet signature is mapped to a certain codeword through the clustering process, and the shape is then represented by the histogram of the codewords, which is a $K$-dimensional vector $\mathbf{h} = (h_1, \dots, h_K)$ given by
$$ h_k = \sum_{j=1}^{n} m_{jk}, \quad k = 1, \dots, K. $$
That is, the histogram consists of the column sums of the cluster assignment matrix $\mathbf{M}$. Other feature pooling methods include average- and max-pooling. In general, any predefined pooling function that aggregates the information of different codewords into a single feature vector can be used.
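The soft-assignment coding and sum-pooling steps above can be sketched as follows; the descriptors are stored as rows here, and all data (sizes, descriptors, codebook) are synthetic:

```python
import numpy as np

def soft_assign(S, D, sigma):
    """Soft-assignment codes: row j holds the normalized affinities
    exp(-||s_j - d_k||^2 / (2 sigma^2)) of descriptor s_j to each codeword d_k."""
    d2 = ((S[:, None, :] - D[None, :, :])**2).sum(-1)
    M = np.exp(-d2 / (2.0 * sigma**2))
    return M / M.sum(axis=1, keepdims=True)

def pool_histogram(M):
    """Sum-pooling: the shape histogram is the vector of column sums of M."""
    return M.sum(axis=0)

rng = np.random.default_rng(5)
S = rng.standard_normal((40, 3))    # 40 local descriptors of dimension 3
D = rng.standard_normal((6, 3))     # codebook of 6 codewords
M = soft_assign(S, D, sigma=1.0)
hist = pool_histogram(M)
```

Each row of `M` is a distribution over the codewords, so the pooled histogram entries sum to the number of descriptors.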
3.4 Global Descriptors
A major drawback of the BoF model is that it only considers the distribution of the codewords and disregards all information about the spatial relations between features, and hence the descriptive ability and discriminative power of the BoF paradigm may be negatively impacted. To circumvent this limitation, various solutions have been recently proposed including the spatially sensitive bags of features (SS-BoF) [Bronstein:11] and geodesic-aware bags of features (GA-BoF) [Bu:14]. The SS-BoF, which is defined in terms of mid-level features and the heat kernel, can be represented by a square matrix whose elements represent the frequency of appearance of nearby codewords in the vocabulary. Similarly, the GA-BoF matrix is obtained by replacing the heat kernel in the SS-BoF with a geodesic exponential kernel. Unlike the heat kernel which is time-dependent, the geodesic exponential kernel avoids the possible effect of time scale and shape size [Bu:14]. In the same vein, we define a global descriptor of a shape as a SGWC-BoF matrix defined in terms of spectral graph wavelet codes and a geodesic exponential kernel as follows:
$$ \mathbf{F} = \mathbf{M}^{T}\,\mathbf{K}\,\mathbf{M}, $$
where $\mathbf{M}$ is the $n \times K$ matrix of spectral graph wavelet codes (i.e. mid-level features), and $\mathbf{K}$ is an $n \times n$ geodesic exponential kernel matrix whose elements are given by
$$ \kappa_{ij} = \exp\bigl(-d_g(v_i, v_j) / (2\gamma^2)\bigr), $$
with $d_g(v_i, v_j)$ denoting the geodesic distance between the mesh vertices $v_i$ and $v_j$, and $\gamma$ a positive, carefully chosen parameter that determines the width of the kernel. Intuitively, the parameter $\gamma$ controls the linearity of the kernel function, i.e. the larger the width, the more linear the function. It is worth pointing out that the proposed SGWC-BoF is similar in spirit to SS-BoF and GA-BoF. The main distinction of our work is that we use multiresolution local descriptors that may be regarded as generalizations of the signatures used in [Bronstein:11, Bu:14]. In addition, our spectral graph wavelet signature combines the advantages of both band-pass and low-pass filters.
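A sketch of the global-descriptor computation under the reading $\mathbf{F} = \mathbf{M}^T \mathbf{K} \mathbf{M}$; Euclidean distances on random points stand in for geodesic distances, and the codes, positions, and kernel width are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)
n, K = 30, 4
M = rng.random((n, K))
M /= M.sum(axis=1, keepdims=True)      # toy soft-assignment codes (rows sum to 1)
P = rng.random((n, 3))                 # toy vertex positions

# Euclidean distances stand in for geodesic distances on this toy point set
d = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
gamma = 0.5
Kmat = np.exp(-d / (2.0 * gamma**2))   # exponential kernel of the pairwise distances

F = M.T @ Kmat @ M                     # K x K SGWC-BoF matrix
f_global = F.reshape(-1)               # stacked into a global descriptor vector
```

Entry $(k, \ell)$ of `F` accumulates kernel-weighted co-occurrences of codewords $k$ and $\ell$ at nearby vertices, which is the spatial information the plain BoF histogram discards.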
3.5 Multiclass Support Vector Machines
SVMs are supervised learning models that have proven effective in solving classification problems. SVMs are based upon the idea of maximizing the margin, i.e. maximizing the minimum distance from the separating hyperplane to the nearest example. Although SVMs were originally designed for binary classification, several extensions have been proposed in the literature to handle multiclass classification. The idea of multiclass SVM is to decompose the multiclass problem into multiple binary classification tasks that can be solved efficiently using binary SVM classifiers. One of the simplest and most widely used coding designs for multiclass classification is the one-vs-all approach, which constructs one binary SVM classifier per class such that, for each binary classifier, one class is positive and the rest are negative. In other words, for $C$ classes the one-vs-all approach requires $C$ binary SVM classifiers, where the $c$th classifier is trained with positive examples belonging to class $c$ and negative examples belonging to the remaining classes. When testing an unknown example, the classifier producing the maximum output (i.e. the largest value of the decision function) is considered the winner, and its class label is assigned to that example.
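The one-vs-all decision rule reduces to an argmax over the binary classifiers' decision values, as this toy sketch shows (the scores are made up and the trained SVMs themselves are omitted):

```python
import numpy as np

def one_vs_all_predict(decision_values):
    """Assign each example (row) to the class whose binary one-vs-all
    classifier produces the largest decision value."""
    return np.argmax(decision_values, axis=1)

# toy decision values of 3 binary classifiers on 4 test examples
scores = np.array([[ 2.1, -0.3, -1.0],
                   [-0.5,  1.7,  0.2],
                   [-1.2, -0.1,  0.9],
                   [ 0.3,  0.2, -0.4]])
pred = one_vs_all_predict(scores)
```

Note that the winning classifier need not produce a positive score (last row): the maximum response decides the label even when all binary classifiers reject.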
3.6 Proposed Algorithm
Shape classification is a supervised learning task that assigns shapes in a dataset to target classes. The objective of 3D shape classification is to accurately predict the target class for each 3D shape in the dataset. Our proposed 3D shape classification algorithm consists of four main steps. The first step is to represent each 3D shape in the dataset by a spectral graph wavelet signature matrix, which is a feature matrix consisting of local descriptors. More specifically, let $\mathcal{S} = \{\mathcal{M}_1, \dots, \mathcal{M}_N\}$ be a dataset of $N$ shapes modeled by triangle meshes. We represent each 3D shape $\mathcal{M}_i$ in the dataset by a spectral graph wavelet signature matrix $\mathbf{S}_i$, whose columns are the $p$-dimensional local descriptors at the vertices $j = 1, \dots, n$, where $n$ is the number of mesh vertices.
In the second step, the spectral graph wavelet signatures are mapped to high-dimensional mid-level feature vectors using the soft-assignment coding step of the BoF model, resulting in a matrix $\mathbf{M}_i$ whose columns are the $K$-dimensional mid-level feature codes (i.e. SGWC). In the third step, the SGWC-BoF matrix $\mathbf{F}_i$ is computed using the mid-level feature codes matrix and a geodesic exponential kernel, followed by reshaping $\mathbf{F}_i$ into a $K^2$-dimensional global descriptor $\mathbf{x}_i$. In the fourth step, the SGWC-BoF vectors of all $N$ shapes in the dataset are arranged into a data matrix $\mathbf{X} = (\mathbf{x}_1, \dots, \mathbf{x}_N)$. Finally, a one-vs-all multiclass SVM classifier is applied to the data matrix to find the best hyperplane that separates all data points of one class from those of the other classes.
The task in multiclass classification is to assign a class label to each input example. More precisely, given training data of the form $\{(\mathbf{x}_i, y_i)\}$, where $\mathbf{x}_i$ is the $i$th example (i.e. SGWC-BoF vector) and $y_i$ is its class label, we aim at finding a learning model that contains the optimized parameters from the SVM algorithm. The trained SVM model is then applied to the test data, resulting in predicted labels for the new instances. These predicted labels are subsequently compared to the true labels of the test data to evaluate the classification accuracy of the model.
To assess the performance of the proposed framework, we employed two commonly-used evaluation criteria, the confusion matrix and accuracy, which will be discussed in more detail in the next section. The main algorithmic steps of our approach are summarized in Algorithm 1.
It is important to point out that in our implementation the vocabulary is computed offline by applying the K-means algorithm to the matrix obtained by concatenating the SGWS matrices of all meshes in the dataset. As a result, the vocabulary is a matrix of size $p \times K$, where $K$ is the number of codewords.
In this section, we conduct extensive experiments to evaluate the performance of the proposed SGWC-BoF framework for 3D shape classification. The effectiveness of our approach is validated by performing a comprehensive comparison with several state-of-the-art methods.
Datasets. The performance of the proposed framework is evaluated on two standard and publicly available 3D shape benchmarks: SHREC-2010 and SHREC-2011. Sample shapes from these two benchmarks are shown in Figure 4.
Performance Evaluation Measures. In practice, the available data (which has $C$ classes) for classification is usually split into two disjoint subsets: the training set for learning, and the test set for testing. The training and test sets are usually selected by randomly sampling a set of instances for learning and using the rest of the instances for testing. The performance of a classifier is then assessed by applying it to test data with known target values and comparing the predicted values with the known values. One important way of evaluating the performance of a classifier is to compute its confusion matrix (also called contingency table), which is a $C \times C$ matrix that displays the number of correct and incorrect predictions made by the classifier compared with the actual classifications in the test set, where $C$ is the number of classes.
Another intuitively appealing measure is the classification accuracy, which is a summary statistic that can be easily computed from the confusion matrix as the total number of correctly classified instances (i.e. diagonal elements of the confusion matrix) divided by the total number of test instances. Alternatively, the accuracy of a classification model on a test set may be defined as follows:
$$ \mathrm{Accuracy} = \frac{1}{N_t} \sum_{i=1}^{N_t} \mathbb{I}\bigl(\hat{y}_i = y_i\bigr), $$
where $y_i$ is the actual (true) label of the $i$th test instance, $\hat{y}_i$ is the label predicted by the classification algorithm, and $N_t$ is the number of test instances. A correct classification means that the learned model predicts the same class as the original class of the test case. The error rate is equal to one minus the accuracy.
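Both evaluation measures can be computed in a few lines; the labels below are a toy example:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """C[i, j] counts the test instances of actual class i predicted as class j."""
    C = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        C[t, p] += 1
    return C

def accuracy(C):
    """Correctly classified instances (diagonal) over all test instances."""
    return np.trace(C) / C.sum()

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]
C = confusion_matrix(y_true, y_pred, n_classes=3)
acc = accuracy(C)
```

Here four of the six test instances lie on the diagonal, giving an accuracy of 2/3 and an error rate of 1/3.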
Baseline Methods. For each of the 3D shape benchmarks used for experimentation, we will report the comparison results of our method against various state-of-the-art methods, including Shape-DNA [Reuter:06], compact Shape-DNA [Gao:14], GPS embedding [Chaudhari:14], GA-BoF [Bu:14], and F1-, F2-, and F3-features [Khabou:07]. The latter features, which are defined in terms of the Laplacian matrix eigenvalues, were shown to have good inter-class discrimination capabilities in 2D shape recognition [Gao:14], but they can easily be extended to 3D shape analysis using the eigenvalues of the LBO.
Implementation Details. The experiments were conducted on a desktop computer with an Intel Core i5 processor running at 3.10 GHz and 8 GB RAM, and all algorithms were implemented in MATLAB. The appropriate dimension (i.e. length or number of features) of a shape signature is problem-dependent and usually determined experimentally. For a fair comparison, we used the same parameters that were employed in the baseline methods, and in particular the same dimensions of the shape signatures. In our setup, a total of 201 eigenvalues and associated eigenfunctions of the LBO were computed. For the proposed approach, we fixed the resolution parameter of the spectral graph wavelet signature (which, together with the number of mesh vertices, determines the size of the signature matrix) as well as the kernel width of the geodesic exponential kernel. Moreover, the bandwidth parameter of the soft-assignment coding is computed from the median size of the clusters in the vocabulary [Bronstein:11]. For Shape-DNA, GPS embedding, and the F1-, F2-, and F3-features, a fixed number of retained eigenvalues was used, and the dimension of the compact Shape-DNA signature was set as suggested in [Gao:14].
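A common form of soft-assignment coding, assumed here, weights each per-vertex signature against every codeword with a Gaussian kernel and normalizes the weights; the bandwidth `sigma` is a stand-in for the value the paper derives from the median cluster size:

```python
import numpy as np

def soft_assign(signatures, vocabulary, sigma):
    """signatures: (n_vertices, R); vocabulary: (n_codewords, R).
    Returns one soft code per vertex, rows summing to 1."""
    # Squared distances from every signature to every codeword.
    d2 = ((signatures[:, None, :] - vocabulary[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))  # Gaussian assignment weights
    return w / w.sum(axis=1, keepdims=True)
```

Compared with hard vector quantization, each vertex contributes to several nearby codewords, which makes the resulting bag-of-features descriptor smoother and less sensitive to cluster boundaries.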
4.1 SHREC-2010 Dataset
SHREC-2010 is a dataset of 3D shapes consisting of 200 watertight mesh models from 10 classes [Lian:SHREC10]. These models are selected from the McGill Articulated Shape Benchmark dataset, and each class contains 20 objects with distinct postures. Moreover, all shapes in the dataset have approximately the same number of vertices.
Performance Evaluation. We randomly selected 50% of the shapes in the SHREC-2010 dataset as a held-out test set and used the remaining shapes for training; the test data thus consists of 100 shapes. A one-vs-all multiclass SVM is first trained on the training data to learn the classification model, which is subsequently applied to the test data with known target values in order to predict the class labels. Figure 5 displays the confusion matrix for SHREC-2010 on the test data. Its rows correspond to the actual (true) classes of the data, while its columns correspond to the classes predicted by the model; each element counts the predictions of the column's class for instances whose true class is given by the row. Thus, the diagonal elements show the number of correct classifications for each class, and the off-diagonal elements show the errors. As can be seen in Figure 5, the proposed approach accurately classified all shapes in the test data, except the hand, octopus and spider models, which were each misclassified once as teddy, crab and ant, respectively, and the human shape, which was misclassified three times as a spider. Such good performance strongly suggests that our method captures the discriminative features of the shapes well.
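The one-vs-all decision scheme used above can be sketched as follows. This is not the paper's SVM solver: a simple batch perceptron update stands in for the SVM training of each binary classifier, purely to illustrate the "one scorer per class, predict by highest score" structure.

```python
import numpy as np

def train_one_vs_all(X, y, n_classes, epochs=200, lr=0.1):
    """Train one linear scorer per class (class k vs. the rest)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias feature
    W = np.zeros((n_classes, Xb.shape[1]))
    for _ in range(epochs):
        for k in range(n_classes):
            t = np.where(y == k, 1.0, -1.0)   # binary targets for class k
            margins = t * (Xb @ W[k])
            mis = margins <= 0                # misclassified points
            W[k] += lr * (t[mis][:, None] * Xb[mis]).sum(axis=0)
    return W

def predict(W, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (Xb @ W.T).argmax(axis=1)  # class whose scorer fires highest
```

With an actual SVM, each row of `W` would instead come from a max-margin solver, but the prediction rule (argmax over the K binary scores) is the same.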
Results. In our approach, each 3D shape in the SHREC-2010 dataset is represented by a matrix of spectral graph wavelet signatures. Setting the number of codewords to 128, we computed the vocabulary matrix offline via K-means clustering; this pre-computation took approximately 15 minutes. The soft-assignment coding of the BoF model yields a matrix of spectral graph wavelet codes for each shape, and stacking these codes gives the SGWC-BoF data matrix for the whole dataset. Figure 6 shows the spectral graph wavelet code matrices of two shapes from two different classes of SHREC-2010. As can be seen, the resulting global descriptors are quite different, and hence they may be used effectively to discriminate between shapes in classification tasks.
We compared the proposed method to Shape-DNA, compact Shape-DNA, GPS embedding, and the F1-, F2-, and F3-features. To obtain reliable results, we repeated the experimental process 10 times with different randomly selected training and test sets, recorded the accuracy of each run, and report the best result of each method. The classification accuracy results are summarized in Table 1, which shows the results of the baseline methods and the proposed framework. As can be seen, our SGWC-BoF method achieves better performance than Shape-DNA, compact Shape-DNA, GPS embedding, GA-BoF, and the F1-, F2-, and F3-features. The proposed approach yields the highest classification accuracy of 95.66%, with performance improvements of 2.76% and 4.70% over the best baseline methods, cShape-DNA and Shape-DNA, respectively. To speed up the experiments, all shape signatures were computed offline, although their computation is quite inexpensive, due in large part to the fact that only a relatively small number of LBO eigenvalues need to be calculated.
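The repeated random-split protocol above can be made precise with a short sketch; `train_and_predict` is a hypothetical stand-in for the full train/test pipeline (signature extraction, coding, and SVM classification):

```python
import random

def repeated_split_eval(items, labels, train_and_predict, n_runs=10, seed=0):
    """Run n_runs random 50/50 splits; return (mean, best) accuracy."""
    rng = random.Random(seed)
    idx = list(range(len(items)))
    accs = []
    for _ in range(n_runs):
        rng.shuffle(idx)
        half = len(idx) // 2
        train, test = idx[:half], idx[half:]
        preds = train_and_predict([items[i] for i in train],
                                  [labels[i] for i in train],
                                  [items[i] for i in test])
        correct = sum(p == labels[i] for p, i in zip(preds, test))
        accs.append(correct / len(test))
    return sum(accs) / len(accs), max(accs)
```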
Table 1: Average classification accuracy (%) of each method on SHREC-2010.
4.2 SHREC-2011 Dataset
SHREC-2011 is a dataset of 3D shapes consisting of 600 watertight mesh models, obtained by transforming 30 original models [Lian:SHREC11]. All shapes in the dataset have approximately the same number of vertices.
Performance Evaluation. We randomly selected 50% of the shapes in the SHREC-2011 dataset as a held-out test set and used the remaining shapes for training; the test data thus consists of 300 shapes. First, we trained a one-vs-all multiclass SVM on the training data to learn the classification model. Then, we applied the trained model to the test data to predict the class labels. As can be seen in Figure 7, all shapes were classified correctly, except the horse, man and paper models, which were each misclassified once as dog1, hand and bird1, respectively, and the ant shape, which was misclassified nine times as a spider.
Results. Following the setting of the previous experiment, each 3D shape in the SHREC-2011 dataset is represented by a spectral graph wavelet signature matrix. We pre-computed the vocabulary offline, which took about 70 minutes. The soft-assignment coding yields a matrix of mid-level features for each shape, from which the SGWC-BoF data matrix for SHREC-2011 is assembled. We repeated the experimental process 10 times with different randomly selected training and test sets and recorded the accuracy of each run. The average accuracy results are reported in Table 2. As can be seen, the proposed method outperforms all seven baseline methods. The highest classification accuracy of 97.66% corresponds to our method, with performance improvements of 4.77% and 3.25% over the best performing baseline methods, Shape-DNA and cShape-DNA, respectively.
Table 2: Average classification accuracy (%) of each method on SHREC-2011.
4.3 Parameter Sensitivity
The proposed approach depends on two key parameters that affect its overall performance: the kernel width of the geodesic exponential kernel, and the size of the vocabulary, which determines the dimension of the SGWC-BoF matrix. As shown in Figure 8, the best classification accuracy on SHREC-2011 is achieved for a particular combination of these two parameters. In addition, the classification performance of the proposed method remains satisfactory over a wide range of parameter values, indicating the robustness of the proposed framework to the choice of these parameters.
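A sensitivity study like the one above amounts to a grid search over the two parameters. In this sketch, `evaluate_fn` is a hypothetical stand-in for the full train/test pipeline and is assumed to return a classification accuracy:

```python
from itertools import product

def grid_search(sigmas, vocab_sizes, evaluate_fn):
    """Evaluate every (kernel width, vocabulary size) pair and keep the best."""
    best = (None, None, -1.0)
    for sigma, n_words in product(sigmas, vocab_sizes):
        acc = evaluate_fn(sigma, n_words)
        if acc > best[2]:
            best = (sigma, n_words, acc)
    return best  # (best_sigma, best_vocab_size, best_accuracy)
```

Robustness to the parameter choice then shows up as a flat accuracy surface over the grid, rather than a single sharp peak.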
5 Conclusion
In this paper, we introduced a spectral graph wavelet framework for 3D shape classification that employs the bag-of-features paradigm in an effort to design a global shape descriptor defined in terms of mid-level features and a geodesic exponential kernel. An important facet of our approach is the ability to combine the advantages of wave and heat kernel signatures into a compact yet discriminative descriptor, while allowing a multiresolution representation of shapes. The proposed spectral shape descriptor also combines the advantages of both band-pass and low-pass filters. In addition to taking into consideration the spatial relations between features via a geodesic exponential kernel, the proposed approach performs classification on spectral graph wavelet codes, thereby seamlessly capturing the similarity between these mid-level features. We not only showed that our formulation allows us to take into account the spatial layout of features, but we also demonstrated that the proposed framework yields better classification accuracy results compared to state-of-the-art methods, while remaining computationally attractive. This better performance is largely attributed to the discriminative global descriptor constructed by aggregating mid-level features weighted by a geodesic exponential kernel. Extensive experiments were carried out on two standard 3D shape benchmarks to demonstrate the effectiveness of the proposed method and its robustness to the choice of parameters. We evaluated the results using several metrics, including the confusion matrix and average accuracy. For future work, we plan to apply the proposed approach to other 3D shape analysis problems.