Techniques for analyzing 3D shapes are becoming increasingly important due to the vast number of sensors that are capturing 3D data, as well as numerous computer graphics applications. In recent years a variety of deep architectures have been proposed for classifying 3D shapes. These range from multiview approaches that render a shape from a set of views and deploy image-based classifiers, to voxel-based approaches that analyze shapes represented as a 3D occupancy grid, to point-based approaches that classify shapes represented as a collection of points. However, there is relatively little work that studies the tradeoffs offered by these modalities and their associated techniques.
This paper aims to study three of these tradeoffs, namely the ability to generalize from a few examples, computational efficiency, and robustness to adversarial transformations. We pick a representative technique for each modality. For the multiview representation we choose the Multiview CNN (MVCNN) architecture; for the voxel-based representation we choose the VoxNet [17, 37], constructed using convolutions and pooling operations on a 3D grid; for the point-based representation we choose the PointNet architecture. The analysis is done on the widely-used ModelNet40 shape classification benchmark.
Some of our analysis leads to surprising results. For example, with deeper architectures and a modified rendering technique that uses a black background and better centers the object in the image, the performance of a vanilla MVCNN can be improved to 95.0% per-instance accuracy on the benchmark, outperforming several recent approaches. Another example: while it is widely believed that the strong performance of MVCNN is due to the use of networks pretrained on large image datasets (e.g., ImageNet), we find that even without such pretraining the MVCNN obtains 91.3% accuracy, outperforming several voxel-based and point-based counterparts that also do not rely on such pretraining. Furthermore, the performance of MVCNN remains at 93.6% even when trained with binary silhouettes (instead of shaded images) of shapes, suggesting that shading offers relatively little extra information on this benchmark for MVCNN.
We then systematically analyze the generalization ability of the models. First we analyze the accuracy of the various models as the number of training examples per category is varied. We find that the multiview approaches generalize faster, obtaining near-optimal performance with far fewer examples than the other approaches. We then analyze the role of initialization of these networks. As 3D shape datasets are currently small in comparison to large image datasets, we employ cross-modal distillation techniques [10, 7] to guide learning. In particular, we use representations extracted from pretrained MVCNNs to guide the learning of voxel-based and point-based networks. Cross-modal distillation improves the performance of VoxNet and PointNet, especially when training data is limited.
Finally we analyze the robustness of these classifiers to adversarial perturbations. While generating adversarial inputs for VoxNet and PointNet is straightforward, it is not for multiview methods due to the rendering step. To this end we design an end-to-end differentiable MVCNN that takes as input a voxel representation and generates a set of views using a differentiable renderer. We analyze the robustness of these networks by estimating the amount of perturbation needed to obtain a misclassification. We find that PointNet is more robust, while MVCNN and VoxNet are both easily fooled by adding a small amount of noise. This is similar to observations in prior work on adversarial inputs for image-based networks [33, 6, 16]. Somewhat surprisingly, ImageNet pretraining reduces the robustness of MVCNNs to adversarial perturbations.
In summary, we performed a detailed analysis of several recently proposed approaches for 3D shape classification. This resulted in a new state-of-the-art of 95.0% on the ModelNet40 benchmark. The technical contributions include the use of cross-modal distillation for improving networks that operate on voxel-based and point-based representations, and a novel approach for generating adversarial inputs for multiview 3D shape classifiers using a differentiable renderer. The latter allows us to directly compare and generate adversarial inputs for voxel-based and view-based methods using gradient-based techniques. The conclusion is that while the PointNet architecture is less accurate, its orderless aggregation mechanism likely makes it more robust to adversarial perturbations than VoxNet and MVCNN, both of which are easily fooled.
This section describes the protocol for evaluating the performance of the various classifiers. We describe the dataset, performance metrics, and training setup in Section 2.1, followed by details of the deep classifiers we consider in Section 2.2, and the approach for generating adversarial examples in Section 2.3.
2.1 3D Shape Classification
Classification benchmark. All our evaluation is done on the ModelNet40 shape classification benchmark, following the standard training and test splits provided with the dataset. There are 40 categories with 9843 training models and 2468 test models. The number of models is not equal across classes, hence we report both the per-instance and per-class accuracy on the test set. While most of the literature reports results from training on the entire training set, some earlier work reports results from training and evaluating on a subset consisting of 80 training and 20 test examples per category.
Figure 1: Input representations of a shape: (a) Input, (b) Voxel, (c) Point cloud, (d) Phong, (e) Depth, (f) Silhouette.
Input representations. The dataset presents each shape as a collection of triangles, hence it is important to describe the exact way in which these are converted to point clouds, voxels, and images for input to different network architectures. These inputs are visualized in Figure 1 and described below:
Voxel representation. To obtain voxel representations we follow the dataset from prior work where models are discretized to an occupancy grid. The data is available from the authors’ page.
Point cloud. For point cloud representation we use the data from the PointNet approach  where 2048 points are uniformly sampled for each model.
Image representation. To generate multiple views of the model we use a setup similar to the original MVCNN. Since the models are assumed to be upright oriented, a set of virtual cameras is placed at 12 radially symmetric locations, i.e., every 30 degrees, facing the object center at an elevation of 30 degrees. In contrast to the original setup, we render the images with a black background and set the field-of-view of the camera such that the object tightly fits the image canvas. A similar scheme was used to generate views for semantic segmentation of shapes in the Shape PFCN approach. This had a non-negligible impact on the performance of the downstream models, as discussed in Section 3.1. Given this setup we considered three different ways to render the models, described below:
Shaded rendering, where images are rendered with Phong shading.
Depth rendering, where only the depth value is recorded.
Silhouette rendering, where images are rendered as binary images for pixels corresponding to foreground.
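The camera placement above can be sketched as follows. This is an illustrative numpy computation of the 12 viewpoint positions (the camera radius is an arbitrary assumption, not a value from the paper):

```python
import numpy as np

def camera_positions(n_views=12, elevation_deg=30.0, radius=2.0):
    """Place virtual cameras at radially symmetric azimuths (every
    360/n_views degrees) around the upright (z) axis, all at a fixed
    elevation, facing the object center at the origin."""
    elev = np.deg2rad(elevation_deg)
    azims = np.deg2rad(np.arange(n_views) * 360.0 / n_views)
    x = radius * np.cos(elev) * np.cos(azims)
    y = radius * np.cos(elev) * np.sin(azims)
    z = np.full(n_views, radius * np.sin(elev))
    return np.stack([x, y, z], axis=1)  # (n_views, 3) camera centers

cams = camera_positions()
```

Every camera sits at the same distance from the origin and at the same height, so the renderings differ only by a 30-degree rotation about the upright axis.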
Data augmentation. Models in the dataset are upright oriented, but not consistently oriented about the upright axis, i.e., models can be rotated arbitrarily about the upright direction. Models that rely on voxel or point-cloud input often benefit from rotation augmentation about the upright axis during training and testing. Similar to the multiview setting, we consider models rotated by 30-degree increments as additional data during training, optionally aggregating votes across these instances at test time.
2.2 Classification Architectures
We consider the following deep architectures for shape classification.
2.2.1 Multiview CNN (MVCNN)
The MVCNN architecture uses rendered images of the model from different views as input. Each image is fed into a CNN with shared weights. A max-pooling layer across the different views performs an orderless aggregation of the individual representations, followed by several non-linear layers for classification. While the original paper used the VGG-M network, we also report results using a number of newer architectures (Section 3.2).
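The view-pooling step can be illustrated with a small sketch. The feature values here are made up, and a single element-wise max over the view axis stands in for the pooling layer inside the full network:

```python
import numpy as np

def view_pool(view_features):
    """Orderless aggregation in MVCNN: element-wise max over the view
    axis. view_features has shape (n_views, feat_dim); the result is a
    single (feat_dim,) shape descriptor, invariant to view order."""
    return view_features.max(axis=0)

views = np.array([[0.2, 0.9, 0.1],
                  [0.8, 0.3, 0.4]])
descriptor = view_pool(views)
```

Because the max is commutative, feeding the views in any order yields the same descriptor, which is what makes the aggregation orderless.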
2.2.2 Voxel network (VoxNet)
The VoxNet architecture was first proposed in several early works [17, 37] and uses convolution and pooling layers defined on 3D voxel grids. The early VoxNet models used two 3D convolutional layers and two fully-connected layers. In our initial experiments we found the capacity of this network to be limited. We therefore also experimented with the deeper VoxNet architecture proposed in later work, which has five blocks of (conv3d-batchnorm-LeakyReLU) layers. All conv3d layers have kernel size 5, stride 1, and 32 channels, and the LeakyReLU has a small negative slope. Two fully-connected layers (fc-batchnorm-ReLU-fc) are added on top to obtain class predictions.
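The layer arithmetic implied by this stack can be checked with the standard convolution output-size formula. The input resolution, padding, and LeakyReLU slope below are assumptions (those values did not survive extraction); kernel size and stride follow the text:

```python
import numpy as np

def conv3d_out(size, kernel=5, stride=1, padding=2):
    """Spatial size after one 3D convolution (standard formula).
    padding=2 is an assumed 'same'-style padding for kernel 5."""
    return (size + 2 * padding - kernel) // stride + 1

def leaky_relu(x, slope=0.1):
    # the slope value here is illustrative, not the paper's
    return np.where(x > 0, x, slope * x)

size = 32  # assumed input grid resolution
sizes = [size]
for _ in range(5):  # five conv3d-batchnorm-LeakyReLU blocks
    sizes.append(conv3d_out(sizes[-1]))
```

With padding 2 the grid resolution is preserved through all five blocks; without padding, each kernel-5 layer would shrink it by 4 voxels per side.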
2.2.3 Voxel Multiview CNN (VoxMVCNN)
We also consider a hybrid model that takes voxels as input and uses an MVCNN approach for classification via a differentiable renderer. To achieve this we make two simplifications. First, only six renderings are considered, corresponding to viewing directions along the six axis-aligned directions. Second, the rendering is approximated using the approach suggested in PrGAN, where line integrals are used to compute the pixel shading. For an occupancy grid V, the value rendered at pixel (i, j) for a view along the z axis is 1 - exp(-Σ_k V(i, j, k)): the higher the sum of occupancy values along the viewing ray, the closer the value is to 1. The generated views for two models are shown in Figure 2. Renderings generated this way approximate the silhouette renderings described earlier. The primary advantage of this rendering method is that it is differentiable, hence we use this model to analyze the robustness of the MVCNN architecture to adversarial inputs (described in Section 2.3).
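A minimal sketch of the line-integral rendering, assuming the PrGAN-style form 1 - exp(-sum of occupancies along the ray); the exact constants used in the paper are an assumption:

```python
import numpy as np

def line_integral_render(voxels, axis=2):
    """Approximate silhouette rendering: each pixel is
    1 - exp(-sum of occupancy values along the viewing axis).
    The expression is smooth in the occupancies, so gradients can
    flow from rendered views back to the voxel grid."""
    return 1.0 - np.exp(-voxels.sum(axis=axis))

vox = np.zeros((4, 4, 4))
vox[1, 1, :] = 1.0           # one fully occupied column along the view axis
img = line_integral_render(vox)
```

Empty rays map exactly to 0, while a fully occupied ray maps to a value close to 1, matching the silhouette-like renderings described above.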
2.2.4 Point network (PointNet)
We follow the same architecture as PointNet  that operates on point cloud representations of a model. The architecture applies a series of non-linear mappings individually to each input point and performs orderless aggregations using max-pooling operations. Thus the model is invariant to the order in which the points are presented and can directly operate on point clouds without additional preprocessing such as spatial partitioning, or graph construction. Additionally, some initial layers are used to perform spatial transformations (rotation, scaling, etc.) Despite its simplicity the model and its variants have been shown to be effective at shape classification and segmentation tasks [32, 22].
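The per-point transform plus max-pooling can be sketched as follows. A single random linear layer stands in for PointNet's learned MLP stack, which is enough to demonstrate the order invariance:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 8))   # stand-in for the shared MLP weights
b = rng.standard_normal(8)

def pointnet_descriptor(points):
    """Shared per-point transform (one linear layer + ReLU in place of
    PointNet's MLP stack) followed by max-pooling over points, giving
    an order-invariant global descriptor of the point cloud."""
    per_point = np.maximum(points @ W + b, 0.0)   # (n_points, 8)
    return per_point.max(axis=0)

pts = rng.standard_normal((100, 3))
d1 = pointnet_descriptor(pts)
d2 = pointnet_descriptor(pts[::-1])   # same points, reversed order
```

Since the same map is applied to every point and the pooling is symmetric, permuting the points leaves the descriptor unchanged.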
Training details. All MVCNN models are trained in two stages, as suggested in the original work. The model is first trained on a single-image classification task with the view-pooling layer removed, then trained to jointly classify all the views with the view-pooling layer in the second stage. We use the Adam optimizer with separate learning rates for the two stages, and each stage is trained for 30 epochs. The batch size is set to 64 and 96 (eight models with twelve views) for the two stages, and weight decay is applied. The VoxNet is trained with the Adam optimizer for 150 epochs with a batch size of 64 and weight decay. The VoxMVCNN is trained using the same procedure as the MVCNN. For PointNet we use the publicly available implementation released by the authors.
2.3 Generating Adversarial Inputs
Adversarial examples for image-based deep neural networks have been thoroughly explored in the literature [33, 6, 16, 19]. However, there is no prior work that addresses adversarial examples for deep networks based on 3D representations. Here we investigate whether adversarial shapes can be obtained for different 3D shape recognition models and, perhaps more importantly, which 3D representation is more robust to adversarial examples. We define an adversarial example as follows. Let f_c(x) be the score that a classifier f gives to an input x for a class c. An adversarial example x' is a sample that is perceptually similar to x but whose predicted class argmax_c f_c(x') differs from that of x. It is known from prior work that an effective way to compute adversarial examples for image-based models is the following: given a sample x from the dataset and a class t whose score one wishes to maximize, compute

x' = x + λ ∇_x f_t(x),

where ∇_x f_t(x) is the gradient of the target-class score with respect to the input x and λ is the step size. For many image models this single-step procedure is able to generate perceptually indistinguishable adversarial examples. However, for some of the examples we experimented with, a single step is not enough to cause a misclassification. Thus we employ the following iterative procedure based on prior work:

x_{i+1} = clip_{x,ε}(x_i + λ ∇_x f_t(x_i)),

where clip_{x,ε} is an operator that clips values so the result lies in the ε-neighborhood of the original input x. Notice that this procedure is agnostic to the representation used by the input x. Thus we use the same method to generate adversarial examples for multiple 3D representations: voxels, point clouds, and multi-view images. For voxel grids, we additionally clip the values so that they remain valid occupancies in [0, 1]. For multi-view representations, we need to make sure that all views are consistent with each other. We address this by using the VoxMVCNN architecture, which generates multiple views of the same object through a differentiable renderer (the line integral described in Section 2.2.3).
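The iterative procedure can be sketched on a toy differentiable classifier. A linear softmax model stands in for the network so the gradient is analytic; the step size, ε, and iteration count below are illustrative, not the paper's settings:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def iterative_adversary(x, W, target, eps=0.1, lr=0.05, steps=200):
    """Iterative gradient ascent on the target-class log-probability,
    clipped to the eps-neighborhood of the original input and to
    [0, 1] (as done for occupancy grids)."""
    x0 = x.copy()
    for _ in range(steps):
        p = softmax(W @ x)
        grad = W[target] - p @ W       # d log p(target) / dx, linear model
        x = np.clip(x + lr * grad, x0 - eps, x0 + eps)  # eps-neighborhood
        x = np.clip(x, 0.0, 1.0)       # keep valid occupancy values
    return x

W = np.array([[ 1.0,  1.0, -1.0, -1.0],
              [-1.0, -1.0,  1.0,  1.0]])
x = np.array([0.9, 0.9, 0.1, 0.1])     # confidently class 0
adv = iterative_adversary(x, W, target=1)
```

The perturbed input stays within the ε-ball of the original while the target class score steadily increases, which is exactly the quantity the robustness analysis in Section 3.5 measures.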
We begin by investigating the model generalization in Section 3.1. Section 3.2 analyzes the effect of different architectures and renderings for the MVCNN. Section 3.3 uses cross-modal distillation to improve the performance of VoxNet and PointNet. Section 3.4 compares the tradeoffs between different representations. Section 3.5 compares the robustness of different classifiers to adversarial perturbations. Finally, Section 3.6 puts the results presented in this paper in the context of prior work.
3.1 Learning From a Few Examples
One of the most desirable properties of a classifier is its ability to generalize from a few examples. We test this ability by evaluating the accuracy of different models as a function of training set size. We select the first min(n, N_c) models in the training set for each class c, where n is the target number of examples per class and N_c is the number of models in class c. The maximum number of models per class in the training set of ModelNet40 is 889. Figure 3 shows the per-class and per-instance accuracy for three different models as a function of the training set size. The MVCNN with the VGG-11 architecture generalizes better than VoxNet and PointNet across all training set sizes. MVCNN obtains 77.8% accuracy using only 10 training models per class, while PointNet and VoxNet obtain 62.5% and 57.9% respectively. The performance of MVCNN is near optimal with 160 models per class, far fewer than PointNet and VoxNet require. When using the whole dataset for training, MVCNN (95.0%) outperforms PointNet (89.1%) and VoxNet (85.6%) by a large margin.
Several improvements have been proposed for both point-based and voxel-based architectures. The best-performing point-based model, to the best of our knowledge, is Kd-Networks, which achieves 91.8% per-instance accuracy. For voxel-based models, O-CNN uses sparse convolutions on octrees to handle higher resolutions and achieves 90.6% per-instance accuracy. However, all of these are well below the MVCNN approach. More details and comparisons to the state of the art are in Section 3.6.
3.2 Dissecting the MVCNN Architecture
Given the high performance of MVCNN we investigate what factors contribute to its performance as described next.
Effect of model architecture. The original MVCNN used the VGG-M architecture; however, a number of different image networks have since been proposed. We evaluated several CNN architectures for the MVCNN and report accuracies in Table 1. All models have similar performance, suggesting that MVCNN is robust across CNN architectures. In Table 3 we also compare with results using VGG-M and AlexNet: with the same shaded images and training subset, VGG-11 achieves 89.1% and VGG-M achieves 89.9% accuracy.
| Model | Per class | Per instance |
Effect of ImageNet pretraining. MVCNN benefits from transfer learning from the ImageNet classification task. However, even without ImageNet pretraining, the MVCNN achieves 91.3% per-instance accuracy (Table 2), which is higher than several point-based and voxel-based approaches. Figure 3 plots the performance of the MVCNN with the VGG-11 network without ImageNet pretraining across training set sizes, showing this trend holds throughout the training regime. In Section 3.3 we study whether ImageNet pretraining can benefit such approaches via cross-modal transfer learning.
| Model | Per class | Per instance |
| VGG-11 w/ ImageNet pretraining | 92.4 | 95.0 |
| VGG-11 w/o ImageNet pretraining | 88.7 | 91.3 |
Effect of shape rendering. We analyze the effect of different rendering approaches as input to the MVCNN model in Table 3. Sphere rendering, proposed in earlier work, refers to rendering each point as a sphere and was shown to improve performance with AlexNet MVCNN architectures. We first compared the tight field-of-view rendering with black background used in this work to the earlier rendering setup. Since that work only reported results on the 80/20 training/test split, we first compared the performance of the VGG-11 networks using its images; the performance difference was negligible. However, with our shaded images the performance of the VGG-11 network improves by more than 2%.
Using depth images, the per-instance accuracy is 3.4% lower than using shaded images, but concatenating shaded images with depth images gives a 1.2% improvement. Furthermore, we found that shading provides only a 1.4% improvement over binary silhouette images. This suggests that most of the discriminative shape information used by the MVCNN lies in the boundary of the object.
| Model | Rendering | Full: per class | Full: per instance | 80/20: per class | 80/20: per instance |
| VGG-M | Shaded from prior work | - | - | 89.9 | 89.9 |
| VGG-M | Shaded from prior work (80) | - | - | 90.1 | 90.1 |
| VGG-11 | Shaded from prior work | - | - | 89.1 | 89.1 |
| VGG-11 | Shaded + Depth | 94.7 | 96.2 | - | - |
| AlexNet | Sphere rendering (20) | 89.7 | 92.0 | - | - |
| AlexNet-MR | Sphere rendering (20) | 91.4 | 93.8 | - | - |
3.3 Cross Modal Distillation
Knowledge distillation [4, 10] was originally proposed for model compression: the performance of a model can be improved by training it to imitate the output of a more accurate model. The technique has also been applied to transfer rich representations across modalities. For example, a model trained on images can be used to guide the learning of a model for depth images, or of a model for sound. We investigate such techniques for learning across different 3D representations, specifically from the MVCNN model to the PointNet and VoxNet models.
To do this we first train the ImageNet-initialized VGG-11 network on the full training set and extract the logits (the outputs of the last layer before the softmax) on the training set. A PointNet (or VoxNet) model is then trained to minimize

L = α C(σ(z_s), y) + (1 - α) C(σ(z_s / T), σ(z_t / T)),

where z_t and z_s are the logits from the MVCNN model and from the model being trained respectively, y is the class label of the input, σ is the softmax function, and C is the cross-entropy loss. T is a temperature for smoothing the targets. α and T are set by grid search. For PointNet the best hyper-parameters differ depending on the training set size, while for VoxNet a single setting works in all cases. Figure 4 shows the result of training VoxNet and PointNet with distillation. For VoxNet the per-instance accuracy improves from 85.6% to 87.4% with the whole training set; for PointNet it improves from 89.1% to 89.4%. The improvement is slightly larger when there is less training data.
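The objective can be sketched as follows, assuming the standard form of the distillation loss; the values of α and T are illustrative stand-ins for the grid-searched values, which did not survive extraction:

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(z_student, z_teacher, label, T=2.0, alpha=0.5):
    """Weighted sum of (i) cross-entropy against the ground-truth
    label and (ii) cross-entropy against the teacher's
    temperature-smoothed soft targets."""
    hard = -np.log(softmax(z_student)[label] + 1e-12)
    p_teacher = softmax(z_teacher, T)
    p_student = softmax(z_student, T)
    soft = -np.sum(p_teacher * np.log(p_student + 1e-12))
    return alpha * hard + (1.0 - alpha) * soft

teacher = np.array([5.0, 0.0, 0.0])
good = distillation_loss(np.array([5.0, 0.0, 0.0]), teacher, label=0)
bad = distillation_loss(np.array([0.0, 5.0, 0.0]), teacher, label=0)
```

A student that agrees with both the label and the teacher incurs a much smaller loss than one that contradicts them, which is what drives the transfer.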
3.4 Tradeoffs between Learned Representations
In this section we analyze the tradeoffs between the different shape classifiers. Table 4 compares their speed, memory, and accuracy. The MVCNN model has more parameters and is slower, but its accuracy is 5.9% higher than PointNet and 9.4% higher than VoxNet. Even though the number of FLOPs is far higher for MVCNN, the relative efficiency of 2D convolutions results in only slightly longer evaluation times compared to VoxNet and PointNet.
We further consider an ensemble model combining image, voxel, and point-cloud representations. A simple approach is to average the predictions of the different models. As shown in Figure 5, the ensemble of VoxNet and PointNet performs better than either model alone. However, the predictions from MVCNN dominate those of VoxNet and PointNet, and combining the predictions of the other models with MVCNN gives no benefit. A more complex scheme, in which we trained a linear model on features extracted from the penultimate layers of these networks, did not provide any improvement either.
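The prediction-averaging ensemble can be sketched with made-up per-class probabilities:

```python
import numpy as np

def ensemble_predict(prob_list):
    """Average per-class probabilities from several classifiers and
    take the argmax: the simple averaging ensemble tried here."""
    return np.mean(prob_list, axis=0).argmax(axis=-1)

# hypothetical predictions of two models on two test shapes
p_voxnet   = np.array([[0.6, 0.4], [0.3, 0.7]])
p_pointnet = np.array([[0.2, 0.8], [0.4, 0.6]])
preds = ensemble_predict([p_voxnet, p_pointnet])
```

On the first shape the two models disagree, and the more confident PointNet prediction wins the average; when one model (MVCNN) is uniformly more confident and more accurate, the average simply tracks it, which is consistent with the lack of ensemble gains reported above.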
| Model | Forward-pass time | #params | Memory (GB) | Per class acc. (%) | Per ins. acc. (%) |
| VoxNet | 1.3 ms | 1.4M | 2.0 | 81.4 (82.5) | 85.6 (87.4) |
| PointNet | 3.1 ms | 3.5M | 4.4 | 86.1 (86.7) | 89.1 (89.4) |
Accuracy, speed, and memory comparison of the different models. Memory usage during training, which includes parameters, gradients, and layer activations for a batch size of 64, is shown. Forward-pass time is also measured with batch size 64 using PyTorch on a single GTX Titan X for all models. Inputs are rendered images for MVCNN, occupancy grids for VoxNet, and 2048 points for PointNet. Accuracy numbers in brackets are for models trained with distillation, as described in Section 3.3.
3.5 Robustness to Adversarial Examples
In this section we analyze and compare the robustness of the three shape classification models to adversarial examples. Adversarial examples are generated using the iterative gradient ascent procedure described in Section 2.3. We search over the threshold ε and report the minimum value of ε for which an adversarial example can be generated within 1000 iterations at a fixed step size.
To make a quantitative comparison between VoxMVCNN and VoxNet, we use the following procedure. Given an input x and a classifier f, the "hardest" target class is defined as the class receiving the lowest score under f. We select the first five models for each class from the test set. For each model, we select two target classes t_1 and t_2, the hardest targets under f_1 (VoxMVCNN) and f_2 (VoxNet) respectively, giving 400 test triples in total. We say an adversarial example can be found when the perturbed input is classified as the target class for some ε within the search range.
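Selecting the "hardest" target class can be sketched as follows:

```python
import numpy as np

def hardest_target(scores, true_class):
    """The 'hardest' adversarial target: the class (other than the
    true one) to which the classifier assigns the lowest score."""
    s = np.asarray(scores, dtype=float).copy()
    s[true_class] = np.inf   # exclude the true class from the argmin
    return int(np.argmin(s))

t = hardest_target([3.0, 0.5, -2.0, 1.0], true_class=0)
```

Driving the input toward the least likely class is the most demanding target, so the minimum ε needed gives a conservative measure of each model's robustness.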
As shown in Table 5, we can generate 399 adversarial examples out of 400 test cases for VoxMVCNN, but only 370 for VoxNet. We then report the minimum ε at which an adversarial example can be found for each test case. The average ε for VoxMVCNN is smaller than for VoxNet, which suggests that VoxMVCNN is easier to fool. We also evaluate the VoxMVCNN model trained without ImageNet pretraining. Surprisingly, the model without pretraining is more robust to adversarial examples, as its mean ε is larger than that of the pretrained model. For PointNet, we can generate 379 adversarial examples. Note that the ε values for PointNet are not comparable with those of the voxel representations, since in this case we perturb point coordinates instead of occupancy values.
We also show qualitative adversarial examples in Figure 6 and Figure 7. The voxels are scaled according to their occupancy level. In Figure 6 the input and the target classes are the same for each row, where the target labels are cup, keyboard, bench, and door from top to bottom. In Figure 7 we use the same input model but a different target class for each column. The perturbations in the adversarial examples for VoxMVCNN are almost imperceptible, as the required ε is small and the model is easily fooled. The classification accuracy of each model is shown in Table 5 for reference.
| | PointNet | VoxNet | VoxMVCNN w/o ImageNet | VoxMVCNN |
| # Adversarial examples | 379 | 370 | 290 | 399 |
| Mean ε (± std) | 0.045 ± 0.054 | 0.061 ± 0.057 | 0.041 ± 0.027 | 0.006 ± 0.006 |
| Per inst. acc. (%) | 86.8 | 85.6 | 84.4 | 88.2 |
3.6 Comparison to Prior Work
We compare our MVCNN results with prior work in Table 6, grouped by input type. Among multi-view image-based models, our MVCNN achieves 95.0% per-instance accuracy, the best result among all competing approaches. RotationNet, which predicts the object pose and class label at the same time, is 0.2% below our MVCNN. Dominant Set Clustering, which clusters image features across views and pools within the clusters, is 1.0% below RotationNet. MVCNN-MultiRes is the most closely related to our work: it showed that MVCNN with sphere rendering can achieve better accuracy than a voxel-based network, suggesting room for improvement in VoxNet. Our VoxNet experiments corroborate this conclusion. Furthermore, MVCNN-MultiRes uses images at multiple resolutions to boost its performance.
For point-based methods, PointNet and DeepSets use symmetric functions, i.e., max/mean pooling layers, to generate permutation-invariant point cloud descriptors. DynamicGraph builds on PointNet by performing symmetric-function aggregations over points within a neighborhood instead of over the whole set; the neighborhood is computed dynamically by building a nearest-neighbor graph using distances defined in the feature space. Similarly, Kd-Networks precompute a graph induced by a binary spatial partitioning tree and use it to apply local linear operations. The best point-based method is 2.8% less accurate than our MVCNN.
For voxel-based methods, VoxNet and 3DShapeNets apply 3D convolutions on voxels. ORION is based on VoxNet but predicts the orientation in addition to the class label. OctNet and O-CNN process higher-resolution grids by using an octree representation. FusionNet combines the voxel and image representations to improve performance to 90.8%. Our experiments in Section 3.4 suggest that, since MVCNN already achieves 95.0% accuracy, combining different representations offers little benefit.
| Model | Input | Per class acc. | Per ins. acc. |
| Dominant Set Clustering | Images | - | 93.8 |
| VRN Single | Voxels | - | 91.3 |
We investigated different representations and models for the 3D shape classification task, resulting in a new state of the art on the ModelNet40 benchmark. We analyzed the generalization of MVCNN, PointNet, and VoxNet by varying the number of training examples. Our results indicate that multiview-based methods generalize better and outperform the other methods on the full dataset, even without ImageNet pretraining and even when trained with binary silhouettes. We also analyzed cross-modal distillation and showed improvements for VoxNet and PointNet by distilling knowledge from MVCNN. Finally, we analyzed the robustness of the models to adversarial perturbations and concluded that point-based networks are more robust to point perturbations, while multi-view and voxel-based networks can be fooled by imperceptible perturbations.
We acknowledge support from NSF (#1617917, #1749833) and the MassTech Collaborative grant for funding the UMass GPU cluster.
-  Aytar, Y., Vondrick, C., Torralba, A.: Soundnet: Learning sound representations from unlabeled video. In: Neural Information Processing Systems (NIPS) (2016)
-  Blender Online Community: Blender - a 3D modelling and rendering package. Blender Foundation, Blender Institute, Amsterdam, http://www.blender.org
-  Brock, A., Lim, T., Ritchie, J.M., Weston, N.: Generative and discriminative voxel modeling with convolutional neural networks. arXiv preprint arXiv:1608.04236 (2016)
-  Buciluǎ, C., Caruana, R., Niculescu-Mizil, A.: Model compression. In: Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD) (2006)
-  Gadelha, M., Maji, S., Wang, R.: Unsupervised 3d shape induction from 2d views of multiple objects. In: International Conference on 3D Vision (3DV) (2017)
-  Goodfellow, I., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. In: International Conference on Learning Representations (ICLR) (2015)
-  He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Computer Vision and Pattern Recognition (CVPR) (2016)
-  Hegde, V., Zadeh, R.: Fusionnet: 3d object classification using multiple data representations. arXiv preprint arXiv:1607.05695 (2016)
-  Hinton, G., Vinyals, O., Dean, J.: Distilling the knowledge in a neural network. In: Neural Information Processing Systems (NIPS) (2014)
-  Ioffe, S., Szegedy, C.: Batch normalization: Accelerating deep network training by reducing internal covariate shift. In: International Conference on Machine Learning (ICML) (2015)
-  Kalogerakis, E., Averkiou, M., Maji, S., Chaudhuri, S.: 3d shape segmentation with projective convolutional networks (2017)
-  Kanezaki, A., Matsushita, Y., Nishida, Y.: Rotationnet: Joint object categorization and pose estimation using multiviews from unsupervised viewpoints. arXiv preprint arXiv:1603.06208 (2016)
-  Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
-  Klokov, R., Lempitsky, V.: Escape from cells: Deep kd-networks for the recognition of 3d point cloud models. In: International Conference on Computer Vision (ICCV) (2017)
-  Kurakin, A., Goodfellow, I.J., Bengio, S.: Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533 (2016)
-  Maturana, D., Scherer, S.: Voxnet: A 3d convolutional neural network for real-time object recognition. In: IEEE International Conference on Intelligent Robots and Systems (IROS) (2015)
-  Meagher, D.J.: Octree encoding: A new technique for the representation, manipulation and display of arbitrary 3-d objects by computer. Electrical and Systems Engineering Department, Rensselaer Polytechnic Institute, Image Processing Laboratory (1980)
-  Nguyen, A., Yosinski, J., Clune, J.: Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In: Computer Vision and Pattern Recognition (CVPR) (2015)
-  Paszke, A., Gross, S., Chintala, S., Chanan, G., Yang, E., DeVito, Z., Lin, Z., Desmaison, A., Antiga, L., Lerer, A.: Automatic differentiation in pytorch (2017)
-  Phong, B.T.: Illumination for computer generated pictures. Communications of the ACM 18(6), 311–317 (1975)
-  Qi, C.R., Yi, L., Su, H., Guibas, L.J.: Pointnet++: Deep hierarchical feature learning on point sets in a metric space. In: Neural Information Processing Systems (NIPS) (2017)
-  Qi, C.R., Su, H., Nießner, M., Dai, A., Yan, M., Guibas, L.: Volumetric and multi-view cnns for object classification on 3d data. In: Computer Vision and Pattern Recognition (CVPR) (2016)
-  Riegler, G., Ulusoy, A.O., Geiger, A.: Octnet: Learning deep 3d representations at high resolutions. In: Computer Vision and Pattern Recognition (CVPR) (2017)
-  Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A.C., Fei-Fei, L.: Imagenet large scale visual recognition challenge. International Journal of Computer Vision (IJCV) 115(3), 211–252 (2015)
-  Sedaghat, N., Zolfaghari, M., Amiri, E., Brox, T.: Orientation-boosted voxel nets for 3d object recognition. British Machine Vision Conference (BMVC) (2017)
-  Sfikas, K., Theoharis, T., Pratikakis, I.: Exploiting the panorama representation for convolutional neural network classification and retrieval. In: Eurographics Workshop on 3D Object Retrieval (2017)
-  Shen, Y., Feng, C., Yang, Y., Tian, D.: Neighbors do help: Deeply exploiting local structures of point clouds. arXiv preprint arXiv:1712.06760 (2017)
-  Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)
-  Su, H., Maji, S., Kalogerakis, E., Learned-Miller, E.G.: Multi-view convolutional neural networks for 3d shape recognition. In: International Conference on Computer Vision (ICCV) (2015)
-  Qi, C.R., Su, H., Mo, K., Guibas, L.J.: Pointnet: Deep learning on point sets for 3d classification and segmentation. In: Computer Vision and Pattern Recognition (CVPR) (2017)
-  Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I.J., Fergus, R.: Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199 (2013)
-  Wang, C., Pelillo, M., Siddiqi, K.: Dominant set clustering and pooling for multi-view 3d object recognition. In: British Machine Vision Conference (BMVC) (2017)
-  Wang, P.S., Liu, Y., Guo, Y.X., Sun, C.Y., Tong, X.: O-cnn: Octree-based convolutional neural networks for 3d shape analysis. ACM Transactions on Graphics (SIGGRAPH) 36(4) (2017)
-  Wang, Y., Sun, Y., Liu, Z., Sarma, S.E., Bronstein, M.M., Solomon, J.M.: Dynamic graph cnn for learning on point clouds. arXiv preprint arXiv:1801.07829 (2018)
-  Wu, Z., Song, S., Khosla, A., Yu, F., Zhang, L., Tang, X., Xiao, J.: 3d shapenets: A deep representation for volumetric shapes. In: Computer Vision and Pattern Recognition (CVPR) (2015)
-  Zaheer, M., Kottur, S., Ravanbakhsh, S., Poczos, B., Salakhutdinov, R.R., Smola, A.J.: Deep sets. In: Neural Information Processing Systems (NIPS) (2017)
Appendix 0.A A Deeper Look at 3D Shape Classifiers: Supplementary Material
In the supplementary material we include (i) accuracy as a function of the number of views in a MVCNN model, (ii) examples of rendered images from 12 views, (iii) enhanced adversarial examples, and (iv) class confusion matrices of different models.
0.a.1 Effect of number of views for MVCNN
We investigate the effect of the number of views on the accuracy of the model. Results using different numbers of views are shown in Figure 8. The views are selected radially symmetrically, i.e., the object is successively rotated by equal increments to generate each set of views. Even with one view the MVCNN outperforms both VoxNet and PointNet. Performance is close to optimal with only four views per model.
0.a.2 Comparison of rendering
0.a.3 Enhanced adversarial examples
Figure 10 shows adversarial examples for VoxNet and VoxMVCNN with the adversarial noise amplified for improved visibility. Since the values of ε are small for these adversarial examples, we amplify the noise before rendering. The two rows correspond to the last two columns of Figure 6 in the paper.
0.a.4 Confusion matrices
Here we compare the class confusion matrices of the different models. Figure 11 shows the differences of the confusion matrices between (a) MVCNN and PointNet, (b) MVCNN and VoxNet, and (c) PointNet and VoxNet. The difference of confusion matrices (A - B) is obtained by subtracting the confusion matrix of method B from that of method A. Higher values on the main diagonal and lower values off the diagonal indicate that method A performs better than method B. Figure 11(a-b) shows that MVCNN outperforms PointNet and VoxNet; there are only a few confusions where MVCNN is worse than VoxNet or PointNet, involving classes that are hard to distinguish even for humans. Figure 11(c) shows that PointNet performs better than VoxNet in some cases but has more confusion in others.