Canonical Capsules
We propose an unsupervised capsule architecture for 3D point clouds. We compute capsule decompositions of objects through permutation-equivariant attention, and self-supervise the process by training with pairs of randomly rotated objects. Our key idea is to aggregate the attention masks into semantic keypoints, and use these to supervise a decomposition that satisfies the capsule invariance/equivariance properties. This not only enables the training of a semantically consistent decomposition, but also allows us to learn a canonicalization operation that enables object-centric reasoning. In doing so, we require neither classification labels nor manually-aligned training datasets to train. Yet, by learning an object-centric representation in an unsupervised manner, our method outperforms the state-of-the-art on 3D point cloud reconstruction, registration, and unsupervised classification. We will release the code and dataset to reproduce our results as soon as the paper is published.
Understanding objects is one of the core problems of computer vision
[31, 15, 35]. While this task has traditionally relied on large annotated datasets [38, 21], unsupervised approaches [7] have emerged to remove the need for labels. Recently, researchers have attempted to extend these methods to work on 3D point clouds [53], but the field of unsupervised 3D learning remains relatively uncharted. Conversely, researchers have been extensively investigating 3D deep representations for shape auto-encoding (also at times referred to as reconstruction) [55, 19, 32, 17], making one wonder whether these discoveries can now benefit from unsupervised learning for tasks
other than auto-encoding. Importantly, these recent methods for 3D deep representation learning are not entirely unsupervised. Whether using point clouds [55], meshes [19], or implicits [32], they owe their success to the inductive bias of the training dataset. Specifically, all 3D models in the popular ShapeNet [5] dataset are “object-centric” – they are pre-canonicalized to a unit bounding box and, even more importantly, oriented so that object semantics are synchronized with the Euclidean frame axes (e.g., the airplane cockpit always points along the same axis, and car wheels always touch the same ground plane). Differentiable 3D decoders are heavily affected by the consistent alignment of their output with a Euclidean frame [10, 17]: local-to-global transformations cannot easily be learnt by fully connected layers. Thus, these methods fail in the absence of pre-alignment.
With our method, Canonical Capsules, we tackle the problem of training 3D deep representations in a truly unsupervised fashion through a capsule formulation [22]. In capsule networks, a scene is perceived via its decomposition into part hierarchies, and each part is represented with a (pose, descriptor) pair: (1) the capsule pose specifies the frame of reference of a part, and hence should be transformation equivariant; (2) the capsule descriptor specifies the appearance of a part, and hence should be transformation invariant. In our evaluation we show how these representations are effective in a variety of tasks, but “can we train them effectively in an unsupervised setting?”
In Canonical Capsules, we propose a novel architecture to compute a K-part decomposition of a point cloud via an attention mechanism; see Figure 1. Our network is trained by feeding it pairs of randomly rotated copies of the same shape. We aggregate the K parts into K keypoints, so that keypoints across shapes are in semantic one-to-one correspondence. Equivariance is then enforced by requiring the two keypoint sets to differ only by the known (relative) transformation; to realize invariance, we simply ask that the descriptors of the two instances match. Note that we train such a decomposition in a fully unsupervised fashion, and that the network only ever sees randomly rotated point clouds.
In Canonical Capsules, we exploit our decomposition to recover a canonical frame that allows unsupervised “object-centric” learning of 3D deep representations without requiring a semantically aligned dataset. We achieve this by regressing canonical capsule poses from capsule descriptors via a deep network, and computing a canonicalizing transformation by solving, yet again, a shape-matching problem. This not only allows more effective shape auto-encoding, but our experiments confirm that it results in a latent representation that is more effective in unsupervised classification tasks. Note that, like our decomposition, our canonicalizing transformations are also learnt in a fully unsupervised fashion, by training only on randomly rotated point clouds.
In summary, in this paper we:
- propose an architecture for unsupervised learning with 3D point clouds based on capsules;
- demonstrate how capsule decompositions can be learnt via straightforward transformation augmentation;
- enable unsupervised learning to be object-centric by introducing a learned canonical frame of reference;
- achieve state-of-the-art performance in unsupervised 3D point cloud registration, auto-encoding/reconstruction, and unsupervised classification.
Our technique proposes a capsule architecture for learning 3D representations that are usable across a range of unsupervised tasks: from classification [55] to reconstruction and registration.
Convolutional Neural Networks lack equivariance to rigid transformations, despite the pivotal role such transformations play in describing the structure of the 3D scene behind a 2D image. One promising approach to overcome this shortcoming is to add equivariance under a group action in each layer. In [44] a rotation-equivariant network is introduced by interpreting each layer as a sum of group representations and convolutions as tensor products of representations. To the best of our knowledge, these models have not been extended to more general rigid transformations. In our work, we remove the need for a globally equivariant network by canonicalizing the input.

Capsule Networks [22], on the other hand, have been proposed to overcome this issue towards a relational and hierarchical understanding of natural images. Techniques such as Dynamic Routing [37, 47] and EM algorithms [24] have been proposed as potential architectures, and have found applications ranging from medical imaging [1] to language understanding [57]. Of particular relevance to our work are methods that apply capsule networks to 3D input data [58, 59, 41], but note these methods are not unsupervised, as they either rely on classification supervision [59], or on datasets that present a significant inductive bias in the form of pre-alignment [58]. In this paper, we take inspiration from the recent Stacked Capsule Auto-Encoders [29], where the authors have shown how capsule-style reasoning is effective as long as primary capsules can be trained in an unsupervised fashion – by only using unsupervised reconstruction losses. The natural question, which we answer in this paper, is “how can we engineer networks that generate primary 3D capsules in an unsupervised fashion?”
Reconstructing 3D objects requires effective inductive biases about 3D vision and 3D geometry. When the input is images, the core challenge is how to encode 3D projective geometry concepts into the model. This can be achieved by explicitly modeling multi-view geometry [27], by attempting to learn it [14], or by hybrid solutions [54]. But even when the input is 3D, there are still significant challenges. It is still not clear which is the 3D representation
that is most amenable to deep learning. Researchers proposed the use of meshes
[49, 30], voxels [50, 51], surface patches [19, 13, 11], and implicit functions [33, 32, 9]. Unfortunately, the importance of geometric structure (i.e. part-to-whole relationships) is often overlooked. Recent works have tried to close this gap by using part decompositions consisting of oriented boxes [45], ellipsoids [18, 17], convex polytopes [12], and grids [4]. However, as previously discussed, most of these still heavily rely on a pre-aligned training dataset; our paper attempts to bridge this gap, allowing learning of structured 3D representations without requiring pre-aligned data. One way to circumvent the requirement of pre-aligned datasets is to rely on methods capable of registering a point cloud into a canonical frame. The recently proposed CaSPR [36] fulfills this promise, but requires ground-truth canonical point clouds in the form of normalized object coordinate spaces [48] for supervision. Similarly, [20] regresses each view’s pose relative to the canonical pose, but still requires weak annotations in the form of multiple partial views. In contrast to these methods, our solution is completely unsupervised. Several registration techniques based on deep learning have been proposed [52, 56], some even using semantic keypoints and symmetry to perform the task [16]. These methods typically register a pair of instances from the same class, but lack the ability to consistently register all instances to a shared canonical frame.
Our network trains on unaligned point clouds, as illustrated in Figure 2: we train a network that decomposes point clouds into parts, and enforce invariance/equivariance through a Siamese training setup [43]. We then canonicalize the point cloud to a learnt frame of reference, and perform auto-encoding in this coordinate space. The losses employed to train the encoder $\mathcal{E}$, the decoder $\mathcal{D}$, and the canonicalizer $\mathcal{K}$ will be covered in Section 3.1, while the details of their architectures are in Section 3.2.
In more detail, given a point cloud $\mathbf{P} \in \mathbb{R}^{N \times 3}$ of $N$ points in 3 dimensions, we perturb it with two random transformations $\mathbf{T}_1, \mathbf{T}_2$ to produce point clouds $\mathbf{P}_1, \mathbf{P}_2$. We then use a shared permutation-equivariant capsule encoder $\mathcal{E}$ to compute a $K$-fold attention map $\mathbf{A} \in [0,1]^{K \times N}$ for the $K$ capsules, as well as a per-point feature map $\mathbf{F}$ with $C$ channels:

$$\mathbf{A}, \mathbf{F} = \mathcal{E}(\mathbf{P}) \tag{1}$$
where we drop the superscript indexing the Siamese branch for simplicity. From these attention masks, we then compute, for the $k$-th capsule, its pose $\boldsymbol{\theta}_k$, parameterized by its location in 3D space, and the corresponding capsule descriptor $\boldsymbol{\beta}_k$:

$$\boldsymbol{\theta}_k = \frac{\sum_n \mathbf{A}_{k,n}\,\mathbf{P}_n}{\sum_n \mathbf{A}_{k,n}}, \qquad \boldsymbol{\beta}_k = \frac{\sum_n \mathbf{A}_{k,n}\,\mathbf{F}_n}{\sum_n \mathbf{A}_{k,n}} \tag{2}$$
Hence, as long as $\mathbf{A}$ is invariant w.r.t. rigid transformations of $\mathbf{P}$, the pose $\boldsymbol{\theta}_k$ will be transformation equivariant, and the descriptor $\boldsymbol{\beta}_k$ will be transformation invariant. Note that this simplifies the design (and training) of the encoder $\mathcal{E}$, which only needs to be invariant, rather than equivariant [44, 41].
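The attention-weighted aggregation in (2) can be sketched in a few lines of NumPy; the function name and array shapes here are illustrative assumptions, not the authors' code:

```python
import numpy as np

def aggregate_capsules(points, attention, features):
    """Aggregate per-point attention into capsule poses and descriptors (cf. Eq. 2).

    points    : (N, 3) input point cloud
    attention : (K, N) non-negative attention of K capsules over the N points
    features  : (N, C) per-point feature map
    Returns (K, 3) poses and (K, C) descriptors, each a normalized
    attention-weighted average over the points.
    """
    weights = attention / attention.sum(axis=1, keepdims=True)  # rows sum to 1
    poses = weights @ points          # (K, 3) attention-weighted centroids
    descriptors = weights @ features  # (K, C) attention-weighted features
    return poses, descriptors
```

Because the weights are invariant under a rigid motion of the points, the poses transform with the points (equivariance) while the descriptors depend only on the features.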
Simply enforcing invariance and equivariance with the above framework is not enough to learn 3D representations that are object-centric, as we lack an (unsupervised) mechanism to bring information into a shared “object-centric” reference frame. Furthermore, the “right” canonical frame is nothing but a convention, thus we need a mechanism that allows the network to make a choice – a choice, however, that must then be consistent across all objects. For example, a learnt canonical frame where the cockpit of airplanes is consistently positioned along one axis is just as good as one where it is positioned along another axis. To address this, we propose to link the capsule descriptors to the capsule poses in canonical space; that is, we ask that objects with similar appearance be located in similar Euclidean neighborhoods in canonical space. We achieve this by regressing canonical capsule poses (i.e. canonical keypoints) from the descriptors via a fully connected deep network $\mathcal{K}$:
$$\bar{\boldsymbol{\theta}} = \mathcal{K}(\boldsymbol{\beta}_1, \ldots, \boldsymbol{\beta}_K) \tag{3}$$
As fully connected layers are biased towards learning low-frequency representations [26], this regressor also acts as a regularizer that enforces semantic locality.
Finally, in the learnt canonical frame of reference, to train the capsule descriptors via auto-encoding, we reconstruct the point cloud with per-capsule decoders $\mathcal{D}_k$:
$$\bar{\mathbf{P}} = \bigcup_k \mathcal{D}_k(\boldsymbol{\beta}_k) \tag{4}$$
where $\bigcup$ denotes the union operator. The canonicalizing transformation $(\mathbf{R}, \mathbf{t})$ can be readily computed by solving a shape-matching problem [40], thanks to the property that our capsule poses and regressed keypoints are in one-to-one correspondence:
$$\mathbf{R}, \mathbf{t} = \operatorname*{arg\,min}_{\mathbf{R} \in SO(3),\; \mathbf{t} \in \mathbb{R}^3} \sum_k \left\| (\mathbf{R}\boldsymbol{\theta}_k + \mathbf{t}) - \bar{\boldsymbol{\theta}}_k \right\|_2^2 \tag{5}$$
While the reconstruction in (4) is in the canonical frame, note it is trivial to transform the point cloud back to the original coordinate system after reconstruction, as the transformation $(\mathbf{R}, \mathbf{t})$ is available.
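The shape-matching problem in (5) has a classic closed-form solution via SVD (Kabsch/Procrustes), exploiting the fact that capsule poses and regressed keypoints are already in correspondence. A minimal sketch (function name is illustrative):

```python
import numpy as np

def fit_rigid(source, target):
    """Least-squares rigid transform (R, t) mapping source onto target.

    source, target : (K, 3) corresponding keypoints; no matching step is
    needed since the correspondence is one-to-one. Closed-form solution
    via SVD of the cross-covariance (Kabsch algorithm).
    """
    mu_s, mu_t = source.mean(0), target.mean(0)
    H = (source - mu_s).T @ (target - mu_t)     # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # sign correction to avoid reflections (keep det(R) = +1)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = mu_t - R @ mu_s
    return R, t
```

Because every step (means, SVD) is differentiable almost everywhere, the transformation can be used inside the training losses.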
As common in unsupervised methods, our framework relies on a number of losses that control the different characteristics we seek to obtain in our representation. Note how all these losses are unsupervised, and require no labels. We organize the losses according to the portion of the network they supervise: decomposition, canonicalization, and reconstruction.
While a transformation invariant encoder architecture should be sufficient to realize the desired equivariant/invariant properties, this does not prevent the encoder from producing trivial solutions/decompositions once trained. As capsule poses should be transformation equivariant, the poses of the two rotation augmentations $\mathbf{P}_1$ and $\mathbf{P}_2$ should only differ by the (known) relative transformation:
$$\mathcal{L}_{\text{equivariance}} = \sum_k \left\| \mathbf{T}_{12}\!\left(\boldsymbol{\theta}_k^{(1)}\right) - \boldsymbol{\theta}_k^{(2)} \right\|_2^2 \tag{6}$$

where $\mathbf{T}_{12} = \mathbf{T}_2 \circ \mathbf{T}_1^{-1}$ is the known relative transformation between the two branches.
Conversely, capsule descriptors should be transformation invariant, and as the two input point clouds are of the same object, the corresponding capsule descriptors should be identical:

$$\mathcal{L}_{\text{invariance}} = \sum_k \left\| \boldsymbol{\beta}_k^{(1)} - \boldsymbol{\beta}_k^{(2)} \right\|_2^2 \tag{7}$$
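The two Siamese losses above can be sketched as follows; the mean-squared form and the function name are assumptions for illustration, since the paper's exact normalization is not reproduced here:

```python
import numpy as np

def siamese_losses(poses1, poses2, desc1, desc2, R12, t12):
    """Unsupervised equivariance/invariance losses on the two branches.

    poses1, poses2 : (K, 3) capsule poses of the two rotated copies
    desc1,  desc2  : (K, C) capsule descriptors
    R12, t12       : known relative rigid transform taking branch 1 to branch 2
    Equivariance (cf. Eq. 6): poses must differ only by (R12, t12).
    Invariance  (cf. Eq. 7): descriptors must match.
    """
    l_equiv = np.mean(np.sum((poses1 @ R12.T + t12 - poses2) ** 2, axis=1))
    l_inv = np.mean(np.sum((desc1 - desc2) ** 2, axis=1))
    return l_equiv, l_inv
```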
We further regularize the capsule decomposition to ensure that each of the $K$ heads represents roughly the same “amount” of the input point cloud, hence preventing degenerate (zero-attention) capsules. This is achieved by penalizing the variance of the total attention across heads:

$$\mathcal{L}_{\text{equilibrium}} = \sum_k \left( a_k - \bar{a} \right)^2, \qquad \bar{a} = \tfrac{1}{K} \textstyle\sum_k a_k \tag{8}$$

where $a_k = \sum_n \mathbf{A}_{k,n}$ denotes the total attention exerted by the $k$-th head on the point cloud.
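A minimal NumPy sketch of this equilibrium regularizer (the variance form is taken from the prose; the function name is illustrative):

```python
import numpy as np

def equilibrium_loss(attention):
    """Penalize uneven total attention across the K heads (cf. Eq. 8).

    attention : (K, N) attention map. totals[k] is the total attention of
    head k over the point cloud; penalizing its variance over heads pushes
    each capsule to cover roughly N/K points, preventing degenerate
    (zero-attention) capsules.
    """
    totals = attention.sum(axis=1)   # (K,) total attention per head
    return float(np.var(totals))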
Finally, to facilitate the training process, we ask capsules to learn a localized representation of geometry. We express the spatial extent of a capsule by computing the first-order moment of the represented points with respect to the capsule pose $\boldsymbol{\theta}_k$:

$$\mathcal{L}_{\text{localization}} = \sum_k \sum_n \mathbf{A}_{k,n} \left\| \mathbf{P}_n - \boldsymbol{\theta}_k \right\|_2 \tag{9}$$
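This localization term can be sketched as follows. Note that using plain (first-order) distances rather than squared distances follows the "first-order moment" wording above, but is otherwise an assumption, as is the attention normalization:

```python
import numpy as np

def localization_loss(points, attention, poses):
    """Attention-weighted spatial extent of each capsule (cf. Eq. 9).

    points : (N, 3), attention : (K, N), poses : (K, 3).
    Sums the attention-weighted distances of points to their capsule pose,
    encouraging each capsule to represent a compact region of the shape.
    """
    # (K, N) distances from every point to every capsule pose
    d = np.linalg.norm(points[None, :, :] - poses[:, None, :], axis=-1)
    w = attention / attention.sum(axis=1, keepdims=True)
    return float((w * d).sum())
```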
To train our canonicalizer $\mathcal{K}$, we relate the predicted capsule poses $\boldsymbol{\theta}_k$ to the regressed canonical capsule poses $\bar{\boldsymbol{\theta}}_k$ via the optimal rigid transformation $(\mathbf{R}, \mathbf{t})$ from (5):

$$\mathcal{L}_{\text{canonical}} = \sum_k \left\| (\mathbf{R}\boldsymbol{\theta}_k + \mathbf{t}) - \bar{\boldsymbol{\theta}}_k \right\|_2^2 \tag{10}$$
Recall that $\mathbf{R}$ and $\mathbf{t}$ are obtained through a purely differentiable process. Thus, this loss forces the aggregated pose $\boldsymbol{\theta}_k$ to agree with the one that goes through the regression path, $\bar{\boldsymbol{\theta}}_k$. Here, $\bar{\boldsymbol{\theta}}$ is regressed solely from the set of capsule descriptors, hence similar shapes will result in similar canonical keypoints, and the coordinate system of $\bar{\boldsymbol{\theta}}$ is one that employs Euclidean space to encode semantics.
We briefly summarize our implementation details, including the network architecture; for further details, please refer to the supplementary material.
Our architecture is based on the one suggested in [42]: a PointNet-like architecture with residual connections and attentive context normalization. We use Batch Normalization instead of Group Normalization, as it trained faster in our experiments. We further extend their method to have multiple attention maps, where each attention map corresponds to a capsule.

To regress the canonical capsule locations, we simply concatenate the descriptors and apply a series of fully connected layers with ReLU activations. At the output layer, we use a linear activation, and further subtract the mean of the outputs to make our regressed locations zero-centered in the canonical frame.

As our descriptors are only approximately rotation invariant (via augmentation), we found it useful to re-extract the capsule descriptors after canonicalization. Specifically, we recompute the descriptors with the same encoder setup, but on the canonicalized point cloud instead of the input point cloud, and use them for reconstruction; we validate this choice empirically in Section 4.5.
We first discuss the experimental setup in Section 4.1, and then validate our method on a variety of tasks: auto-encoding (Section 4.2), registration (Section 4.3), and unsupervised classification (Section 4.4). Importantly, while the task differs, our learning process remains the same: we learn capsules by reconstructing objects in a learnt canonical frame. We conclude with ablation studies in Section 4.5. For additional ablations and qualitative results, please refer to our supplementary material.
To evaluate our method, we rely on the ShapeNet (Core) dataset [5]. We follow the category choices from AtlasNetV2 [13], using the airplane and chair classes for single-category experiments, while for multi-category experiments we use all 13 classes: airplane, bench, cabinet, car, chair, monitor, lamp, speaker, firearm, couch, table, cellphone, and watercraft. To make our results most compatible with those reported in the literature, we also use the same train/test splits as in AtlasNetV2 [13].^1 Unless noted otherwise, we randomly sample 1024 points from the object surface of each shape to create our 3D point clouds.

^1 Note that our numbers are slightly smaller than in [13], as they ignore the fact that the last batch is smaller than the rest (i.e. incomplete).
As discussed in the introduction, ShapeNet (Core) contains substantial inductive bias in the form of consistent semantic alignment. To remove this bias, we create random transformations and apply them to each point cloud. We first generate uniformly sampled random rotations, and add uniformly sampled random translations whose range is limited relative to the bounding volume of the shape. Note the relatively limited translation range is chosen to give state-of-the-art methods a chance to compete with our solution. We then use the relative transformation between the point clouds, extracted from this ground-truth transformation, to evaluate our methods. We refer to this unaligned version of the ShapeNet Core dataset as the unaligned setup, and to using the vanilla ShapeNet Core dataset as the aligned setup. For the aligned setup, as there is no need for equivariance adaptation, we simply train our method without the random transformations, and so the equivariance and invariance losses are not used. This setup simply demonstrates how Canonical Capsules would perform in the presence of a dataset bias.
We emphasize that properly generating random rotations is important. While some existing works have generated them by uniformly sampling the degrees of freedom of an Euler-angle representation, this is known to be an incorrect way to sample random rotations [2], leading to biases in the generated dataset; see supplementary material.

For all our experiments we use the Adam optimizer [28] with learning-rate decay. In the aligned setup, we train for the same number of epochs as the AtlasNetV2 [13] original setup; for the unaligned setting, as the problem is harder, we train for a larger number of epochs. Unless stated otherwise, we use K=10 capsules and capsule descriptors of dimension 128. We train three models with our method: two that are single-category (for airplanes and for chairs), and one that is multi-category (all 13 classes). To set the weights for each loss term, we rely on the reconstruction performance (CD) in the training set, setting the weights to one for all but two of the terms. In the aligned case, because the equivariance and invariance losses are not needed (they are always zero), we reduce the weights of the other decomposition losses.
We evaluate the performance of our method for the task that was used to train the network – reconstruction / auto-encoding – against two baselines (trained in both single-category and multi-category variants):
- AtlasNetV2 [13], the state-of-the-art auto-encoder, which utilizes a multi-head patch-based decoder;
- 3D-PointCapsNet [58], an auto-encoder for 3D point clouds that utilizes a capsule architecture.
We do not compare against [41], as unfortunately no code is publicly available, and the paper is yet to be peer-reviewed.
We achieve state-of-the-art performance in both the aligned and unaligned settings (Table 1). The wider margin in the unaligned setup indicates that tackling the realistic, unbiased case degrades both AtlasNetV2 [13] and 3D-PointCapsNet [58] more than our method. We note that the results in this table differ slightly from what is reported in the original papers, as we use 1024 points to speed up our experiments; however, we show in Section 4.5 that the same trends hold regardless of the number of points, and match what is reported in the original papers when 2500 points are used.
We illustrate our decomposition-based reconstruction of 3D point clouds, as well as the reconstructions of 3D-PointCapsNet [58] and AtlasNetV2 [13]. As shown, even in the unaligned setup, our method is able to provide semantically consistent capsule decompositions – the wings of the airplane have consistent colours, and when aligned in the canonical frame, the different airplane instances are well-aligned. Compared to AtlasNetV2 [13] and 3D-PointCapsNet [58], the reconstruction quality is also visibly improved: we better preserve details along the engines of the airplane and the thin structures of the bench; note also that the decompositions are semantically consistent among the examples we show. Results are better appreciated in our supplementary material, where we visualize performance while continuously traversing the space of rotations.
We now evaluate the performance of our method on its capability to register 3D point clouds, and compare against three baselines:
- Deep Closest Points (DCP) [52], a deep learning-based point cloud registration method;
- DeepGMR–XYZ [56], where raw XYZ coordinates are used as input instead of rotation-invariant features;
- Our method–RRI, a variant of our technique where we use RRI features [6] as the sole input to our architecture.
For our method using RRI features, we follow the DeepGMR training protocol and train for 100 epochs, while for DCP and DeepGMR we use the authors’ official implementation.
We report our results in Table 2. When RRI is used as input, our method is on par with DeepGMR, up to a level where registration is near perfect – alignment differences are indiscernible when errors are in this ballpark. Moreover, our method achieves the best performance when RRI is not used. We note that the performance of Deep Closest Points [52] is not as good as reported in the original paper, as we uniformly draw rotations from the full rotation space. When a sub-portion of the rotation space is used – a quarter of what we are using – DCP performs relatively well (0.008 in the multi-class experiment). While curriculum learning could be used to enhance the performance of DCP, our technique does not need to rely on such more complex training techniques.
We further note that, while RRI delivers good registration performance, using RRI features causes the learnt canonicalization to fail – the canonicalization loss does not converge. This hints that RRI features may throw away too much information to achieve transformation invariance. Our method using raw XYZ coordinates as input, on the other hand, provides comparable registration performance, and is able to do significantly more than just registration (classification, reconstruction).
| | Airplane | Chair | Multi |
|---|---|---|---|
| Deep Closest Points [52] | 0.318 | 0.160 | 0.131 |
| DeepGMR–XYZ [56] | 0.079 | 0.082 | 0.077 |
| Our method–XYZ | 0.024 | 0.027 | 0.070 |
| DeepGMR–RRI [56] | 0.0001 | 0.0001 | 0.0001 |
| Our method–RRI | 0.0006 | 0.0009 | 0.0016 |
| | Aligned SVM | Aligned K-Means | Unaligned SVM | Unaligned K-Means |
|---|---|---|---|---|
| AtlasNetV2 | 94.07 | 61.66 | 71.13 | 14.59 |
| 3D-PointCapsNet | 93.81 | 65.87 | 64.85 | 17.12 |
| Our method | 94.21 | 69.82 | 87.17 | 43.86 |
| | Full | | | | | |
|---|---|---|---|---|---|---|
| CD | 1.08 | 1.09 | 1.09 | 1.16 | 1.45 | 1.61 |
Beyond reconstruction and registration, which are tasks directly relevant to the losses used for training, we evaluate the usefulness of our method via a classification task that is not related in any way to those losses. We compute the features from the auto-encoding methods compared in Section 4.2 – AtlasNetV2 [13], 3D-PointCapsNet [58], and our method (where we build features by combining poses with descriptors) – and use them to perform 13-way classification with two different techniques:
- We train an SVM classifier on the learnt features using the ground-truth labels;
- We perform unsupervised K-Means clustering [3, Ch. 9] and then label each cluster via bipartite matching with the actual labels through the Hungarian algorithm.
Note that the former provides an upper bound for unsupervised classification, while better performance on the latter implies that the learnt features are able to separate the classes into clusters that are compact (in a Euclidean sense). We report classification performance in Table 3.
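The cluster-labeling step above can be sketched as follows, using SciPy's Hungarian solver to pick the cluster-to-class assignment that maximizes agreement (function name and signature are illustrative):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def cluster_accuracy(pred_clusters, labels, n_classes):
    """Classification accuracy of K-Means clusters after bipartite matching.

    Builds the cluster-vs-class contingency table, finds the one-to-one
    cluster -> class assignment maximizing agreeing samples (Hungarian
    algorithm), and reports the resulting accuracy.
    """
    contingency = np.zeros((n_classes, n_classes), dtype=int)
    for c, y in zip(pred_clusters, labels):
        contingency[c, y] += 1
    rows, cols = linear_sum_assignment(-contingency)  # negate to maximize
    return contingency[rows, cols].sum() / len(labels)
```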
Note how our method provides the best results in all cases, and when the dataset is not pre-aligned the difference is significant. This shows that, while 3D-PointCapsNet and AtlasNetV2 are able to somewhat auto-encode point clouds in the unaligned setup, what they learn does not translate well to classification. In contrast, the features learnt with Canonical Capsules are more related to the semantics of the object, which helps classification.
The performance gap becomes wider when K-Means is used – even in the aligned case. This suggests that the features extracted by Canonical Capsules are better suited for other unsupervised tasks, having a feature space that is close to being Euclidean in terms of semantics. The difference is striking in the unaligned setup. We argue that these results emphasize that the capsule framework – jointly learning the invariances and equivariances in the data – is cardinal to unsupervised learning [25, 23].

We further analyze the different components of Canonical Capsules that affect performance. To make the computational cost manageable, we perform all experiments in this section with the airplane category and the unaligned setup, unless otherwise noted.
We first analyze the importance of each loss term, with the exception of the reconstruction loss, which is necessary for training. Among our losses, the most important appears to be the equilibrium loss of (8), as distributing approximately the same number of points to each capsule enables the model to fully utilize its capacity.
| | Airplane | All |
|---|---|---|
| One-shot alignment | 1.12 | 2.27 |
| Our method | 1.08 | 2.25 |
A naïve alternative to our learnt canonicalizer would be to use one point cloud as a reference to align to. Using the canonicalizer provides improved reconstruction performance over this naïve approach, removes the dependency on the choice of the reference point cloud, and allows our method to work effectively when dealing with multi-class canonicalization.
We evaluate the effectiveness of the descriptor enhancement strategy described in Section 3.2, reporting the reconstruction performance with and without the enhancement. Recomputing the descriptors in the canonical frame helps when dealing with the unaligned setup. Note that even without this enhancement, our method outperforms the state-of-the-art.
| | AtlasNetV2 [13] | Ours (w/o re-extraction) | Ours (w/ re-extraction) |
|---|---|---|---|
| Aligned | 1.28 | 0.96 | 0.99 |
| Unaligned | 2.80 | 2.12 | 1.08 |
Since the capsule pose is inferred by weighted averaging of the points with the attention map in (2), we also considered directly adding an invariance loss on the attention map instead of the equivariance loss on the pose. This variant provides slightly degraded performance compared to our baseline. We hypothesize that this is because the pose loss directly supervises the end goal (capsule pose equivariance), whereas supervising the attention map is indirect.
To speed up experiments we have mostly used 1024 points, but in the table we show that our findings are consistent regardless of the number of points used. Note that the AtlasNetV2 [13] results are very similar to what is reported in the original paper. The slight differences are due to random subsets used in AtlasNetV2 that are not fully reproducible.^2

^2 And to a minor bug in the evaluation code (non-deterministic test set creation) that we have already communicated to the authors of [19].
In this paper, we provide an unsupervised framework to train capsule decompositions for 3D point clouds. We rely on a Siamese training setup, circumventing the customary need to train on pre-aligned datasets. Despite being trained in an unsupervised fashion, our representation achieves state-of-the-art performance across auto-encoding/reconstruction, registration and classification tasks. These results are made possible by allowing the network to learn a canonical frame of reference. We interpret this result as giving our neural networks a mechanism to construct a “mental picture” of a given 3D object – so that downstream tasks are executed within an object-centric coordinate frame.
There are many ways in which our work can be extended. As many objects have natural symmetries [13], providing our canonicalizer a way to encode such a prior is likely to further improve the representation. It would be interesting to investigate whether, by providing some part annotations (via few-shot learning), the network could learn to favor decompositions with higher-level semantics (e.g. where the wing of an airplane is a single capsule). Our decomposition has a single layer, and it would be interesting to investigate how to effectively engineer multi-level decompositions [46]; one way could be to over-decompose the input in a redundant fashion (with large K), and use downstream layers that “select” the decomposition heads to be used [8]. We would also like to extend our results to more “in-the-wild” 3D computer vision, and to understand whether learning object-centric representations is possible when incomplete (e.g. single-view [33]) data is given as input, when an entire scene with potentially multiple objects is given [39], when our measurement of the 3D world is a single 2D image [43], or by exploiting the persistence of objects in video.
This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant, NSERC Collaborative Research and Development Grant, Google, Compute Canada, and Advanced Research Computing at the University of British Columbia.
To facilitate reproduction of our results, we detail our architecture design for the capsule encoder $\mathcal{E}$, the decoder $\mathcal{D}$, and the canonicalizer $\mathcal{K}$.
The capsule encoder $\mathcal{E}$ takes in a point cloud and outputs a $K$-head attention map $\mathbf{A}$ and the feature map $\mathbf{F}$. Specifically, $\mathcal{E}$ is composed of 3 residual blocks. Similar to ACNe [42], each residual block consists of two hidden MLP layers with 128 neurons each. Each hidden MLP layer is followed by Attentive Context Normalization (ACN) [42], batch normalization [Ioffe and Szegedy, 2015], and a ReLU activation. To be able to attend to multiple capsules, we extend the ACN layer into a multi-headed ACN.

The idea of multi-headed ACN starts from the observation that different capsules may require attending to different parts. Hence, for each MLP layer where we apply ACN, we first train a fully-connected layer that creates a $K$-headed attention map from the features of this layer. This is similar to ACN, but instead of a single attention map, we now have $K$. The normalization process is similar to ACN afterwards – utilizing weighted moments of the features with the attention – but results in $K$ normalized outcomes instead of one. We then aggregate these normalization results into one by summing.
Specifically, given the feature map $\mathbf{F}^{(l)}$ at a given layer $l$, if we denote the weights and biases to be trained for the $k$-th attention head as $\mathbf{w}^{(k)}$ and $b^{(k)}$, we write:

$$\mathbf{a}^{(k)} = \operatorname{softmax}\!\left(\mathbf{w}^{(k)\top} \mathbf{F}^{(l)} + b^{(k)}\right) \tag{12}$$

We then compute the moments that are used to normalize in ACN, but now for each attention head:

$$\boldsymbol{\mu}^{(k)} = \sum_n \mathbf{a}_n^{(k)} \mathbf{F}_n^{(l)}, \qquad \left(\boldsymbol{\sigma}^{(k)}\right)^2 = \sum_n \mathbf{a}_n^{(k)} \left(\mathbf{F}_n^{(l)} - \boldsymbol{\mu}^{(k)}\right)^2 \tag{13}$$

which we then use to normalize and aggregate (sum) to get our final normalized feature map:

$$\hat{\mathbf{F}}_n^{(l)} = \sum_k \frac{\mathbf{F}_n^{(l)} - \boldsymbol{\mu}^{(k)}}{\sqrt{\left(\boldsymbol{\sigma}^{(k)}\right)^2 + \epsilon}} \tag{14}$$

where $\epsilon$ is a very small value to avoid numerical instability.
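The multi-headed normalization of (12)–(14) can be sketched in NumPy as follows. The softmax-over-points attention and the function signature are assumptions for illustration; only the weighted-moment normalization and the final summation over heads follow the description above:

```python
import numpy as np

def multi_head_acn(features, weights, biases, eps=1e-6):
    """Multi-headed Attentive Context Normalization (sketch).

    features : (N, C) per-point features at the current layer
    weights  : (K, C), biases : (K,) of the per-head attention layer
    Each head k predicts an attention distribution over the N points,
    computes attention-weighted mean/variance of the features (Eq. 13),
    normalizes with them, and the K results are summed (Eq. 14).
    """
    logits = features @ weights.T + biases            # (N, K)
    att = np.exp(logits - logits.max(axis=0))
    att = att / att.sum(axis=0, keepdims=True)        # softmax over points
    out = np.zeros_like(features)
    for k in range(weights.shape[0]):
        a = att[:, k:k + 1]                           # (N, 1), sums to 1
        mu = (a * features).sum(axis=0, keepdims=True)        # weighted mean
        var = (a * (features - mu) ** 2).sum(axis=0, keepdims=True)
        out += (features - mu) / np.sqrt(var + eps)   # normalize, then sum
    return out
```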
The decoder $\mathcal{D}$ is composed of $K$ per-capsule decoders $\mathcal{D}_k$. Each per-capsule decoder maps the capsule in the canonical frame (pose $\bar{\boldsymbol{\theta}}_k$, descriptor $\boldsymbol{\beta}_k$) to a group of points which should correspond to a part of the entire object. We then obtain the auto-encoded (reconstructed) point cloud by collecting the outputs of the per-capsule decoders and taking their union as in (4).
Specifically, each $\mathcal{D}_k$ consists of 3 hidden MLP layers of (1280, 640, 320) neurons, each followed by batch normalization and a ReLU activation. At the output layer, we use a fully connected layer followed by a Tanh activation to regress the coordinates of each point. Similarly to AtlasNetV2 [13], $\mathcal{D}_k$ additionally receives a set of trainable 2D grids and deforms them into 3D points, conditioned on the capsule descriptor. Finally, as each output point of $\mathcal{D}_k$ should be relative to the capsule pose, we translate the generated points by the corresponding capsule's canonicalized pose.
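The data flow of one per-capsule decoder can be sketched as follows; `mlp` stands in for the Tanh-activated MLP described above, and all names and shapes are illustrative assumptions:

```python
import numpy as np

def capsule_decoder(descriptor, pose, grid, mlp):
    """Per-capsule point decoder (sketch of the AtlasNetV2-style patch deform).

    descriptor : (C,) capsule descriptor
    pose       : (3,) canonicalized capsule pose
    grid       : (M, 2) trainable 2D grid points
    mlp        : callable mapping (M, 2 + C) -> (M, 3) points in [-1, 1]
    The grid, conditioned on the descriptor, is deformed into a 3D patch
    expressed relative to the capsule, then translated by the capsule pose.
    """
    cond = np.concatenate([grid, np.tile(descriptor, (grid.shape[0], 1))], axis=1)
    return mlp(cond) + pose   # output points are relative to the capsule pose
```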
The canonicalizer $\mathcal{K}$ learns to regress the canonical pose of each capsule from its descriptor. To do so, we concatenate the descriptors into a single global descriptor, which we then feed into a fully connected layer with a ReLU activation, followed by an additional fully connected layer producing the $3K$-dimensional output – the $K$ canonical poses. We do not apply any activation on the second layer. To make the poses zero-centered in the canonical frame, we further subtract the mean of the outputs.
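A minimal sketch of this regressor, with explicit weight matrices instead of a framework layer (layer widths are whatever `W1`/`W2` encode; the paper's hidden sizes are not assumed here):

```python
import numpy as np

def canonicalizer(descriptors, W1, b1, W2, b2):
    """Regress zero-centered canonical keypoints from capsule descriptors.

    descriptors : (K, C). The K descriptors are concatenated into one global
    vector, passed through a ReLU hidden layer and a linear output layer
    producing K 3D keypoints, which are then mean-subtracted so the
    canonical frame is zero-centered.
    """
    x = descriptors.reshape(-1)                 # concatenate to a global descriptor
    h = np.maximum(0.0, W1 @ x + b1)            # ReLU hidden layer
    kp = (W2 @ h + b2).reshape(-1, 3)           # (K, 3) keypoints, linear output
    return kp - kp.mean(axis=0, keepdims=True)  # zero-center in canonical frame
```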
For completeness, we further show qualitative results for auto-encoding on an aligned dataset, the common setup in prior work. As shown in Figure 4, our method provides the best reconstruction performance even in this case; for quantitative results, see Table 1. Interestingly, while our decoder architecture is very similar to AtlasNetV2 [13], our reconstructions are of higher quality: our method provides finer details at the propellers of the airplane, at the handle of the firearm, and at the back of the chair. This further supports the effectiveness of our capsule encoding.
To verify how the number of capsules affects performance, we test with a varying number of capsules: 5, 10, and 20. As the number of capsules is the only factor we wish to vary, we keep everything else identical, including the representation power, by reducing the dimension of the descriptor as more capsules are used: with 10 capsules we use a 128-dimensional descriptor, with 20 we use 64, and with 5 we use 256. Our experimental results show that the representation with 10 capsules achieves the best performance. Note that our method, even with a sub-optimal number of capsules, still outperforms the compared methods by a large margin.
| AtlasNetV2 [13] | 5 capsules | 10 capsules | 20 capsules |
|---|---|---|---|
| 2.80 | 1.25 | 1.08 | 1.13 |
Lastly, we revisit how rotations are randomly sampled to generate the augmentations used by Siamese training. Uniform sampling of Euler angles (i.e., yaw, pitch, roll) leads to a non-uniform coverage of the SO(3) manifold, as shown in Figure 5 (a). Due to this non-uniformity, the reconstruction quality is biased with respect to test-time rotations; see Figure 5 (b). Instead, by properly sampling [2], the reconstruction performance is much more uniform across the manifold; see Figure 5 (c). In our experiments, this leads to a significant difference in auto-encoding performance between proper uniform sampling and Euler-angle sampling.
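One standard way to sample SO(3) uniformly is via uniformly distributed unit quaternions (Shoemake's subgroup method), sketched below; whether this is the exact scheme of [2] is an assumption, but it avoids the Euler-angle bias described above:

```python
import numpy as np

def random_rotation(rng):
    """Uniform random 3x3 rotation matrix from a uniform unit quaternion.

    Three uniform variates (u1, u2, u3) give a quaternion (x, y, z, w)
    uniformly distributed on the unit 3-sphere, which is then converted
    to a rotation matrix. Unlike uniform Euler angles, this covers SO(3)
    uniformly.
    """
    u1, u2, u3 = rng.random(3)
    x = np.sqrt(1 - u1) * np.sin(2 * np.pi * u2)
    y = np.sqrt(1 - u1) * np.cos(2 * np.pi * u2)
    z = np.sqrt(u1) * np.sin(2 * np.pi * u3)
    w = np.sqrt(u1) * np.cos(2 * np.pi * u3)
    # standard quaternion -> rotation matrix conversion
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - z * w),     2 * (x * z + y * w)],
        [2 * (x * y + z * w),     1 - 2 * (x * x + z * z), 2 * (y * z - x * w)],
        [2 * (x * z - y * w),     2 * (y * z + x * w),     1 - 2 * (x * x + y * y)],
    ])
```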