1 Introduction
Capsule Networks (CapsNets) represent visual features using groups of neurons. Each group (called a “capsule”) encodes a feature and represents one visual entity. Grouping all the information about one entity into one computational unit makes it easy to incorporate priors such as “a part can belong to only one whole” by routing the entire part capsule to its parent whole capsule. Routing is mutually exclusive among parents, which ensures that one part cannot belong to multiple parents. Therefore, capsule routing has the potential to produce an interpretable hierarchical parsing of a visual scene. Such a structure is hard to impose in a typical convolutional neural network (CNN). This hierarchical relationship modeling has spurred a lot of interest in designing capsules and their routing algorithms
(Sabour et al., 2017; Hinton et al., 2018; Wang & Liu, 2018; Zhang et al., 2018; Li et al., 2018; Rajasegaran et al., 2019; Kosiorek et al., 2019). To perform routing, each lower-level capsule votes for the state of each higher-level capsule. The higher-level (parent) capsule aggregates the votes, updates its state, and uses the updated state to explain each lower-level capsule. The capsules that are well-explained end up routing more towards that parent. This process is repeated, with the vote-aggregation step taking into account the extent to which a part is routed to that parent. Therefore, the states of the hidden units and the routing probabilities are inferred iteratively, analogous to the M-step and E-step, respectively, of an Expectation-Maximization (EM) algorithm. Dynamic Routing
(Sabour et al., 2017) and EM routing (Hinton et al., 2018) can both be seen as variants of this scheme that share the basic iterative structure but differ in details, such as their capsule design, how the votes are aggregated, and whether a nonlinearity is used. We introduce a novel routing algorithm, which we call Inverted Dot-Product Attention Routing
. In our method, the routing procedure resembles an inverted attention mechanism, where dot products are used to measure agreement. Specifically, the higher-level (parent) units compete for the attention of the lower-level (child) units, instead of the other way around as is common in attention models. Hence, the routing probability directly depends on the agreement between the parent's pose (from the previous iteration step) and the child's vote for the parent's pose (in the current iteration step). We also propose two modifications to our routing procedure: (1) using Layer Normalization
(Ba et al., 2016) as the normalization, and (2) inferring the latent capsule states and routing probabilities jointly across multiple capsule layers (instead of doing it layer-wise). These modifications help scale the model up to more challenging datasets. Our model achieves performance comparable to state-of-the-art convolutional neural networks (CNNs), but with far fewer parameters, on CIFAR10 (95.14% test accuracy) and CIFAR100 (78.02% test accuracy). We also introduce a challenging task of recognizing single and multiple overlapping objects simultaneously. To be more precise, we construct the DiverseMultiMNIST dataset, which contains both single-digit and overlapping-digit images. With the same number of layers and the same number of neurons per layer, the proposed CapsNet converges better than a baseline CNN. Overall, we argue that with the proposed routing mechanism, it is no longer impractical to apply CapsNets to real-world tasks. We will release the source code to reproduce the experiments.
2 Capsule Network Architecture
An example of our proposed architecture is shown in Figure 1
. The backbone is a standard feed-forward convolutional neural network. The features extracted from this network are fed through another convolutional layer. At each spatial location, groups of channels are combined to form capsules, with each group constituting one capsule's pose. LayerNorm is then applied across the channels to obtain the primary capsules. This is followed by two convolutional capsule layers, and then by two fully-connected capsule layers. In the last capsule layer, each capsule corresponds to a class. These capsules are then used to compute logits that feed into a softmax to compute the classification probabilities. Inference in this network requires a feed-forward pass up to the primary capsules; after this, our proposed routing mechanism (discussed in the next section) takes over.
In prior work, each capsule has a pose and some way of representing an activation probability. In Dynamic Routing CapsNets (Sabour et al., 2017)
, the pose is represented by a vector and the activation probability is implicitly represented by the norm of the pose. In EM Routing CapsNets
(Hinton et al., 2018), the pose is represented by a matrix and the activation probability is determined by the EM algorithm. In our work, we consider a matrix-structured pose in a capsule. We denote the set of capsules in layer $L$ as $\mathbf{P}^{L}$ and the $i$-th capsule in layer $L$ as $\mathbf{p}_i^{L}$. The pose is kept in a vector form $\mathbf{p}_i^{L} \in \mathbb{R}^{d^{L}}$ and is reshaped to $\mathbb{R}^{\sqrt{d^{L}} \times \sqrt{d^{L}}}$ when representing it as a matrix, where $d^{L}$ is the number of hidden units grouped together to make a capsule in layer $L$. The activation probability is not explicitly represented. By doing this, we are essentially asking the network to represent the absence of a capsule by some special value of its pose.

3 Inverted Dot-Product Attention Routing
The proposed routing process consists of two steps. The first step computes the agreement between lower-level capsules and higher-level capsules. The second step updates the poses of the higher-level capsules.
Step 1: Computing Agreement: To determine how capsule $i$ in layer $L$ agrees with capsule $j$ in layer $L+1$, we first transform the pose $\mathbf{p}_i^{L}$ into a vote $\mathbf{v}_{ij}^{L+1}$ for the pose $\mathbf{p}_j^{L+1}$. This transformation is done using a learned transformation matrix $\mathbf{W}_{ij}$:

$$\mathbf{v}_{ij}^{L+1} = \mathbf{W}_{ij}\,\mathbf{p}_i^{L} \qquad (1)$$

where $\mathbf{W}_{ij} \in \mathbb{R}^{d^{L+1} \times d^{L}}$ if the pose has a vector structure, and $\mathbf{W}_{ij} \in \mathbb{R}^{\sqrt{d} \times \sqrt{d}}$ (which requires $d^{L} = d^{L+1} = d$) if the pose has a matrix structure. Next, the agreement $a_{ij}$ is computed as the dot-product similarity between the pose $\mathbf{p}_j^{L+1}$ and the vote $\mathbf{v}_{ij}^{L+1}$:

$$a_{ij} = {\mathbf{p}_j^{L+1}}^{\top}\,\mathbf{v}_{ij}^{L+1} \qquad (2)$$

The pose $\mathbf{p}_j^{L+1}$ is obtained from the previous iteration of this procedure and is initialized to $\mathbf{0}$.

Step 2: Computing Poses: The agreement scores are passed through a softmax function to determine the routing probabilities $r_{ij}$:

$$r_{ij} = \frac{\exp(a_{ij})}{\sum_{j'} \exp(a_{ij'})} \qquad (3)$$

where $r_{ij}$ is an inverted attention score representing how higher-level capsules compete for the attention of lower-level capsules. Using the routing probabilities, we update the pose $\mathbf{p}_j^{L+1}$ of capsule $j$ in layer $L+1$ from all capsules in layer $L$:

$$\mathbf{p}_j^{L+1} = \mathrm{LayerNorm}\!\left(\sum_i r_{ij}\,\mathbf{v}_{ij}^{L+1}\right) \qquad (4)$$
We adopt Layer Normalization (Ba et al., 2016) as the normalization in Equation (4), which we empirically find improves the convergence of routing. The routing algorithm is summarized in Procedure 1 and Figure 2.
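To make the two steps concrete, here is a minimal NumPy sketch of one routing iteration. This is our own illustrative re-implementation, not the authors' released code; the array names (`votes`, `parent_poses`) are ours, and the LayerNorm omits the learnable gain and bias for brevity.

```python
import numpy as np

def routing_iteration(votes, parent_poses):
    """One iteration of inverted dot-product attention routing.

    votes:        (n_child, n_parent, d) array -- vote of child i for parent j
    parent_poses: (n_parent, d) array -- parent poses from the previous iteration
    Returns the updated parent poses, shape (n_parent, d).
    """
    # Step 1: agreement between each parent pose and each child's vote
    agreement = np.einsum('ijd,jd->ij', votes, parent_poses)
    # Step 2a: softmax over parents j -- parents compete for each child i
    shifted = agreement - agreement.max(axis=1, keepdims=True)
    routing = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
    # Step 2b: aggregate votes weighted by routing, then LayerNorm per parent
    pooled = np.einsum('ij,ijd->jd', routing, votes)
    mu = pooled.mean(axis=-1, keepdims=True)
    sigma = pooled.std(axis=-1, keepdims=True)
    return (pooled - mu) / (sigma + 1e-5)

# Parent poses start at zero, so the first softmax is uniform over parents.
votes = np.random.default_rng(0).standard_normal((12, 4, 16))
poses = np.zeros((4, 16))
for _ in range(3):
    poses = routing_iteration(votes, poses)
```

Because the poses are initialized to zero, the first iteration distributes each child's vote uniformly over the parents; the agreement scores then sharpen the routing in subsequent iterations.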
4 Inference and Learning
To explain how inference and learning are performed, we use Figure 1 as an example. Note that the choice of backbone, the number of capsule layers, the number of capsules per layer, and the design of the classifier may vary across experiments. We discuss the configurations in Sections 5 and 6 and in the Appendix.
4.1 Inference
For ease of exposition, we decompose a CapsNet into pre-capsule, capsule, and post-capsule layers.
Pre-Capsule Layers: The goal is to obtain a backbone feature from the input image. The backbone model can be either a single convolutional layer or ResNet computational blocks (He et al., 2016).
Capsule Layers: The primary capsules are computed by applying a convolutional layer and Layer Normalization to the backbone feature. The non-primary capsule layers are initialized to zeros. (Compared to zero initialization, we observe that random initialization leads to similar converged performance but slower convergence. We also tried learning biases for the capsules' initialization, which results in similar converged performance and the same convergence speed. We therefore initialize the capsules' values to zero for simplicity.) For the first iteration, we perform one step of routing sequentially in each capsule layer: the primary capsules are used to update their parent convolutional capsules, which are then used to update the next higher-level capsule layer, and so on. After this first pass, the remaining routing iterations are performed concurrently. Specifically, all capsule layers look at their preceding lower-level capsule layer and perform one step of routing simultaneously. This procedure is an example of a parallel-in-time inference method. We call it "concurrent routing" as it concurrently performs routing between capsule layers at each iteration, leading to better parallelism. Figure 3 illustrates this procedure from one routing iteration to the next. It is worth noting that our proposed variant of CapsNet is a weight-tied concurrent-routing architecture with Layer Normalization, which Bai et al. (2019) empirically showed can converge to fixed points.
Previous CapsNets (Sabour et al., 2017; Hinton et al., 2018) used sequential layer-wise iterative routing between the capsule layers: the model first performs routing between one pair of adjacent layers for a few iterations, then between the next pair of layers for a few iterations, and so on. When unrolled, this sequential iterative routing defines a very deep computational graph with a single path from the inputs to the outputs. Such a deep graph can lead to vanishing gradients and limit the depth of a CapsNet that can be trained well, especially if squashing nonlinearities are present. With concurrent routing, training can be made more stable, since each iteration has a more cumulative effect.
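Concurrent routing can be sketched as follows for two stacked capsule layers. This is a hypothetical stand-in: random matrices play the role of the learned transformations, and all sizes and names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def routing_step(votes, parent_poses):
    # One inverted dot-product attention routing step (Section 3),
    # with a plain LayerNorm (no learnable gain/bias) for brevity.
    a = np.einsum('ijd,jd->ij', votes, parent_poses)
    e = np.exp(a - a.max(axis=1, keepdims=True))
    r = e / e.sum(axis=1, keepdims=True)
    p = np.einsum('ij,ijd->jd', r, votes)
    return (p - p.mean(-1, keepdims=True)) / (p.std(-1, keepdims=True) + 1e-5)

def votes_for(child_poses, W):
    # One weight matrix per child-parent pair: vote = W_ij @ p_i
    return np.einsum('ijde,ie->ijd', W, child_poses)

d = 16
primary = rng.standard_normal((12, d))          # fixed primary capsules
W1 = 0.1 * rng.standard_normal((12, 8, d, d))   # primary -> layer-1 weights
W2 = 0.1 * rng.standard_normal((8, 4, d, d))    # layer-1 -> layer-2 weights

# Iteration 1: one sequential pass up the hierarchy from zero-initialized poses.
caps1 = routing_step(votes_for(primary, W1), np.zeros((8, d)))
caps2 = routing_step(votes_for(caps1, W2), np.zeros((4, d)))

# Remaining iterations: all layers update concurrently, each reading only the
# previous iteration's state of the layer below.
for _ in range(3):
    caps1, caps2 = (routing_step(votes_for(primary, W1), caps1),
                    routing_step(votes_for(caps1, W2), caps2))
```

Note how, after the first pass, no layer waits for another within an iteration; each reads only last-iteration states, which is what makes the scheme parallel-in-time.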
Post-Capsule Layers: The goal is to obtain the predicted class logits from the last capsule layer (the class capsules). In our CapsNet, we apply a linear classifier to each class capsule's pose to obtain the corresponding class logit. This classifier is shared across all the class capsules.
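As an illustration of the shared classifier head (sizes and values hypothetical), a single linear map produces one logit per class capsule:

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, d = 10, 16
class_caps = rng.standard_normal((n_classes, d))  # one pose per class

# One shared (d -> 1) linear map yields a logit per class capsule.
w = rng.standard_normal(d)
b = 0.0
logits = class_caps @ w + b                        # shape (10,)

# Softmax over classes gives the classification probabilities.
probs = np.exp(logits - logits.max())
probs /= probs.sum()
```

Sharing `w` and `b` across capsules keeps the head small and forces each class capsule's pose, rather than a per-class readout, to carry the class evidence.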
4.2 Learning
We update the parameters by stochastic gradient descent. For multi-class classification, we use the multi-class cross-entropy loss. For multi-label classification, we use the binary cross-entropy loss. We also tried the Margin loss and Spread loss introduced by prior work (Sabour et al., 2017; Hinton et al., 2018); however, these losses did not give us better performance than the cross-entropy and binary cross-entropy losses.

4.3 Comparisons with Existing CapsNet Models
Having described our model in detail, we can now place the model in the context of previous work. In the following table, we list the major differences among different variants of CapsNets.
5 Experiments on CIFAR10 and CIFAR100
Method  Backbone  Test Accuracy (# of parameters)  
CIFAR10  CIFAR100  
Dynamic Routing (Sabour et al., 2017)  simple  84.08% (7.99M)  56.96% (31.59M) 
EM Routing (Hinton et al., 2018)  simple  82.19% (0.45M)  37.73% (0.50M) 
Inverted Dot-Product Attention Routing (ours)  simple  85.17% (0.56M)  57.32% (1.46M) 
Dynamic Routing (Sabour et al., 2017)  ResNet  92.65% (12.45M)  71.70% (36.04M) 
EM Routing (Hinton et al., 2018)  ResNet  92.15% (1.71M)  58.08% (1.76M) 
Inverted Dot-Product Attention Routing (ours)  ResNet  95.14% (1.83M)  78.02% (2.80M) 
Baseline CNN (simple)  87.10% (18.92M)  62.30% (19.01M)  
ResNet18 (He et al., 2016)  95.11% (11.17M)  77.92% (11.22M) 
The CIFAR10 and CIFAR100 datasets (Krizhevsky et al., 2009) consist of small 32x32 real-world color images, with 50,000 images for training and 10,000 for evaluation. CIFAR10 has 10 classes, and CIFAR100 has 100 classes. We choose these natural image datasets to demonstrate our method since they correspond to a more complex data distribution than digit images.
Comparisons with other CapsNets and CNNs: In Table 1, we report the test accuracy obtained by our model, along with other CapsNets and CNNs. Two prior CapsNets are chosen: Dynamic Routing CapsNets (Sabour et al., 2017) and EM Routing CapsNets (Hinton et al., 2018)
. For each CapsNet, we apply two backbone feature models: a simple convolution followed by a ReLU nonlinearity, and a ResNet (He et al., 2016) backbone. For CNNs, we consider a baseline CNN with convolutional layers followed by a fully-connected classifier layer. ResNet18 is selected as a representative of SOTA CNNs. See Appendix A.1 for detailed configurations.

First, we compare previous routing approaches against ours. As a general trend, the proposed CapsNets perform better than the Dynamic Routing CapsNets, and the Dynamic Routing CapsNets perform better than the EM Routing CapsNets. The performance differs more on CIFAR100 than on CIFAR10. For example, with the simple convolutional backbone, the EM Routing CapsNet achieves only 37.73% test accuracy on CIFAR100, while ours achieves 57.32%. Additionally, for all CapsNets, we see improved performance when replacing the single convolutional backbone with a ResNet backbone. This result is not surprising, since the ResNet structure generalizes better than a single convolutional layer. In terms of parameter count, ours and the EM Routing CapsNets have far fewer parameters than the Dynamic Routing CapsNets. The reason lies in the different pose structures: ours and the EM Routing CapsNets use matrix-structured poses, while the Dynamic Routing CapsNets use vector-structured poses. With a matrix structure, the transformation between a pair of capsules needs only a $\sqrt{d} \times \sqrt{d}$ weight matrix, with $d$ being the pose dimension; with a vector structure, it needs a full $d \times d$ matrix. To conclude, combining the proposed Inverted Dot-Product Attention Routing with a ResNet backbone gives us both a low number of parameters and high performance.
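The parameter saving from matrix-structured poses is easy to check numerically. Assuming a pose with d = 16 hidden units (a 4x4 matrix), the per-pair transformation costs are:

```python
# Weights needed for one child-parent capsule pair, pose dimension d = 16.
d = 16
vector_pose_params = d * d               # full d x d transformation: 256 weights
matrix_pose_params = int(d ** 0.5) ** 2  # sqrt(d) x sqrt(d) transformation: 16 weights

assert vector_pose_params == 256
assert matrix_pose_params == 16
# A 16x reduction per pair, which compounds over every capsule pair in a layer.
```

The reduction multiplies across all child-parent pairs, which is why the matrix-pose models in Table 1 are an order of magnitude smaller.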
Second, we discuss the performance difference between CNNs and CapsNets. We see that, with a simple backbone (a single convolutional layer), it is hard for CapsNets to reach the same performance as CNNs. For instance, our routing approach achieves only 57.32% test accuracy on CIFAR100, while the baseline CNN achieves 62.30%. However, with a SOTA backbone structure (ResNet), the proposed routing approach reaches performance (95.14% on CIFAR10) competitive with the SOTA CNN model (ResNet18, with 95.11% on CIFAR10).
Convergence Analysis: In Figure 4, top row, we analyze the convergence of CapsNets with respect to the number of routing iterations. The optimization hyperparameters are chosen optimally for each routing mechanism. For Dynamic Routing CapsNets (Sabour et al., 2017), we observe a mild performance drop as the number of iterations increases. For EM Routing CapsNets (Hinton et al., 2018), there is a single best-performing number of iterations; increasing or decreasing it severely hurts performance. For our proposed routing mechanism, we find a positive correlation between performance and the number of routing iterations. The performance variance is also the smallest among the three routing mechanisms. This result suggests our approach enjoys better optimization and more stable inference. However, selecting a large iteration number may not be ideal, since memory usage and inference time also increase (shown in the bottom right of Figure 4). Note that we observe sharp performance jitters during training before the model has converged (especially when the number of iterations is high). This phenomenon is due to applying LayerNorm to a low-dimensional vector; the jittering is reduced when we increase the pose dimension in capsules.

Ablation Study: Furthermore, we inspect our routing approach with the following ablations: 1) Inverted Dot-Product Attention-A: without Layer Normalization; 2) Inverted Dot-Product Attention-B: replacing concurrent routing with sequential iterative routing; and 3) Inverted Dot-Product Attention-C: adding activations in capsules (we consider the same kind of capsule activations as described in EM Routing CapsNets (Hinton et al., 2018)). The results are presented in Figure 4, bottom row. When Layer Normalization is removed, performance drops dramatically; notably, the prediction becomes uniform as the iteration number increases. This result implies that the normalization step is crucial to the stability of our method. When concurrent routing is replaced with sequential iterative routing, the positive correlation between performance and iteration number no longer holds. The same happens in the Dynamic Routing CapsNet, which also uses sequential iterative routing. When activations are added to our capsule design, performance deteriorates. Typically, squashing activations such as sigmoids make it harder for gradients to flow, which might explain this. Discovering the best strategy to incorporate activations in capsule networks is an interesting direction for future work.
6 Experiments on DiverseMultiMNIST
The goal of this section is to compare CapsNets and CNNs when they have the same number of layers and the same number of neurons per layer. Specifically, we would like to examine the difference in representation power between the routing mechanism (in CapsNets) and the pooling operation (in CNNs). We consider a challenging setting in which objects may overlap with each other and the number of objects per image may vary. To this end, we construct the DiverseMultiMNIST dataset, which extends MNIST (LeCun et al., 1998) and contains both single-digit images and images of two overlapping digits. The task is multi-label classification, where a prediction is correct if and only if the recognized digits match all the digits in the image. We plot the convergence curves as the models train on images from DiverseMultiMNIST. Please see Appendix B.2 for more details on the dataset and Appendix B.1 for detailed model configurations. The results are reported in Figure 5.
First, we compare our routing method against Dynamic Routing. We observe improved performance from the Dynamic Routing CapsNet to our CapsNet with vector-structured poses. The result suggests better viewpoint generalization for our routing mechanism.
Second, we compare the baseline CNN against our CapsNet. We see that the CapsNet has better test accuracy than the CNN; for example, the CapsNet with vector-structured poses outperforms the baseline CNN. In our CNN implementation, we use average pooling from the last convolutional layer to the following fully-connected layer, so having a routing mechanism works better than pooling. However, one may argue that the pooling operation requires no extra parameters while the routing mechanism does, and hence the comparison may not be fair. To address this issue, we replace the pooling operation in the baseline CNN with a fully-connected operation. To be more precise, instead of using average pooling, we learn the entire transformation matrix from the last convolutional layer to the following fully-connected layer. This can be regarded as pooling with learnable parameters. After doing this, the number of parameters in the CNN increases, but its test accuracy is still lower than that of the CapsNet. We conclude that, when recognizing overlapping objects and a diverse number of objects, the routing mechanism has better representation power than the pooling operation.
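The parameter-matched pooling baseline can be sketched as follows; the feature-map size is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
feat = rng.standard_normal((6, 6, 640))  # last conv feature map (hypothetical size)

# Average pooling: zero parameters.
pooled = feat.mean(axis=(0, 1))          # shape (640,)

# "Pooling with learnable parameters": a full linear map over every spatial
# position, as in the parameter-matched CNN baseline described above.
W = 0.01 * rng.standard_normal((6 * 6 * 640, 640))
learned = feat.reshape(-1) @ W           # shape (640,)
```

The learned map subsumes average pooling as a special case (a `W` with constant 1/36 blocks on the matching channels), so any accuracy gap cannot be explained by the baseline lacking capacity.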
Last, we compare CapsNets with different pose structures. The CapsNet with vector-structured poses works better than the CapsNet with matrix-structured poses. However, the former requires more parameters, more memory, and more inference time. If we increase the number of parameters in the matrix-pose CapsNet, its test accuracy rises accordingly; nevertheless, the model then requires even more memory and inference time than the vector-structured version. We conclude that more performance can be extracted from vector-structured poses, but at the cost of higher memory usage and inference time.
7 Related Work
The idea of grouping a set of neurons into a capsule was first proposed in Transforming Auto-Encoders (Hinton et al., 2011). There, capsules represented recognized multi-scale fragments of the input images. Given the transformation matrix, Transforming Auto-Encoders learned to discover capsules' instantiation parameters from an affine-transformed image pair. Sabour et al. (2017) extended this idea to systematically learn part-whole relationships in images. Hinton et al. (2018) cast the routing mechanism as fitting a mixture of Gaussians; the model demonstrated an impressive ability to recognize objects from novel viewpoints. Recently, Stacked Capsule Auto-Encoders (Kosiorek et al., 2019) proposed to segment and compose image fragments without any supervision, achieving SOTA results on unsupervised classification. However, despite showing promising applications by leveraging inherent structures in images, capsule networks have so far been applied only to datasets of limited complexity. Our proposed routing mechanism instead attempts to apply capsule networks to more complex data.
Our model also relates to Transformers (Vaswani et al., 2017) and Set Transformers (Lee et al., 2019), where dotproduct attention is also used. In the language of capsules, a Set Transformer can be seen as a model in which a higherlevel unit can choose to pay attention to lowerlevel units (using attention heads). Our model inverts the attention direction (lowerlevel units “attend” to parents), enforces exclusivity among routing to parents and does not impose any limits on how many lowerlevel units can be routed to any parent. Therefore, it combines the ease and parallelism of dotproduct routing derived from a Transformer, with the interpretability of building a hierarchical parsing of a scene derived from capsule networks.
There are other works presenting different routing mechanisms for capsules. Wang & Liu (2018) formulated Dynamic Routing (Sabour et al., 2017) as an optimization problem consisting of a clustering loss and a KL regularization term. Zhang et al. (2018) generalized the routing method within the framework of weighted kernel density estimation. Li et al. (2018) approximated the routing process with two branches and constrained the distributions between capsule layers with an optimal transport divergence. Phaye et al. (2018) replaced the standard convolutional structures before the capsule layers with densely connected convolutions; it is worth noting that this work was the first to combine SOTA CNN backbones with capsule layers. Rajasegaran et al. (2019) proposed DeepCaps by stacking capsule layers, achieving what was previously the best test accuracy on CIFAR10 for capsule networks. Instead of looking for agreement between capsule layers, Choi et al. (2019) proposed to learn deterministic attention scores only from lower-level capsules; nevertheless, without agreement, their best-performing model achieved a lower test accuracy on CIFAR10. In contrast to these prior works, we present a combination of inverted dot-product attention routing, layer normalization, and concurrent routing. To the best of our knowledge, we are the first to show that capsule networks can achieve comparable performance against SOTA CNNs: in particular, we achieve 95.14% test accuracy on CIFAR10 and 78.02% on CIFAR100.

8 Conclusion and Future Work
In this work, we propose a novel Inverted Dot-Product Attention Routing algorithm for Capsule Networks. Our method determines the routing probability directly from the agreement between parent and child capsules. Routing algorithms from prior work require child capsules to be explained by parent capsules; by removing this constraint, we achieve competitive performance against SOTA CNN architectures on CIFAR10 and CIFAR100 while using a low number of parameters. We believe it is no longer impractical to apply capsule networks to datasets with complex data distributions. Two future directions extend from this paper:

In the experiments, we show how capsule layers can be combined with SOTA CNN backbones. Finding the optimal combination of SOTA CNN structures and capsule layers may be the key to scaling up to much larger datasets such as ImageNet.

The proposed concurrent routing is a parallel-in-time and weight-tied inference process. Its strong connection with Deep Equilibrium Models (Bai et al., 2019) can potentially lead us to infinite-iteration routing.
References
 Ba et al. (2016) Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
 Bai et al. (2019) Shaojie Bai, J. Zico Kolter, and Vladlen Koltun. Deep equilibrium models. In Neural Information Processing Systems (NeurIPS), 2019.
 Choi et al. (2019) Jaewoong Choi, Hyun Seo, Suee Im, and Myungju Kang. Attention routing between capsules. arXiv preprint arXiv:1907.01750, 2019.

 He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.
 Hinton et al. (2011) Geoffrey E Hinton, Alex Krizhevsky, and Sida D Wang. Transforming auto-encoders. In International Conference on Artificial Neural Networks, pp. 44–51. Springer, 2011.
 Hinton et al. (2018) Geoffrey E Hinton, Sara Sabour, and Nicholas Frosst. Matrix capsules with EM routing. In International Conference on Learning Representations, 2018.
 Kingma & Ba (2014) Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
 Kosiorek et al. (2019) Adam R Kosiorek, Sara Sabour, Yee Whye Teh, and Geoffrey E Hinton. Stacked capsule autoencoders. arXiv preprint arXiv:1906.06818, 2019.
 Krizhevsky et al. (2009) Alex Krizhevsky et al. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.
 LeCun et al. (1998) Yann LeCun, Léon Bottou, Yoshua Bengio, Patrick Haffner, et al. Gradientbased learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.

 Lee et al. (2019) Juho Lee, Yoonho Lee, Jungtaek Kim, Adam Kosiorek, Seungjin Choi, and Yee Whye Teh. Set transformer: A framework for attention-based permutation-invariant neural networks. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pp. 3744–3753, 2019.
 Li et al. (2018) Hongyang Li, Xiaoyang Guo, Bo Dai, Wanli Ouyang, and Xiaogang Wang. Neural network encapsulation. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 252–267, 2018.
 Phaye et al. (2018) Sai Samarth R Phaye, Apoorva Sikka, Abhinav Dhall, and Deepti Bathula. Dense and diverse capsule networks: Making the capsules learn better. arXiv preprint arXiv:1805.04001, 2018.
 Rajasegaran et al. (2019) Jathushan Rajasegaran, Vinoj Jayasundara, Sandaru Jayasekara, Hirunima Jayasekara, Suranga Seneviratne, and Ranga Rodrigo. Deepcaps: Going deeper with capsule networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 10725–10733, 2019.
 Sabour et al. (2017) Sara Sabour, Nicholas Frosst, and Geoffrey E Hinton. Dynamic routing between capsules. In Advances in neural information processing systems, pp. 3856–3866, 2017.
 Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information processing systems, pp. 5998–6008, 2017.
 Wang & Liu (2018) Dilin Wang and Qiang Liu. An optimization view on dynamic routing between capsules. In International Conference on Learning Representations, Workshop Track, 2018.

 Zhang et al. (2018) Suofei Zhang, Quan Zhou, and Xiaofu Wu. Fast dynamic routing based on weighted kernel density estimation. In International Symposium on Artificial Intelligence and Robotics, pp. 301–309. Springer, 2018.
Appendix A Model Configurations for CIFAR10/CIFAR100
A.1 Model Specifications
The configurations of the Dynamic Routing CapsNets and EM Routing CapsNets follow prior work (Sabour et al., 2017; Hinton et al., 2018). We empirically find their own configurations perform best for their routing mechanisms (instead of applying our network configuration to their routing mechanisms). The optimizers are chosen to reach the best performance for all models. We list the model specifications in Tables 2, 3, 4, 5, 6, 7, 8, and 9.
We only show the specifications for CapsNets with a simple convolutional backbone. When considering a ResNet backbone, two modifications are made. First, we replace the simple feature backbone with a ResNet feature backbone. Then, the input dimension of the weights after the backbone is set to match the ResNet output dimension. The ResNet backbone contains an initial convolutional layer, followed by three residual building blocks and then four residual building blocks (He et al., 2016).

For the optimizers, we use stochastic gradient descent for our proposed method, the baseline CNN, and ResNet18 (He et al., 2016), and Adam (Kingma & Ba, 2014) for the Dynamic Routing and EM Routing CapsNets. We decrease the learning rate twice at fixed epochs during training.
A.2 Data Augmentations
We use the same data augmentation for all networks. During training, we first pad four zero-valued pixels to each image and randomly crop the image back to size 32x32. Then, we randomly flip the image horizontally. During evaluation, we perform no data augmentation. All models are trained on an 8-GPU machine.

Appendix B Model Configurations for DiverseMultiMNIST
B.1 Model Specifications
To fairly compare CNNs and CapsNets, we fix the number of layers and the number of neurons per layer across the models. The models follow the design: 36x36 image → 18x18x1024 neurons → 8x8x1024 neurons → 6x6x1024 neurons → 640 neurons → 10 class logits. The configurations are presented in Tables 10, 11, and 12. We also fix the optimizer across all the models: stochastic gradient descent, with the learning rate decayed twice at fixed steps during training. One step corresponds to one batch of training samples.
B.2 Dataset Construction
DiverseMultiMNIST contains both single-digit and overlapping-digit images. We generate training images on the fly and plot the test accuracy as the models train over the generated images. We also generate test images for each evaluation step, and we make sure the training and test images are drawn from disjoint sets. In the following, we present how the images are generated. A generated image is a single-digit image with one fixed probability and an overlapping-digit image with the complementary probability.
A single-digit image in the DiverseMultiMNIST training/test set is generated by shifting a digit from the MNIST (LeCun et al., 1998) training/test set. Each digit is shifted up to four pixels in each direction, resulting in a 36x36 image.
Following Sabour et al. (2017), we generate overlapping-digit images in the DiverseMultiMNIST training/test set by overlaying two digits from the same training/test split of MNIST. The two digits are selected from different classes. Before overlaying, each digit is shifted in the same way as for single-digit images. After overlapping, the generated image again has size 36x36.
We use no data augmentation for either training or evaluation. All models are trained on an 8-GPU machine.
Operation  Output Size 

input_dim=3, output_dim=1024, 3x3 conv, stride=2, padding=1  16x16x1024 
ReLU  
input_dim=1024, output_dim=1024, 3x3 conv, stride=2, padding=1  8x8x1024 
ReLU + Batch Norm  
2x2 average pooling, padding=0  4x4x1024 
input_dim=1024, output_dim=1024, 3x3 conv, stride=2, padding=1  2x2x1024 
ReLU + Batch Norm  
2x2 average pooling, padding=0  1x1x1024 
Flatten  1024 
input_dim=1024, output_dim=10, linear  10 
Operation  Output Size 

input_dim=3, output_dim=1024, 3x3 conv, stride=2, padding=1  16x16x1024 
ReLU  
input_dim=1024, output_dim=1024, 3x3 conv, stride=2, padding=1  8x8x1024 
ReLU + Batch Norm  
2x2 average pooling, padding=0  4x4x1024 
input_dim=1024, output_dim=1024, 3x3 conv, stride=2, padding=1  2x2x1024 
ReLU + Batch Norm  
2x2 average pooling, padding=0  1x1x1024 
Flatten  1024 
input_dim=1024, output_dim=100, linear  100 
Operation  Output Size 

input_dim=3, output_dim=256, 9x9 conv, stride=1, padding=0  24x24x256 
ReLU  
input_dim=256, output_dim=256, 9x9 conv, stride=2, padding=0  8x8x256 
Capsules reshape  8x8x32x8 
Squash  
Capsules flatten  2048x8 
Linear Dynamic Routing to 10 16dim. capsules  10x16 
Squash 
Operation  Output Size 

input_dim=3, output_dim=256, 9x9 conv, stride=1, padding=0  24x24x256 
ReLU  
input_dim=256, output_dim=256, 9x9 conv, stride=2, padding=0  8x8x256 
Capsules reshape  8x8x32x8 
Squash  
Capsules flatten  2048x8 
Linear Dynamic Routing to 100 16dim. capsules  100x16 
Squash 
Operation  Output Size  

input_dim=3, output_dim=256, 4x4 conv, stride=2, padding=1  16x16x256  
Batch Norm + ReLU  



Capsules reshape (only for poses) 


Conv EM Routing to 32 4x4dim. capsules, 3x3 conv, stride=2 


Conv EM Routing to 32 4x4dim. capsules, 3x3 conv, stride=1 


Capsules flatten 


Linear EM Routing to 10 4x4dim. capsules 

Operation  Output Size  

input_dim=3, output_dim=256, 4x4 conv, stride=2, padding=1  16x16x256  
Batch Norm + ReLU  



Capsules reshape (only for poses) 


Conv EM Routing to 32 4x4dim. capsules, 3x3 conv, stride=2 


Conv EM Routing to 32 4x4dim. capsules, 3x3 conv, stride=1 


Capsules flatten 


Linear EM Routing to 100 4x4dim. capsules 

Operation  Output Size  
input_dim=3, output_dim=256, 3x3 conv, stride=2, padding=1  16x16x256  
ReLU  

16x16x512  
Capsules reshape  16x16x32x4x4  

7x7x32x4x4  

5x5x32x4x4  
Capsules flatten  800x4x4  

10x4x4  
Reshape  10x16  
input_dim=16, output_dim=1, linear  10x1  
Reshape  10 
Operation  Output Size  
input_dim=3, output_dim=128, 3x3 conv, stride=2, padding=1  16x16x128  
ReLU  

16x16x1152  
Capsules reshape  16x16x32x6x6  

7x7x32x6x6  

5x5x32x6x6  
Capsules flatten  800x6x6  

20x6x6  

100x6x6  
Reshape  100x36  
input_dim=36, output_dim=1, linear  100x1  
Reshape  100 
Operation  Output Size  
input_dim=3, output_dim=1024, 3x3 conv, stride=2, padding=1  18x18x1024  
ReLU  

8x8x1024  
Capsules reshape  8x8x16x8x8  

6x6x16x8x8  
Capsules flatten  576x8x8  

10x8x8  
Reshape  10x64  

10x1  
Reshape  10 
Operation  Output Size  
input_dim=3, output_dim=1024, 3x3 conv, stride=2, padding=1  18x18x1024  
ReLU  

8x8x1024  
Capsules reshape  8x8x16x64  

6x6x16x64  
Capsules flatten  576x64  

10x64  

10x1  
Reshape  10 
Operation  Output Size  
input_dim=3, output_dim=1024, 3x3 conv, stride=2, padding=1  18x18x1024  
ReLU  

8x8x1024  

6x6x1024  
input_dim=1024, output_dim=640, linear  6x6x640  
6x6 average pooling, padding=0  1x1x640  
Flatten  640  

10 