Implementation of ResMLP, an all MLP solution to image classification, in Pytorch
We present ResMLP, an architecture built entirely upon multi-layer perceptrons for image classification. It is a simple residual network that alternates (i) a linear layer in which image patches interact, independently and identically across channels, and (ii) a two-layer feed-forward network in which channels interact independently per patch. When trained with a modern training strategy using heavy data-augmentation and optionally distillation, it attains surprisingly good accuracy/complexity trade-offs on ImageNet. We will share our code based on the Timm library and pre-trained models.
Recently, the transformer architecture vaswani2017attention , adapted from its original use in natural language processing with only minor changes, has achieved performance competitive with the state of the art on ImageNet-1k Russakovsky2015ImageNet12 when pre-trained with a sufficiently large amount of data dosovitskiy2020image .
Retrospectively, this achievement is yet another step towards fewer priors: convolutional neural networks had removed many hand-made choices compared to hand-designed pre-CNN approaches, moving the paradigm from hard-wired features to hand-designed architectural choices. Vision transformers avoid the assumptions inherent to convolutional architectures, most notably translation invariance.
What these recent transformer-based works suggest is that longer training schedules, more parameters, more data dosovitskiy2020image and/or more regularization Touvron2020TrainingDI , are sufficient to recover the important priors for tasks as complex as ImageNet classification. See also our discussion of related work in Section 4. This concurs with recent studies bello2021revisiting ; dollar2021fast that better disentangle the benefits from the architectures from those of the training scheme.
In this paper, we push this trend further, and propose Residual Multi-Layer Perceptrons (ResMLP): a purely multi-layer perceptron (MLP) based architecture for image classification. We outline our architecture in Figure 1 and detail it further in Section 2. It is intended to be simple: it takes flattened patches as input, projects them with a linear layer, and updates them in turn with two residual operations: (i) a simple linear layer that provides interaction between the patches, applied to all channels independently; and (ii) an MLP with a single hidden layer, applied to all patches independently. At the end of the network, the patches are average pooled and fed to a linear classifier.
This architecture is strongly inspired by the vision transformers (ViT) dosovitskiy2020image , yet it is much simpler in several ways: we do not use any form of attention, only linear layers along with the GELU non-linearity. Since our architecture is much more stable to train than transformers, we do not need batch-specific or cross-channel normalizations such as BatchNorm, GroupNorm or LayerNorm. Our training procedure mostly follows the one initially introduced for DeiT Touvron2020TrainingDI and CaiT touvron2021going .
Due to its linear nature, the patch interactions in our model can be easily visualised and interpreted. While the interaction pattern learned in the first layer is very similar to a small convolutional filter, we observe more subtle interactions across patches in deeper layers. These include some form of axial filters, and long-range interactions early in the network.
In summary, in this paper, we show that
despite their simplicity, Residual Multi-Layer Perceptrons can reach surprisingly good accuracy/complexity trade-offs with ImageNet-1k training only, without requiring normalization based on batch or channel statistics (concurrent work by Tolstikhin et al. tolstikhin2021MLPMixer brings complementary insights to ours: they achieve interesting performance with larger MLP models pre-trained on the larger public ImageNet-21k and even more data with the proprietary JFT-300M, while we focus on faster models trained on ImageNet-1k; another concurrent work is the report by Melas-Kyriazi melaskyriazi2021doyoueven );
these models benefit significantly from distillation methods Touvron2020TrainingDI ;
thanks to its design, in which patch embeddings simply “communicate” through a linear layer, we can make observations on what kind of spatial interactions the network learns across layers.
Our model, depicted in Figure 1, is inspired by the ViT model, of which it adopts the patch flattening structure. We proceed to drastic simplifications. We refer the reader to Dosovitskiy et al. dosovitskiy2020image for more details about the ViT architecture.
Our model, denoted by ResMLP, takes a grid of non-overlapping patches as input, where the patch size is typically 16×16. The patches are then independently passed through a linear layer to form a set of d-dimensional embeddings.
The resulting set of embeddings is fed to a sequence of Residual Multi-Layer Perceptron layers to produce a set of d-dimensional output embeddings. These output embeddings are then averaged into a single d-dimensional vector representing the image, which is fed to a linear classifier to predict the label associated with the image. Training uses the cross-entropy loss.
Our network is a sequence of layers that all have the same structure: a linear sublayer followed by a feedforward sublayer. Similar to the Transformer layer, each sublayer is paralleled with a skip-connection he2016deep . We do not apply Layer Normalization ba2016layer because training is stable without it when using the following Affine transformation:

Aff_{α,β}(x) = Diag(α) x + β,

where α and β are learnable vectors. This operation simply rescales and shifts the input component-wise. Moreover, it has no cost at inference time, as it can be fused into the adjacent linear layer. Note that, when writing Aff(X), the operation is applied independently to each column of the matrix X. While similar to BatchNorm Ioffe2015BatchNA and Layer Normalization ba2016layer , the Aff operator does not depend on any batch statistics. It is therefore closer to the recent LayerScale method touvron2021going , which improves the optimization of deep transformers when α is initialized to a small value. Note that LayerScale does not have a bias term.
We apply this transformation twice in each residual block. As a pre-normalization, Aff replaces LayerNormalization and avoids relying on channel-wise statistics; here, we initialize α = 1 and β = 0. As a post-processing of the residual block, Aff implements LayerScale, and we therefore follow the same small-value initialization of α as touvron2021going for the post-normalization. Both transformations are integrated into the adjacent linear layers at inference.
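Because Aff is an affine per-channel map, folding it into the adjacent linear layer at inference time is a simple weight rescaling. A minimal numpy sketch (illustrative only, with made-up names and sizes; not the authors' code) that checks the fusion:

```python
import numpy as np

def aff(x, alpha, beta):
    # Aff_{alpha,beta}(x) = Diag(alpha) x + beta, applied per channel
    return alpha * x + beta

rng = np.random.default_rng(0)
d = 4
x = rng.standard_normal((5, d))            # 5 embeddings of dimension d
alpha, beta = rng.standard_normal(d), rng.standard_normal(d)
W, b = rng.standard_normal((d, d)), rng.standard_normal(d)

# Aff followed by a linear layer y = W z + b ...
y_two_steps = aff(x, alpha, beta) @ W.T + b
# ... equals one linear layer with rescaled weights and a shifted bias,
# which is why Aff has no cost at inference time
W_fused = W * alpha                        # scale each input column of W
b_fused = b + beta @ W.T
y_fused = x @ W_fused.T + b_fused
assert np.allclose(y_two_steps, y_fused)
```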
Overall, our Multi-perceptron layer takes a set of d-dimensional input features stacked in a matrix X, and outputs a set of d-dimensional output features, stacked in a matrix Y, with the following set of transformations:

Z = X + Aff( (A Aff(X)ᵀ)ᵀ ),
Y = Z + Aff( C GELU(B Aff(Z)) ),

where A, B and C are the main learnable parameters of the layer. The parameter matrix A is square, with one row and one column per patch location, i.e., this sublayer mixes the information from all the locations, while the feedforward sublayer works per location. As a consequence, the intermediate activation matrix Z has the same dimensions as the input and output matrices X and Y. Finally, the parameter matrices B and C have the same dimensions as in a Transformer layer, namely 4d×d and d×4d respectively.
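The two residual sublayers above can be sketched in a few lines of numpy (a simplified illustration with made-up shapes and names, not the authors' implementation; the Aff parameters are passed explicitly, and the tanh approximation of GELU is used):

```python
import numpy as np

def aff(X, alpha, beta):
    # per-channel affine transform, applied to each column of X
    return alpha[:, None] * X + beta[:, None]

def gelu(x):
    # tanh approximation of the GELU non-linearity
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def resmlp_layer(X, A, B, C, affs):
    # X: d x n matrix of patch embeddings; affs: four (alpha, beta) pairs
    (a1, b1), (a2, b2), (a3, b3), (a4, b4) = affs
    # cross-patch sublayer: A mixes patches, identically for every channel
    Z = X + aff((A @ aff(X, a1, b1).T).T, a2, b2)
    # per-patch feed-forward sublayer: B and C mix channels, per patch
    Y = Z + aff(C @ gelu(B @ aff(Z, a3, b3)), a4, b4)
    return Y

rng = np.random.default_rng(0)
d, n = 8, 16                                   # embedding dim, number of patches
X = rng.standard_normal((d, n))
A = rng.standard_normal((n, n)) * 0.02         # patch-interaction matrix
B = rng.standard_normal((4 * d, d)) * 0.02     # 4d x d, as in a Transformer FFN
C = rng.standard_normal((d, 4 * d)) * 0.02     # d x 4d
affs = [(np.ones(d), np.zeros(d)) for _ in range(4)]
Y = resmlp_layer(X, A, B, C, affs)
assert Y.shape == (d, n)
```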
The main difference compared to a Transformer layer is that we replace the self-attention with the linear interaction defined in Eq. (2). While self-attention computes a convex combination of the other features with coefficients that are data dependent, the linear interaction layer in Eq. (2) computes a general linear combination using learned coefficients that are not data dependent. Compared to convolutional layers, which have local support and share weights across space, our linear patch interaction layer offers global support and does not share weights; moreover, it is applied independently across channels.
Our model can be regarded as a drastic simplification of the ViT model by Dosovitskiy et al. dosovitskiy2020image . We depart from this model as follows:
We do not include any self-attention block. Instead we have a linear patch interaction layer without non-linearity.
We do not have the extra “class” token that is typically used in these models to aggregate information via attention. Instead, we simply use average pooling. We do, however, also consider a specific aggregation layer as a variant, which we describe in the next paragraph.
Similarly, we do not include any form of positional embedding: it is not required, as the linear communication module between patches implicitly takes the patch position into account.
Instead of pre-LayerNormalization, we use a simple learnable affine transform, thus avoiding any form of batch and channel-wise statistics.
As an alternative to average pooling, we also experimented with an adaptation of the class-attention introduced in CaiT touvron2021going . It consists of two layers that have the same structure as the transformer, but in which only the class token is updated based on the frozen patch embeddings. We translate this method to our network, by replacing the attention-based interaction between the class and patch embeddings by simple linear layers. This increases the performance, at the expense of adding some parameters and computational cost. We refer to this pooling variant as “class-MLP”.
In this section, we present experimental results for our ResMLP architecture for image classification. We also study the impact of the different components in the ResMLP architecture in a series of ablations.
We train our models on the ImageNet-1k dataset Russakovsky2015ImageNet12 , that contains 1.2M images evenly spread over 1,000 object categories. In the absence of an available test set for this benchmark, we follow the standard practice in the community by reporting performance on the validation set. This is not ideal since the validation set was originally designed to select hyper-parameters. Comparing methods on this set may not be conclusive enough because an improvement in performance may not be caused by better modeling, but by a better selection of hyper-parameters. To mitigate this risk, we report additional results on two alternative versions of ImageNet that have been built to have distinct validation and test sets, namely the ImageNet-real Beyer2020ImageNetReal and ImageNet-v2 Recht2019ImageNetv2 datasets. Our hyper-parameters are mostly adopted from Touvron et al. Touvron2020TrainingDI ; touvron2021going .
We consider two training paradigms in our experiments:
Supervised learning: We train ResMLP from labeled images with a softmax classifier and cross-entropy loss. This paradigm is the main focus of our work.
Knowledge distillation: We employ the knowledge distillation procedure proposed by Touvron et al. Touvron2020TrainingDI to guide the training of ResMLP with a convnet. Note that in this paradigm, the ResMLP architecture does not require access to the labels during training, only to an existing pre-trained model.
In the case of supervised learning, we train our network with the Lamb optimizer you20lamb . We initialize the LayerScale parameters as a function of the depth, following the off-the-shelf values proposed by Touvron et al. touvron2021going for CaiT. The rest of the hyper-parameters follow the default setting used in DeiT Touvron2020TrainingDI . For the knowledge distillation paradigm, we use the same RegNetY-16GF teacher Radosavovic2020RegNet as in DeiT, with the same training schedule.
|Family||Model||#params (×10⁶)||throughput (im/s)||FLOPs (×10⁹)||Peak mem. (MB)||top-1 acc. (%)|
|State of the art||CaiT-M48-448 touvron2021going||356||5.4||329.6||5477.8||86.5|
| ||NfNet-F6 SAM Brock2021HighPerformanceLI||438||16.0||377.3||5519.3||86.5|
|Convolutional networks||EfficientNet-B3 tan2019efficientnet||12||661.8||1.8||1174.0||81.1|
|Transformer networks||DeiT-S Touvron2020TrainingDI||22||940.4||4.6||217.2||79.8|
In this section, we compare our architecture with standard neural networks of comparable size and throughput on ImageNet.
In Table 1, we compare ResMLP with different convolutional and Transformer architectures. For completeness, we also report the best-published numbers obtained with a model trained on ImageNet alone. As expected, in terms of the trade-off between accuracy, FLOPs, and throughput, ResMLP is not as good as convolutional networks or Transformers. However, their accuracy is very encouraging. Indeed, we compare them with architectures that have benefited from years of research and careful optimization towards these trade-offs. Overall, our results suggest that the structural constraints imposed by the layer design do not have a drastic influence on performance, especially when training models with enough data and modern advances in training and regularization.
We also study our model when training following the knowledge distillation paradigm from Touvron et al. Touvron2020TrainingDI . In their work, the authors show the impact of training a ViT model by distilling it from an EfficientNet. In this experiment, we explore if ResMLP also benefits from this procedure and summarize our results in Table 2. We observe that similar to DeiT models, ResMLP greatly benefits from distilling from a convnet. This result concurs with the observations made by d’Ascoli et al. d2019finding , who used convnets to initialize feedforward networks. Even though our setting differs from theirs in scale, the problem of overfitting for feedforward networks is still very much present on ImageNet. The additional regularization obtained from the distillation is a possible explanation for this improvement.
We evaluate the quality of features obtained from a ResMLP architecture when transferring them to other domains. The goal is to assess if the features generated from a feedforward network are more prone to overfitting on the training data distribution.
We adopt the typical setting where we pre-train a model on ImageNet-1k and fine-tune it on the training set associated with a specific domain. We report the performance of different architectures on different image benchmarks in Table 3, namely CIFAR-10 and CIFAR-100 krizhevsky2009learning , Flowers-102 Nilsback08 , Stanford Cars Cars2013 and iNaturalist Horn2019INaturalist . We refer the reader to the corresponding references for a more detailed description of the different datasets. We observe that the performance of our ResMLP is competitive with existing architectures, showing that pretraining feedforward models with enough data and regularization via data augmentation greatly reduces their tendency to overfit on the original distribution. Interestingly, this regularization also prevents them from overfitting on the training sets of smaller datasets during the fine-tuning stage.
Evaluation on transfer learning. We compare models trained on ImageNet for transfer to datasets covering different domains. The ResMLP architecture takes 224×224 images during training and transfer, while the ViTs and EfficientNet-B7 work with higher resolutions, see the “res.” column.
Figure 2: Visualisation of the linear patch interaction layers of ResMLP-24 (layers 1, 4, 7, 10, 20 and 22 shown).
Because they are linear, our patch interaction layers from Eq. (2) are easily interpretable. In Fig 2 we visualise the rows of the interaction matrices as images, for our ResMLP-24 model. The early layers show convolution-like patterns: the learned weights resemble shifted versions of each other and have local support. Interestingly, in many layers, the support also extends along both axes, most prominently seen in layer seven. The last seven layers of the network are different: they consist of a spike for the patch itself and a diffuse response across other patches with larger or smaller magnitude; see layers 20 and 22.
The visualizations described above suggest that the linear communication layers are sparse. We analyze this quantitatively in more detail in Fig. 4. We measure the sparsity of the patch-interaction matrix A, and compare it to the sparsity of the matrices B and C of the per-patch MLP. Since there are no exact zeros, we measure the rate of components whose absolute value is lower than 5% of the maximum value. Note that discarding the small values is analogous to the case where we normalize the matrix by its maximum and use a finite-precision representation of the weights. For instance, with a 4-bit representation of the weights, one would typically round to zero all weights whose absolute value is below 6.25% of the maximum value.
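The sparsity measure described above amounts to counting near-zero entries relative to the largest weight; a small sketch (our own helper, not taken from the paper's code):

```python
import numpy as np

def near_zero_rate(W, rel_threshold=0.05):
    # fraction of weights whose magnitude is below rel_threshold times
    # the largest magnitude in the matrix (exact zeros are rare in practice)
    max_abs = np.abs(W).max()
    return float(np.mean(np.abs(W) < rel_threshold * max_abs))

W = np.array([[1.00, 0.01],
              [0.04, -0.50]])
# entries 0.01 and 0.04 fall below 0.05 * 1.00, so half the entries count
assert near_zero_rate(W) == 0.5
```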
The measurements in Fig. 4 show that all three matrices are sparse, with the layers implementing the patch communication being significantly more so. This suggests that they may be compatible with parameter pruning, or better, with modern quantization techniques that induce sparsity at training time, such as Quant-Noise fan2020training and DiffQ defossez2021differentiable . The sparsity structure, in particular in earlier layers (see Fig. 2), hints that we could implement the patch-mixing linear layer with a convolution. This line of research on network compression is beyond the scope of our paper, yet we believe it is worth investigating in the future.
Since MLPs are subject to overfitting, we show in Fig. 4 a control experiment probing for problems with generalization. We explicitly analyze the difference in performance between ImageNet-val and the distinct ImageNet-V2 test set. The degree of overfitting of our MLP-based model is overall comparable to that of other transformer-based architectures or convnets.
Table 4 reports the ablation study of our base network and a summary of our preliminary exploratory studies. We discuss the ablation below and give more detail about the early experiments in Appendix A.
As discussed when presenting the visualizations, the linear layers (which implicitly exploit the patch position) look like convolutions in most of the layers. In this experiment, we replaced the linear patch interaction layer with a convolution over the grid of patches. Our ablation shows that this choice improves the performance, showing that low-resolution convolutions at all layers are an interesting alternative to the most common design of convnets, where early layers operate at high resolution and small feature dimension.
Our primary network configuration does not contain any batch normalization. Instead, we use an affine per-channel transform in place of normalizations such as Layer Normalization ba2016layer , typically used in transformers. In preliminary experiments with pre-norm and post-norm He2016IdentityMappings , we observed that both choices lead to convergence. Pre-normalization in conjunction with Batch Normalization could provide an accuracy gain in some cases (see Appendix A). However, for the sake of simplicity, we preferred not to introduce any dependency on batch statistics, which is why we resorted to the Aff operator only.
In our early exploration, we evaluated several alternative design choices. We summarize our main findings below:
Block design. We have tried several variants for the patch interaction layer, amongst them using the same MLP structure as for the per-patch processing. In our experiments, the simpler choice of a single linear layer led to better performance while being more efficient. Moreover, it requires fewer parameters than a residual MLP block.
Positional encoding and class token. As in transformers, we could use positional embeddings mixed with the input patches. In our experiments, we did not see any benefit from using these features. This observation suggests that our linear patch interaction layer provides sufficient spatial communication. Referencing absolute positions obviates the need for any form of positional encoding.
Class-MLP. In contrast, specialized layers extracting information from image patches with a class embedding increase performance by +0.5% top-1 acc. This improvement is comparable to adding the same number of layers in the main network. However, using these specialized layers is more efficient.
Table 4: Ablation of our base network. Columns: #layers, supervision, normalization, pooling, patch size, and top-1 accuracy on ImageNet.
We review the research on applying Fully Connected Networks (FCNs) to computer vision problems, as well as other architectures that share common modules with our model.
Many studies have shown that FCNs are competitive with convnets for the tasks of digit recognition simard2003best ; cirecsan2012deep , keyword spotting chatelain2006extraction and handwriting recognition bluche2015deep . Several works urban2016deep ; mocanu2018scalable ; lin2015far have questioned whether FCNs are also competitive on natural image datasets, such as CIFAR-10 krizhevsky2009learning . More recently, d’Ascoli et al. d2019finding have shown that an FCN initialized with the weights of a pretrained convnet achieves performance superior to that of the original convnet. Neyshabur neyshabur2020towards further extends this line of work by achieving competitive performance when training an FCN from scratch, but with a regularizer that constrains the model to be close to a convnet. These studies have been conducted on small-scale datasets with the purpose of studying the impact of architectures on generalization in terms of sample complexity du2018many and energy landscape keskar2016large . In our work, we show that, in the larger-scale setting of ImageNet, FCNs can attain surprising accuracy without any constraint or initialization inspired by convnets.
Finally, the application of FCNs in computer vision has also emerged in the study of the properties of networks with infinite width novak2018bayesian , and for inverse scattering problems khoo2019switchnet . More interestingly, the Tensorizing Network novikov2015tensorizing is an approximation of very large FCNs that shares similarities with our model, in that it intends to remove priors by approximating even more general tensor operations, i.e., operations that are not arbitrarily marginalized along some pre-defined sharing dimensions. However, their method is designed to compress the MLP layers of standard convnets.
Our FCN architecture shares several components with other architectures, such as convnets lecun1998gradient ; Krizhevsky2012AlexNet or Transformers vaswani2017attention . A fully connected layer is equivalent to a convolution layer with a 1×1 receptive field, and several works have explored convnet architectures with small receptive fields. For instance, the VGG model Simonyan2015VGG uses 3×3 convolutions, and later architectures such as ResNeXt xie2017aggregated or Xception chollet2017xception mix 1×1 and 3×3 convolutions. In contrast to convnets, in our model the interaction between patches is obtained via a linear layer that is shared across channels, and that relies on absolute rather than relative positions.
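The equivalence between a per-position fully connected layer and a 1×1 convolution can be checked directly (a toy numpy check with made-up sizes; the explicit loop stands in for a conv implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
# a "1x1 convolution" with C_in input and C_out output channels is just a
# linear layer applied at every spatial position with shared weights
C_in, C_out, H, Wd = 3, 5, 4, 4
x = rng.standard_normal((C_in, H, Wd))
W = rng.standard_normal((C_out, C_in))

# apply as a shared linear layer over all positions at once
y_linear = np.einsum('oc,chw->ohw', W, x)
# apply as a 1x1 convolution, position by position
y_conv = np.zeros((C_out, H, Wd))
for i in range(H):
    for j in range(Wd):
        y_conv[:, i, j] = W @ x[:, i, j]
assert np.allclose(y_linear, y_conv)
```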
More recently, Transformers have emerged as a promising architecture for computer vision child2019generating ; dosovitskiy2014discriminative ; parmar2018image ; Touvron2020TrainingDI ; zhao2020exploring . In particular, our architecture takes inspiration from the structure used in the Vision Transformer (ViT) dosovitskiy2020image , and as a consequence shares many of its components. Our model takes a set of non-overlapping patches as input and passes them through a series of MLP layers that share the same structure as the Transformer layer, replacing the self-attention layer with a linear patch interaction layer. Both layers have a global field-of-view, unlike convolutional layers. Whereas in self-attention the weights used to aggregate information from other patches are data dependent through queries and keys, in ResMLP the weights are not data dependent and are based only on the absolute positions of patches. In our implementation we follow the improvements of DeiT Touvron2020TrainingDI to train vision transformers, and use the skip-connections from the ResNet he2016deep with pre-normalization of the layers chen2018best ; He2016IdentityMappings .
Finally, our work questions the importance of self-attention for the performance of vision transformers, or at least whether the performance increase it provides justifies the training challenges it raises. Similar observations have been made in natural language processing. Notably, the Synthesizer tay2020synthesizer shows that dot-product self-attention can be replaced by a feedforward network, with competitive performance on sentence representation benchmarks. As opposed to our work, the Synthesizer does use data-dependent weights, but in contrast to transformers, the weights are determined from the query point only.
In this paper we have shown that a simple residual architecture, whose residual blocks consist of a one-hidden-layer feed-forward network and a linear patch interaction layer, achieves unexpectedly high performance on ImageNet classification benchmarks, provided that we adopt a modern training strategy such as those recently introduced for transformer-based architectures. Thanks to their simple structure, with linear layers as the main means of communication between patches, we can visualize the filters learned by this simple MLP. While some of the layers are similar to convolutional filters, we also observe sparse long-range interactions as early as the second layer of the network. We hope that our spatial prior-free model will contribute to further understanding of what networks with fewer priors learn, and potentially guide the design choices of future networks, without the pyramidal design prior adopted by most convolutional neural networks.
We would like to thank Mark Tygert for relevant references. This work builds upon the Timm library pytorchmodels by Ross Wightman.
The best of both worlds: combining recent advances in neural machine translation. In Annual Meeting of the Association for Computational Linguistics, 2018.
Xception: deep learning with depthwise separable convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1251–1258, 2017.
Deep big multilayer perceptrons for digit recognition. In Neural Networks: Tricks of the Trade, pages 581–598. Springer, 2012.
How many samples are needed to estimate a convolutional neural network? In NeurIPS, pages 371–381, 2018.
Fine-tuning CNN image retrieval with no human annotation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018.
As discussed in the main paper, our work on designing a residual multi-layer perceptron was inspired by the Vision Transformer. For our exploration, we have adopted the recent CaiT variant touvron2021going as a starting point. This transformer-based architecture achieves state-of-the-art performance with Imagenet training only (86.5% top-1 accuracy on Imagenet-val for the best model). Most importantly, its training is relatively stable with increasing depth.
In our exploration phase, our objective was to radically simplify this model. For this purpose, we considered the CaiT-S24 model for faster iterations. This network consists of 24 layers with a working dimension of 384. All our experiments below were carried out with images at resolution 224×224 and 16×16 patches. Trained with regular supervision, CaiT-S24 attains 82.7% top-1 accuracy on Imagenet.
The self-attention can be seen as a weight generator for a linear transformation on the values. Therefore, our first design modification was to get rid of the self-attention by replacing it with a residual feed-forward network, which takes as input the transposed set of patches instead of the patches. In other terms, in this case we alternate residual blocks operating along the channel dimension with blocks operating along the patch dimension. In that case, the MLP replacing the self-attention consists of the sequence of operations
— linear — GELU — linear —
Hence this network is symmetrical in the patch and channel dimensions. By keeping the other elements identical to CaiT, the accuracy drops by 2.5% when replacing the self-attention layers.
If we further replace the class-attention layer of CaiT by a MLP as described in our paper, then we obtain an attention-free network whose top-1 accuracy on Imagenet-val is 79.2%, which is comparable to a ResNet-50 trained with a modern training strategy. This network has served as our baseline for subsequent ablations. Note that, at this stage, we still include LayerScale, a class embedding (in the class-MLP stage) and positional encodings.
The same model trained with distillation inspired by Touvron et al. Touvron2020TrainingDI achieves 81.5%. The distillation variant we choose corresponds to the “hard-distillation”, whose main advantage is that it does not require any parameter-tuning compared to vanilla cross-entropy. Note that, in all our experiments, this distillation method seems to bring a gain that is complementary and seemingly almost orthogonal to other modifications.
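Hard distillation uses the teacher's argmax prediction as a second set of labels, so no temperature needs tuning. A numpy sketch under our own naming (the equal weighting via `lam` follows the averaging used in DeiT, but the function names and setup are ours):

```python
import numpy as np

def cross_entropy(logits, targets):
    # mean cross-entropy from raw logits against integer class targets
    logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(targets)), targets].mean()

def hard_distillation_loss(student_logits, teacher_logits, labels, lam=0.5):
    # "hard" distillation: the teacher contributes its argmax predictions
    # as pseudo-labels, blended with the ground-truth cross-entropy
    teacher_labels = teacher_logits.argmax(axis=1)
    return ((1.0 - lam) * cross_entropy(student_logits, labels)
            + lam * cross_entropy(student_logits, teacher_labels))

rng = np.random.default_rng(0)
student = rng.standard_normal((4, 10))
labels = np.array([1, 3, 5, 7])
teacher = np.full((4, 10), -5.0)
teacher[np.arange(4), labels] = 5.0    # a teacher that agrees with the labels
# when the teacher agrees with the ground truth, both terms coincide
assert np.isclose(hard_distillation_loss(student, teacher, labels, 0.0),
                  hard_distillation_loss(student, teacher, labels, 1.0))
```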
We have tried different activations on top of the aforementioned MLP-based baseline, and kept GELU for its accuracy and for consistency with the transformer choice.
For the MLP that replaced the class-attention, we explored different sizes of the latent layer, by adjusting the expansion factor in the sequence linear — GELU — linear. For this experiment we used average pooling to aggregate the patches before the classification layer.
|Imnet-val top-1 acc.||78.6||79.2||79.2||79.3||78.8||78.8|
We observe that a large expansion factor is detrimental in the patch communication, possibly because we should not introduce too much capacity in this residual block. This motivated the choice of adopting a simple linear layer, which subsequently improved performance in a setting comparable to the table above. Additionally, as shown earlier, this choice allows visualizations of the interaction between patches.
On top of our MLP baseline, we have tested different variations for normalization layers. We report the variation in performance below.
|no norm (Aff)||+0.4%|
For the sake of simplicity, we therefore adopted only the Aff transformation so as to not depend on any batch or channel statistics.
In our experiments, removing the position encoding does not change the results when using an MLP or a simple linear layer as the communication means across patch embeddings. This is not surprising considering that the linear layer implicitly encodes each patch identity as one of its dimensions, and that additionally it includes a bias that makes it possible to differentiate the patch positions before the shared linear layer.
In this section we further analyze the linear interaction layers in 12-layer models.
In Figure B.1 we consider a ResMLP-12 model trained on the ImageNet-1k dataset, as explained in Section 3.1, and show all the 12 linear patch interaction layers. The linear interaction layers in the supervised 12-layer model are similar to those observed in the 24-layer model in Figure 2.
We also provide the corresponding sparsity measurements for this model in Figure B.2, analogous to the measurements in Figure 4 for the supervised 24-layer model. The sparsity levels in the supervised 12-layer model (left panel) are similar to those observed in the supervised 24-layer model, cf. Figure 4. In the right panel of Figure B.2 we consider the sparsity levels of the distilled 12-layer model, which are overall similar to those observed for the supervised 12-layer and 24-layer models.
Figure B.1: The 12 linear patch interaction layers of the supervised ResMLP-12 model (Layers 1–12).
In Algorithm 1 we provide the PyTorch-like pseudocode associated with our model.
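Algorithm 1 itself is not reproduced here; as a rough stand-in, the forward pass can be sketched in plain numpy as follows (a simplified illustration: the Aff transforms, LayerScale initialization and the class-MLP variant are omitted, and all names and sizes are our own):

```python
import numpy as np

rng = np.random.default_rng(0)

def gelu(x):
    # tanh approximation of the GELU non-linearity
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

class ResMLPSketch:
    """Forward pass only; normalization and training logic omitted."""
    def __init__(self, patch_dim, n_patches, dim, depth, n_classes, expansion=4):
        w = lambda *s: 0.02 * rng.standard_normal(s)
        self.embed = w(dim, patch_dim)                  # linear patch projection
        self.layers = [(w(n_patches, n_patches),        # A: cross-patch mixing
                        w(expansion * dim, dim),        # B: channel expansion
                        w(dim, expansion * dim))        # C: channel reduction
                       for _ in range(depth)]
        self.head = w(n_classes, dim)                   # linear classifier

    def forward(self, patches):
        # patches: (n_patches, patch_dim) flattened image patches
        X = self.embed @ patches.T                      # (dim, n_patches)
        for A, B, C in self.layers:
            X = X + (A @ X.T).T                         # patch-interaction sublayer
            X = X + C @ gelu(B @ X)                     # per-patch feed-forward
        return self.head @ X.mean(axis=1)               # average pool, then classify

model = ResMLPSketch(patch_dim=48, n_patches=16, dim=32, depth=2, n_classes=10)
logits = model.forward(rng.standard_normal((16, 48)))
assert logits.shape == (10,)
```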