Density-embedding layers: a general framework for adaptive receptive fields

06/23/2020, by Francesco Cicala et al., University of Trieste

The effectiveness and performance of artificial neural networks, particularly for visual tasks, depend in crucial ways on the receptive field of neurons. The receptive field itself depends on the interplay between several architectural aspects, including sparsity, pooling, and activation functions. In recent literature there are several ad hoc proposals trying to make receptive fields more flexible and adaptive to data. For instance, different parameterizations of convolutional and pooling layers have been proposed to increase their adaptivity. In this paper, we propose the novel theoretical framework of density-embedding layers, generalizing the transformation represented by a neuron. Specifically, the affine transformation applied on the input is replaced by a scalar product of the input, suitably represented as a piecewise constant function, with a density function associated with the neuron. This density is shown to describe directly the receptive field of the neuron. Crucially, by suitably representing such a density as a linear combination of a parametric family of functions, we can efficiently train the densities by means of any automatic differentiation system, making them adaptable to the problem at hand and computationally efficient to evaluate. This framework captures and generalizes recent methods, allowing a fine-tuning of the receptive field. In the paper, we define some novel layers and we experimentally validate them on the classic MNIST dataset.


1 Introduction

Convolutional neural networks (CNNs) are a standard architecture for tasks involving signals, in particular visual tasks. They have provided numerous state-of-the-art results on popular benchmarks He2016DeepRL ; He2015DelvingDI ; Krizhevsky2012ImageNetCW ; Szegedy2015GoingDW ; Tan2019EfficientNetRM , and they continue to receive a lot of interest because of their ability to learn complex tasks with far fewer parameters than a fully-connected network requires. The convolutional layer makes efficient use of its shared weights by means of sparse interactions with the input signal. Every neuron of a convolutional layer applies an affine transformation only to a local region of the input, and these layers are arranged in a hierarchical structure LeCun2010ConvolutionalNA , as supported by neuroscientific studies of the visual cortex Hubel1962ReceptiveFB ; Hassabis2017NeuroscienceInspiredAI ; Yan2020RevealingFS ; Ukita2018CharacterizationON . Convolutional layers are often combined with pooling layers, which reduce the signal dimensionality while preserving its relevant features.

The effectiveness of this range of methods can be interpreted in light of their receptive field. The receptive field of a neuron with respect to an input signal is the region of the signal which affects the output of the neuron Le2017WhatAT , i.e. the region whose variation induces a variation in the neuron's output. Convolutional and pooling layers make use of a convenient reshaping of the neurons' receptive field, and they take advantage of some general properties of temporal and spatial signals (i.e., time series and images). Specifically, max pooling is supported by biologically-inspired arguments Riesenhuber1997JustOV ; Serre2010ANA , and it has an established importance in improving the performance of convolutional architectures Boureau2010ATA . Moreover, many methods have been proposed to increase the flexibility of these layers Saeedan2018DetailPreservingPI ; Lee2018GeneralizingPF ; Lee2019LearningRF ; Kobayashi2019GaussianBasedPF ; Kobayashi2019GlobalFG ; Han2018OptimizingFS . These methods parameterize the underlying receptive field so that it can adapt to data.

The receptive field has a fundamental relevance in determining the performance of neural networks on visual tasks, since the output must be responsive to a large enough area of the input. It has been noted that size is not a sufficient measure of the receptive field of a neuron, since in general the receptive field is not uniform. In fact, it has been shown in Luo2016UnderstandingTE that it follows a Gaussian distribution, and that the effective receptive field is much smaller than the theoretical one.

In this work we observe that, despite the important results that have been obtained, these different methods lack a common theoretical ground on which they can be built and compared. In fact, the analytical development of convolutional and pooling methods is not inherited from an underlying framework. Instead, these methods and their mathematical descriptions are obtained ad hoc, and they are based on suitable heuristic observations.

In this paper, we establish a general framework which has the potential to address this crucial issue. We first disentangle the affine transformation from the receptive-field transformation that a neuron applies to the input $\mathbf{x}$, so that the neuron's output can be written as

$y = \mathbf{w}^\top R(\mathbf{x}) + b, \qquad (1)$

where $R$ denotes the receptive-field transformation and $\mathbf{w}$, $b$ are the parameters of the affine map. Then, we formulate the receptive field by means of a set of probability density functions $\phi_j$, $j = 1, \dots, m$, which determine the regions of the input to be transformed by the neuron. Despite its simplicity, we show that this approach generalizes the transformations underlying fully connected layers, convolutions, max pooling, average pooling, and min pooling. These are all particular cases of this general framework, in which they are analytically derived as a consequence of a specific choice of receptive field densities.

We will consider the case in which the input is a signal, i.e. a time series or an image. In these cases, the cardinality of the input space depends on the resolution inherent to the process which produced the data. Nonetheless, the intrinsic dimension of a data representation is, in general, independent of the resolution, and usually much lower Ansuini2019IntrinsicDO . In our framework, the input dimension and the number of parameters of a neuron are naturally untied.

The probability density functions which define the receptive field can be flexibly parameterized, and they can depend on the input. We show how to analytically derive the transformation in (1), and we demonstrate that, under mild assumptions on the set of densities, it is differentiable. We call the layers developed according to this perspective density-embedding layers, because of the direct link between the selected densities and the transformation that they determine on the input.

2 The generalized neuron

The artificial neuron applies an affine transformation to the input $\mathbf{x}$, which is followed by a non-linear activation. Hereafter, we consider only the affine step, and we leave implicit that its output will be further transformed by a proper activation function.

In the conventional neuron, the affine step is expressed as

$z = \mathbf{w}^\top \mathbf{x} + b = \sum_{k=1}^{n} w_k x_k + b, \qquad (2)$

where $\mathbf{x} \in \mathbb{R}^n$ is the input and $\mathbf{w} \in \mathbb{R}^n$, $b \in \mathbb{R}$ are its parameters. At the foundation of our work there is the idea of generalizing the affine transformation as a scalar product of functions. By appropriately defining the parameter function $w(t)$ and the input function $x(t)$ we can rewrite the previous affine transformation as:

$z = \langle w, x \rangle + b = \int_0^1 w(t)\, x(t)\, dt + b. \qquad (3)$

In order to simplify computations, we express these functions as linear combinations $w(t) = \sum_{j=1}^{m} w_j\, \phi_j(t)$ and $x(t) = \sum_{k=1}^{n} x_k\, e_k(t)$, where $\{\phi_j\}$ and $\{e_k\}$ are respectively $m$ and $n$ basis functions, whose only required property is to be integrable on the interval $[0,1]$. With a slight abuse of notation, we indicate the vectors of their coefficients with $\mathbf{w} = (w_1, \dots, w_m)$ and $\mathbf{x} = (x_1, \dots, x_n)$.

In most cases, the set of functions $\{e_k\}$ will be the piecewise constant set of functions such that $e_k(t) = 1$ for $t \in \left[\frac{k-1}{n}, \frac{k}{n}\right)$, and $e_k(t) = 0$ otherwise. This is because the input signal is almost always provided as a vector. Moreover, in this more general representation the number of parameters and the cardinality of the input space are disentangled, i.e. the number of the neuron's parameters can differ from the input signal resolution. Note that if we also express $w(t)$ in terms of the functions $e_k$, then (3) reduces to (2), hence we obtain the conventional neuron as a special case.

By substituting the function expansions in (3) we get:

$z = \mathbf{w}^\top \Phi\, \mathbf{x} + b, \qquad (4)$

where the matrix $\Phi \in \mathbb{R}^{m \times n}$, with $\Phi_{jk} = \int_0^1 \phi_j(t)\, e_k(t)\, dt$, describes the interaction between the two bases of functions. We see that (4) expresses an affine transformation which is similar to (2). But what is the effect of $\Phi$ on $\mathbf{x}$? By fixing each $\phi_j$ to be a probability density function, the $j$-th component of the resulting vector is:

$(\Phi \mathbf{x})_j = \sum_{k=1}^{n} \Phi_{jk}\, x_k = \int_0^1 \phi_j(t)\, x(t)\, dt. \qquad (5)$

The choice of the $\phi_j$ as density functions, which will be enforced from now on, allows us to interpret $\Phi \mathbf{x}$ as the vector of the expected values of $x(t)$ with respect to the elements of the basis. Since each $\phi_j$ weights the regions of the input that are transformed by the neuron, its effect is to determine the receptive field of the neuron with respect to the input. In fact, $\Phi$ directly describes the shape of the receptive field. Differently from (2), in (4) we find the form expressed in (1), where the receptive field action and the affine transformation are disentangled. Therefore, we can analytically prescribe the former regardless of the latter.
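To make the construction concrete, the following minimal sketch (ours, not the paper's implementation) evaluates a generalized neuron once the densities have been discretized into a matrix: each row of phi holds the probability mass that one density assigns to each input bin, and the output is $\mathbf{w}^\top(\Phi\mathbf{x}) + b$. Names and shapes are illustrative.

```python
import torch

def generalized_neuron(x, phi, w, b):
    """Evaluate y = w^T (Phi x) + b for a single neuron.

    x   : (n,)   input signal, one value per bin of [0, 1]
    phi : (m, n) each row holds the mass a density assigns to each input bin
    w   : (m,)   affine weights; b: scalar bias
    """
    rf = phi @ x          # receptive-field action: expected value of x under each density
    return w @ rf + b     # affine transformation, disentangled from the receptive field

# Example: n = 8 input bins, m = 3 densities uniform over disjoint pieces of the input
n, m = 8, 3
x = torch.randn(n)
phi = torch.zeros(m, n)
phi[0, :3] = 1 / 3        # density 0 covers bins 0-2 uniformly
phi[1, 3:6] = 1 / 3       # density 1 covers bins 3-5
phi[2, 6:] = 1 / 2        # density 2 covers bins 6-7
w, b = torch.randn(m), torch.tensor(0.0)
y = generalized_neuron(x, phi, w, b)
```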

We notice that the densities can depend on the input, and they can even be parameterized. Hence we can consider densities of the form $\phi_j(t; \theta, \mathbf{x})$, where $\theta$ is the vector of the density's parameters, and $\mathbf{x}$ is the vector of coefficients of the input signal.

2.1 An analytical expression for $\Phi$

With no further assumptions on the mathematical properties of the density functions $\phi_j$, in general we can evaluate $\Phi$ by numerical integration. In fact, if the densities are fixed, i.e. their parameters do not change and they do not depend on the input, it is sufficient to compute $\Phi$ only once at the initialization of the neuron. For instance, the fully connected and the convolutional layers belong to this setting: the receptive field of their individual neurons is constant. Nonetheless, we are interested in a more general setting in which the densities can adapt to the input and are described by learnable parameters. In this case, two problems occur:

  1. $\Phi$ must be numerically evaluated at every new iteration, which is computationally expensive;

  2. Since numerical integration is involved, we cannot benefit from the efficiency of the existing automatic differentiation systems provided by frameworks like PyTorch and TensorFlow.

We now demonstrate that these issues can be addressed by a convenient choice of densities. Let $\phi_j(t; \theta, \mathbf{x})$ be a Riemann integrable function on the interval $[0,1]$ for every choice of $\theta$ and for every $\mathbf{x}$, and let it admit a primitive $F_j$ expressible by means of elementary functions on the same interval. By the second fundamental theorem of calculus,

$\int_{a}^{b} \phi_j(t; \theta, \mathbf{x})\, dt = F_j(b) - F_j(a). \qquad (6)$

Moreover, we can further simplify this expression by assuming the most common case in which the input is given as a vector. Therefore, we can equivalently express the input function as a piecewise constant function:

$x(t) = \sum_{k=1}^{n} x_k\, e_k(t), \qquad e_k(t) = \begin{cases} 1 & t \in \left[\frac{k-1}{n}, \frac{k}{n}\right) \\ 0 & \text{otherwise.} \end{cases} \qquad (7)$

From now on, we will always assume this expression for the input function. In this way, the expression for $\Phi$ simplifies to:

$\Phi_{jk} = \int_{\frac{k-1}{n}}^{\frac{k}{n}} \phi_j(t; \theta, \mathbf{x})\, dt = F_j\!\left(\tfrac{k}{n}\right) - F_j\!\left(\tfrac{k-1}{n}\right). \qquad (8)$

Given an analytical expression of $F_j$, the computation of $\Phi$ can be performed exactly and, since $F_j$ is an elementary function, we can differentiate $\Phi$ by means of any automatic differentiation system.
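The following PyTorch sketch illustrates this recipe for logistic densities (the family used later in Section 4): $\Phi$ is obtained from differences of the cumulative distribution evaluated at the bin edges, so automatic differentiation propagates gradients to the density parameters. Function and variable names are ours.

```python
import torch

def phi_from_cdf(mu, s, n):
    """Phi[j, k] = F_j(k/n) - F_j((k-1)/n), where F_j is the logistic CDF
    with parameters (mu[j], s[j]); differentiable w.r.t. mu and s."""
    edges = torch.linspace(0.0, 1.0, n + 1)                           # bin boundaries of the input partition
    cdf = torch.sigmoid((edges[None, :] - mu[:, None]) / s[:, None])  # (m, n+1)
    return cdf[:, 1:] - cdf[:, :-1]                                   # (m, n) mass of each density in each bin

m, n = 4, 16
mu = torch.rand(m, requires_grad=True)          # density means, learnable
s = torch.full((m,), 0.05, requires_grad=True)  # density scales, learnable
x = torch.randn(n)

phi = phi_from_cdf(mu, s, n)
out = phi @ x                                   # receptive-field action (Phi x)
out.sum().backward()                            # gradients reach mu and s through the exact integrals
```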

2.2 Extension for images

So far, we have considered 1D input signals, but the extension to the N-dimensional scenario is straightforward. In particular, we are interested in the case of 2D inputs, like images. In this case, $\Phi$ becomes a 4th-order tensor:

$\Phi_{j_1 j_2 k_1 k_2} = \int_0^1\!\!\int_0^1 \phi_{j_1 j_2}(t_1, t_2)\, e_{k_1}(t_1)\, e_{k_2}(t_2)\, dt_1\, dt_2, \qquad (9)$

and the receptive field action is expressed by $(\Phi X)_{j_1 j_2} = \sum_{k_1, k_2} \Phi_{j_1 j_2 k_1 k_2} X_{k_1 k_2}$.

We observe that, by assuming separable densities, i.e. $\phi_{j_1 j_2}(t_1, t_2) = \phi^{(1)}_{j_1 j_2}(t_1)\, \phi^{(2)}_{j_1 j_2}(t_2)$, we obtain a further simplification:

$\Phi_{j_1 j_2 k_1 k_2} = \left[ F^{(1)}_{j_1 j_2}\!\left(\tfrac{k_1}{n}\right) - F^{(1)}_{j_1 j_2}\!\left(\tfrac{k_1 - 1}{n}\right) \right] \left[ F^{(2)}_{j_1 j_2}\!\left(\tfrac{k_2}{n}\right) - F^{(2)}_{j_1 j_2}\!\left(\tfrac{k_2 - 1}{n}\right) \right], \qquad (10)$

where $F^{(1)}_{j_1 j_2}$ and $F^{(2)}_{j_1 j_2}$ are the primitives of the two factors.

For the sake of clarity and in the light of the last expression, from now on we will consider the case of 1D input signals.
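As a sanity check of the separable case, the sketch below (illustrative shapes, our names) verifies that contracting the full 4th-order tensor with an image gives the same result as contracting the two 1D factors directly, so the full tensor never needs to be materialized.

```python
import torch

m, n = 3, 8
# Discretized separable factors Phi1[j1, j2, k1] and Phi2[j1, j2, k2]; softmax makes
# each of them a valid (normalized) density mass over the n input bins along one axis.
phi1 = torch.softmax(torch.randn(m, m, n), dim=-1)
phi2 = torch.softmax(torch.randn(m, m, n), dim=-1)
X = torch.randn(n, n)                                      # single-channel image

# Full 4th-order tensor Phi[j1, j2, k1, k2] contracted with the image ...
phi4 = torch.einsum('abk,abl->abkl', phi1, phi2)
out_full = torch.einsum('abkl,kl->ab', phi4, X)

# ... equals the direct contraction of the two factors.
out_sep = torch.einsum('abk,abl,kl->ab', phi1, phi2, X)
assert torch.allclose(out_full, out_sep, atol=1e-5)
```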

3 Density-embedding layers

The framework allows us to define a layer by specifying a set of density functions. We refer to the layers defined in this way as density-embedding layers. We will now show how different layers can be obtained by choosing appropriate sets of densities. Specifically, we demonstrate how the fully connected layer and the convolutional layer are recovered under this framework. Typically, the receptive field of the neurons of a layer is defined by means of hyperparameters, such as kernel size and stride. The type of pooling is usually selected by hand too. Moreover, in most transformations the receptive field covers the input uniformly. By suitably parameterizing the set of densities, it is possible to develop layers with a more flexible receptive field which adapts to data. We demonstrate with two simple examples how adaptive kernels and adaptive pooling can be obtained within this framework.

3.1 The fully connected layer

In order to build the fully connected layer, we observe that every neuron must have the same receptive field, i.e. the same matrix $\Phi$. Under this framework, this translates into densities which are independent of the specific neuron. Moreover, every density function of the receptive field collects exactly one element of the input signal. Therefore, we prescribe a set of piecewise constant densities defined on the partition of intervals $I_k = \left[\frac{k-1}{n}, \frac{k}{n}\right)$, with $k = 1, \dots, n$, which represents the natural partition of the input, i.e. $\phi_j(t) = n\, \mathbb{1}_{I_j}(t)$. The element $\Phi_{jk}$ is computed as

$\Phi_{jk} = \int_{\frac{k-1}{n}}^{\frac{k}{n}} n\, \mathbb{1}_{I_j}(t)\, dt = \delta_{jk}. \qquad (11)$

As expected, $\Phi$ is the identity, so that we get $z = \mathbf{w}^\top \mathbf{x} + b$.

3.2 The convolutional layer

Let us consider a 1D convolutional layer, where the stride is set to $s$ and the kernel size to $m$. The receptive field of the $i$-th neuron of this layer covers uniformly the $m$ elements of the input corresponding to the set of intervals $I_{(i-1)s+j}$, $j = 1, \dots, m$. Every interval covers a precise element of the input. Therefore, the receptive field densities of the $i$-th neuron are $\phi^{(i)}_j(t) = n\, \mathbb{1}_{I_{(i-1)s+j}}(t)$, and we get

$\Phi^{(i)}_{jk} = \delta_{k,\,(i-1)s+j}. \qquad (12)$

Every neuron has a sparse receptive field, i.e. its densities only cover a small region of the input signal, and this region is constant. By sharing the same set of weights (i.e. the kernel) among the neurons of a layer, we recover one channel of a convolutional layer. We can obtain different channels by associating different kernels with the same set of matrices $\Phi^{(i)}$.
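The correspondence can be checked numerically. In the sketch below (ours, with an assumed stride and kernel size), a fixed sparse $\Phi^{(i)}$ built from indicator densities reproduces one channel of torch.nn.functional.conv1d.

```python
import torch
import torch.nn.functional as F

def conv_phi(n, kernel_size, stride):
    """One selection matrix per output neuron: phi[i, j, k] = 1 iff input bin k
    is the j-th element of neuron i's window (fixed, sparse densities)."""
    n_out = (n - kernel_size) // stride + 1
    phi = torch.zeros(n_out, kernel_size, n)
    for i in range(n_out):
        for j in range(kernel_size):
            phi[i, j, i * stride + j] = 1.0
    return phi

n, k, s = 12, 3, 2
x = torch.randn(n)
w = torch.randn(k)                                   # shared kernel (one channel, no bias)

phi = conv_phi(n, k, s)                              # (n_out, k, n)
y_phi = torch.einsum('ijk,k->ij', phi, x) @ w        # receptive-field action, then affine map
y_ref = F.conv1d(x.view(1, 1, -1), w.view(1, 1, -1), stride=s).flatten()
assert torch.allclose(y_phi, y_ref, atol=1e-5)
```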

3.3 Adaptive convolution

We extend the receptive field densities of the last example to a parameterized form. For the sake of simplicity, we will still use a set of uniform distributions, but we define them on the intervals

$I^{(i)}_j = \left[\, \frac{(i-1)\,s}{n} + \frac{(j-1)\,a}{m\,n},\ \ \frac{(i-1)\,s}{n} + \frac{j\,a}{m\,n} \,\right), \qquad j = 1, \dots, m, \qquad (13)$

where $a$ is the kernel amplitude, and it will be learned by gradient descent. It defines the extension of the local receptive field of a neuron over the input and, in this example, it is shared among all the neurons of the layer. By considering that, for any pair of reals $0 \le c < d \le 1$,

$\int_{\frac{k-1}{n}}^{\frac{k}{n}} \mathbb{1}_{[c, d)}(t)\, dt = \max\!\left(0,\ \min\!\left(d, \tfrac{k}{n}\right) - \max\!\left(c, \tfrac{k-1}{n}\right)\right), \qquad (14)$

we can easily compute the elements of $\Phi^{(i)}$ for the adaptive convolution:

$\Phi^{(i)}_{jk} = \frac{m\,n}{a} \int_{\frac{k-1}{n}}^{\frac{k}{n}} \mathbb{1}_{I^{(i)}_j}(t)\, dt = \frac{m\,n}{a}\, \max\!\left(0,\ \min\!\left(\beta^{(i)}_j, \tfrac{k}{n}\right) - \max\!\left(\alpha^{(i)}_j, \tfrac{k-1}{n}\right)\right), \qquad (15)$

where $\alpha^{(i)}_j$ and $\beta^{(i)}_j$ denote the left and right endpoints of $I^{(i)}_j$. $\Phi^{(i)}$ can be automatically differentiated with respect to $a$, hence $a$ can be learned. In the traditional convolution, the kernel amplitude $a$ is equal to the kernel size $m$, i.e. the extension of the region of the input covered by the kernel is equal to the number of parameters of the kernel. This simple parameterization makes the kernel able to expand or contract and, eventually, to adapt its amplitude to the specific features of the data.

More importantly, this is just one of many possible parameterizations which could be inspected under this framework, and which could differ in effectiveness and robustness. For instance, the stride could be parameterized, or it could be defined to be proportional to the kernel amplitude $a$. In addition, a function of a learnable parameter could be used in place of $a$ itself. For example, one method to bound the kernel amplitude to an interval $[a_{\min}, a_{\max}]$ is to define $a = a_{\min} + (a_{\max} - a_{\min})\, \sigma(\tilde a)$, where $\sigma$ is the logistic function and $\tilde a$ is the real parameter to be learned instead of $a$.
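A possible implementation of such an adaptive kernel is sketched below. The left-anchored window layout, the bounds a_min and a_max, and all names are our assumptions rather than the paper's prescription; the point is only that $\Phi$ is an elementary, differentiable function of the raw amplitude parameter, so gradient descent can adjust it.

```python
import torch

def adaptive_conv_phi(a_tilde, n, m, stride, a_min=0.5, a_max=4.0):
    """Sketch of an adaptive 1D kernel: the amplitude a (in units of input bins) is
    bounded to [a_min, a_max] by a logistic reparameterization, and phi[i, j, k] is
    the normalized overlap between input bin k and the j-th of the m equal
    sub-intervals of neuron i's window of total width a."""
    a = a_min + (a_max - a_min) * torch.sigmoid(a_tilde)      # learnable amplitude
    n_out = (n - m) // stride + 1
    bins_lo = torch.arange(n, dtype=a.dtype)                  # bin k covers [k, k+1) in bin units
    bins_hi = bins_lo + 1.0
    starts = stride * torch.arange(n_out, dtype=a.dtype)      # left end of each neuron's window
    j = torch.arange(m, dtype=a.dtype)
    lo = starts[:, None] + j[None, :] * a / m                 # (n_out, m) sub-interval lower ends
    hi = starts[:, None] + (j[None, :] + 1) * a / m
    overlap = (torch.minimum(hi[..., None], bins_hi) -
               torch.maximum(lo[..., None], bins_lo)).clamp(min=0.0)   # (n_out, m, n)
    return overlap * m / a                                    # normalize: each density integrates to one

a_tilde = torch.zeros(1, requires_grad=True)
phi = adaptive_conv_phi(a_tilde, n=16, m=3, stride=2)
x, w = torch.randn(16), torch.randn(3)
y = torch.einsum('ijk,k->ij', phi, x) @ w
y.sum().backward()                                            # gradient flows into a_tilde
```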

3.4 Adaptive pooling

Defining a pooling operation means defining a receptive field, and many techniques have been proposed to adapt the pooling operation to data Kobayashi2019GaussianBasedPF ; Lee2018GeneralizingPF ; Saeedan2018DetailPreservingPI ; Kobayashi2019GlobalFG . One way to achieve this consists in parameterizing the pooling operation by means of a learnable real parameter. Starting from a specific set of densities, we show how an adaptive pooling technique can be obtained. We parameterize the set of densities by means of a parameter $\beta$, and we obtain a receptive field which is able to reproduce max pooling ($\beta \to +\infty$), average pooling ($\beta = 0$), and min pooling ($\beta \to -\infty$). Similarly to the convolutional example, we select a set of intervals defining specific regions of the input signal, and we use a set of densities supported on those intervals to compute the matrix $\Phi$. However, we assign a general parameterization to the intervals to indicate that their features (length, position, etc.) can be learned by gradient descent. One additional core difference with respect to the previous examples is that in this setting the densities depend on the input $\mathbf{x}$.

Let us consider a set of intervals over the input's domain $[0,1]$, $I_j(\lambda)$, $j = 1, \dots, m$, where $\lambda$ is a generic set of parameters describing arbitrary interval features. For instance, the intervals (13) introduced for the adaptive convolution are an example of parameterized intervals. Given the usual input signal $x(t)$, we define the receptive field densities as

$\phi_j(t; \beta, \mathbf{x}) = \frac{e^{\beta x(t)}\, \mathbb{1}_{I_j(\lambda)}(t)}{\int_{I_j(\lambda)} e^{\beta x(s)}\, ds}. \qquad (16)$

The result of the integration of the density $\phi_j$ on the interval $\left[\frac{k-1}{n}, \frac{k}{n}\right]$ is

$\Phi_{jk} = \begin{cases} w_k & \text{if } \left[\frac{k-1}{n}, \frac{k}{n}\right) \subseteq I_j(\lambda) \\ 0 & \text{otherwise,} \end{cases} \qquad (17)$

where $w_k = e^{\beta x_k} \big/ \sum_{k'} e^{\beta x_{k'}}$, with the sum running over the indices $k'$ whose bins fall inside $I_j(\lambda)$ (see the supplementary material for further details on the mathematical steps).

Notice that $(\Phi \mathbf{x})_j = \sum_k \Phi_{jk}\, x_k$ is the output of a max pooling, average pooling, and min pooling transformation respectively for $\beta \to +\infty$, $\beta = 0$, and $\beta \to -\infty$.
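The resulting pooling rule is easy to implement. The sketch below (ours) uses fixed, non-overlapping windows for simplicity: a softmax over the $\beta$-scaled inputs produces the input-dependent weights $w_k$, and the three limits recover max, average, and min pooling. $\beta$ can be promoted to a learnable torch.nn.Parameter.

```python
import torch

def beta_pool(x, window, beta):
    """Softmax-weighted pooling over non-overlapping windows: beta -> +inf gives
    max pooling, beta = 0 average pooling, beta -> -inf min pooling."""
    n = x.shape[0]
    xw = x[: n - n % window].view(-1, window)     # (n_windows, window)
    w = torch.softmax(beta * xw, dim=1)           # input-dependent density mass per bin
    return (w * xw).sum(dim=1)                    # (Phi x)_j for each window j

x = torch.tensor([0.1, 0.9, 0.4, 0.2, 0.7, 0.3])
print(beta_pool(x, 2, torch.tensor(0.0)))         # average pooling: [0.50, 0.30, 0.50]
print(beta_pool(x, 2, torch.tensor(50.0)))        # ~max pooling:    [0.90, 0.40, 0.70]
print(beta_pool(x, 2, torch.tensor(-50.0)))       # ~min pooling:    [0.10, 0.20, 0.30]
```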

4 Experimental results

Density-embedding layers constitute a very broad family of transformations, and their formulation allows flexible shaping of the receptive field that selects the input regions to be forwarded to the next layer. Hereafter, we show two implementations of density-embedding layers based on the logistic distribution. To highlight their properties and to provide a visualization, we build two very simple networks which are constituted only by those layers. In order to show that they can provide a representation of the input which is more parsimonious but still accurate, we compare their performance with two fully-connected networks on the MNIST dataset. MNIST has a training set of 60,000 examples and a test set of 10,000 examples, where every input is a 28×28 image with a single channel.

We implemented the density-embedding layers in PyTorch and tested them on the MNIST dataset. We compared their performance with simple fully-connected neural networks. Every model has been trained for 20 epochs with Adam optimization, with a maximum learning rate of 0.002, and this process has been repeated for 5 runs with random initializations. All the models were trained on an NVIDIA GeForce GTX 1050, and the results are shown in Table 1. All the additional operations involving the computation of the tensor $\Phi$ can be efficiently parallelized and automatically differentiated. Further details can be found in the supplementary material.

4.1 Logistic-embedding layer

The logistic distribution is defined as

$f(t; \mu, s) = \frac{e^{-(t - \mu)/s}}{s \left(1 + e^{-(t - \mu)/s}\right)^2}, \qquad (18)$

where $\mu$ is the mean and the variance is given by $\frac{s^2 \pi^2}{3}$. This distribution approximates a Gaussian distribution well, but it has the considerable advantage of having a cumulative distribution which can be expressed by means of elementary functions. Specifically, its cumulative distribution is the logistic function, which is written as

$\sigma(t; \mu, s) = \frac{1}{1 + e^{-(t - \mu)/s}}. \qquad (19)$

We use this density function to build a layer for processing images.

For the sake of simplicity, let us consider single-channel input images $X \in \mathbb{R}^{n \times n}$, where we indicate the elements of the image with $X_{k_1 k_2}$, $k_1, k_2 = 1, \dots, n$. We define the set of densities $\phi_{j_1 j_2}$, $j_1, j_2 = 1, \dots, m$, as

$\phi_{j_1 j_2}(t_1, t_2) = f\!\left(t_1; \mu^{(1)}_{j_1 j_2}, s^{(1)}_{j_1 j_2}\right) f\!\left(t_2; \mu^{(2)}_{j_1 j_2}, s^{(2)}_{j_1 j_2}\right), \qquad (20)$

where $\mu^{(1)}_{j_1 j_2}, s^{(1)}_{j_1 j_2}, \mu^{(2)}_{j_1 j_2}, s^{(2)}_{j_1 j_2}$ are parameters to be learned by gradient descent. Therefore, $\{\phi_{j_1 j_2}\}$ is a set of two-dimensional density functions obtained as the product of two logistic distributions with different $\mu$ and $s$. For every density, $\mu$ and $s$ are obtained as the following functions of learnable parameters:

$\mu = \sigma(\tilde{\mu}), \qquad s = \sigma(\tilde{s}). \qquad (21)$

Although we could directly learn $\mu$ and $s$ for every density, we actually learn the logits $\tilde{\mu}$ and $\tilde{s}$ to restrict $\mu$ and $s$ to the $[0,1]$ interval, i.e. to the input's domain. Note that we used a logistic function to express the parameters, but any bounded function could be employed. Therefore, every density function is described by four parameters, and we have a total of $4 m^2$ parameters. According to the methodology described in Section 2.2, we obtain the receptive field action $\Phi X$, which is a linear function of the input, and we call the resulting layer a logistic-embedding layer.
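A minimal PyTorch sketch of such a layer follows. The module structure, the factor of 0.5 bounding $s$, and all names are illustrative choices of ours; the essential ingredients are the sigmoid-bounded parameters and the CDF-difference computation of $\Phi$ from Section 2.2.

```python
import torch
import torch.nn as nn

class LogisticEmbedding2d(nn.Module):
    """Sketch of a logistic-embedding layer: an (n, n) image is reduced to an
    (m, m) map of expected values under m*m separable 2D logistic densities."""
    def __init__(self, n, m):
        super().__init__()
        self.n = n
        # raw logits for the means and scales of the row/column logistic factors
        self.mu_logit = nn.Parameter(torch.randn(2, m, m))
        self.s_logit = nn.Parameter(torch.randn(2, m, m))

    def _phi_1d(self, mu, s):
        # Phi[j1, j2, k] = F(k/n) - F((k-1)/n), with F the logistic CDF of (mu, s)
        edges = torch.linspace(0.0, 1.0, self.n + 1, device=mu.device)
        cdf = torch.sigmoid((edges - mu[..., None]) / s[..., None])    # (m, m, n+1)
        return cdf[..., 1:] - cdf[..., :-1]                            # (m, m, n)

    def forward(self, x):                       # x: (batch, n, n) input bin values
        mu = torch.sigmoid(self.mu_logit)       # means bounded to the image domain [0, 1]
        s = 0.5 * torch.sigmoid(self.s_logit)   # scales bounded to (0, 0.5): illustrative choice
        phi_r = self._phi_1d(mu[0], s[0])       # row factor,    (m, m, n)
        phi_c = self._phi_1d(mu[1], s[1])       # column factor, (m, m, n)
        # separable receptive field: out[b, j1, j2] = sum_{k1,k2} phi_r[j1,j2,k1] phi_c[j1,j2,k2] x[b,k1,k2]
        return torch.einsum('ijk,ijl,bkl->bij', phi_r, phi_c, x)

layer = LogisticEmbedding2d(n=28, m=5)
pooled = layer(torch.randn(4, 28, 28))          # (4, 5, 5), then flattened and classified
```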

Every logistic-embedding layer learns a set of $m^2$ logistic densities, which are used to apply a pooling to the input image. The result is an $m \times m$ filter, where every pixel represents the expected value of the input with respect to one of the densities. The filter is then flattened and fed as input to a linear classifier. We used four different values of $m$ (3, 5, 8, 15) and compared them with a fully connected linear layer (linear classifier). For the largest values of $m$, the logistic-EL reaches a test error close to that of the fully connected layer while saving more than half of the parameters; for smaller values of $m$, it still reaches a high test accuracy with almost one tenth of the parameters of the fully connected layer.

4.2 Learning the density parameters by microNN

In this section, we consider the same set of logistic densities of the last example, but rather than expressing their means as logistic functions of learnable logits, we use a linear micro network (mNN) $g$ to force a dependency on the input. Therefore, we obtain a density-embedding layer where the receptive field adapts to the given input by means of a smaller network $g$, i.e.

$\mu_{j_1 j_2} = g_{j_1 j_2}(\mathbf{x}; \theta), \qquad (22)$

where $\theta$ indicates the parameters of the micro network. As already shown, the performance of the logistic-embedding layer is comparable to that of the fully connected layer, but it is significantly more parsimonious with respect to the number of parameters. For this reason, we used the layer described in the previous paragraph as the micro network $g$. The Logistic-EL used as micro network makes use of $m'^2$ density functions for computing $\mu$, and the outer Logistic-EL uses the output of the micro network as the parameters of its density functions, determining the final output.

Notice that, even if we compute $\mu$ by means of a linear function, the full layer is not linear with respect to the input. In fact, the output of the Logistic-EL is a nonlinear function of its parameters $\mu$ and $s$; since $\mu$ depends on the input, we are actually applying a nonlinear transformation to $\mathbf{x}$. For this reason, we compared this model with a fully connected network (FCN) with one hidden layer. We used three different values for $m$ (6, 8, 10). The Logistic-EL with micro network performs slightly better than the FCN, with a considerable reduction in the number of parameters. The results are displayed in Table 1, while Figure 1 visually depicts the receptive fields of the logistic-embedding layer.
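The sketch below conveys the idea with one simplification: a plain linear layer stands in for the Logistic-EL micro network used in the paper. It produces per-sample means, which are bounded with a sigmoid and fed into the CDF-difference computation of $\Phi$, so the receptive field changes with every input. Structure and names are ours.

```python
import torch
import torch.nn as nn

class MicroLogisticEmbedding2d(nn.Module):
    """Sketch: the density means of the outer layer are produced by a small
    'micro network' from the input, so the receptive field adapts per sample."""
    def __init__(self, n, m):
        super().__init__()
        self.n, self.m = n, m
        self.micro = nn.Linear(n * n, 2 * m * m)           # micro network g(x; theta) -> raw means
        self.s_logit = nn.Parameter(torch.randn(2, m, m))  # scales stay input-independent here

    def forward(self, x):                                   # x: (batch, n, n)
        b, n, m = x.shape[0], self.n, self.m
        mu = torch.sigmoid(self.micro(x.reshape(b, -1))).view(b, 2, m, m)  # per-sample means in [0, 1]
        s = 0.5 * torch.sigmoid(self.s_logit)                              # (2, m, m)
        edges = torch.linspace(0.0, 1.0, n + 1, device=x.device)
        cdf = torch.sigmoid((edges - mu[..., None]) / s[..., None])        # (batch, 2, m, m, n+1)
        phi = cdf[..., 1:] - cdf[..., :-1]                                 # (batch, 2, m, m, n)
        # separable, input-dependent receptive field
        return torch.einsum('bijk,bijl,bkl->bij', phi[:, 0], phi[:, 1], x)

layer = MicroLogisticEmbedding2d(n=28, m=6)
out = layer(torch.randn(4, 28, 28))                         # (4, 6, 6)
```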

Model | Error (%) (5 runs) | # parameters
FC (no hidden layer)
FC (1 hidden layer)
Logistic-EL (m = 3)
Logistic-EL (m = 5)
Logistic-EL (m = 8)
Logistic-EL (m = 15)
Logistic-EL with mNN (m = 6)
Logistic-EL with mNN (m = 8)
Logistic-EL with mNN (m = 10)
Table 1: Performance comparison on MNIST.
Figure 1: Visualization of the receptive fields before and after training. Every receptive field represents the sum of the density functions over the input domain. (a) Receptive fields of a logistic-EL with micro network before (center) and after (right) training for three different inputs (left); the micro network makes the receptive field adapt to the input. (b) and (c) Receptive fields of a logistic-EL before (left) and after (right) training for two different values of m; they are fixed, since their parameters do not depend on the input.

5 Conclusions

In this work we proposed a novel general framework for defining a broad category of layers of neurons by explicitly representing the receptive field with a set of density functions. We have shown that these density functions can be selected and parameterized flexibly, under the only condition that their primitives can be expressed by means of elementary functions. Moreover, they can depend on the input in nontrivial ways. We have shown how our approach recovers the fully connected and the convolutional layers as particular cases, and we have developed further examples to show how adaptive differentiable layers can be naturally described.

Finally, we have developed two variants of a density-embedding layer based on the logistic distribution, and we have demonstrated that they are able to learn receptive fields which effectively leverage the properties of the input and allow a significant reduction in the number of parameters.

It is important to mention that the logistic-embedding layer is only one of many possible density-embedding layers which deserve to be explored. The value of this framework lies in the way it allows the receptive field of artificial neurons to be shaped directly. The receptive field determines what information about the input is forwarded and elaborated, and selecting it cleverly is crucial for generalization and memory efficiency. We believe that this methodology is a convenient tool for studying new adaptive layers, and that it represents a potential candidate as a theoretical framework for analytically comparing the properties of a rich family of transformations. Future work involves the exploration of different sets of densities, and a broader experimental analysis and validation of the properties of different receptive fields.

Broader Impact

As our proposal consists in a theoretical framework, we believe that the impact of our work on social and ethical aspects can only be indirect. Within this framework, we can develop layers which significantly reduce the number of parameters required by a fully connected layer. Through further investigation, we hope to derive efficient and scalable models to be used in a broad spectrum of problems. These applications may have an impact on social and ethical issues.

References

  • [1] Alessio Ansuini, Alessandro Laio, Jakob H. Macke, and Davide Zoccolan. Intrinsic dimension of data representations in deep neural networks. In NeurIPS, 2019.
  • [2] Y-Lan Boureau, Jean Ponce, and Yann LeCun. A theoretical analysis of feature pooling in visual recognition. In ICML, 2010.
  • [3] Shizhong Han, Zibo Meng, James T. O’reilly, Jie Cai, Xiaofeng Wang, and Yan Tong. Optimizing filter size in convolutional neural networks for facial action unit recognition. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5070–5078, 2018.
  • [4] Demis Hassabis, Dharshan Kumaran, Christopher Summerfield, and Matthew M Botvinick. Neuroscience-inspired artificial intelligence. Neuron, 95:245–258, 2017.
  • [5] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. 2015 IEEE International Conference on Computer Vision (ICCV), pages 1026–1034, 2015.
  • [6] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778, 2016.
  • [7] David H. Hubel and Torsten N. Wiesel. Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex. The Journal of physiology, 160:106–54, 1962.
  • [8] Takumi Kobayashi. Gaussian-based pooling for convolutional neural networks. In NeurIPS, 2019.
  • [9] Takumi Kobayashi. Global feature guided local pooling. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pages 3364–3373, 2019.
  • [10] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012.
  • [11] Hung Le and Ali Borji. What are the receptive, effective receptive, and projective fields of neurons in convolutional neural networks? ArXiv, abs/1705.07049, 2017.
  • [12] Yann LeCun, Koray Kavukcuoglu, and Clément Farabet. Convolutional networks and applications in vision. Proceedings of 2010 IEEE International Symposium on Circuits and Systems, pages 253–256, 2010.
  • [13] Chen-Yu Lee, Patrick Gallagher, and Zhuowen Tu. Generalizing pooling functions in cnns: Mixed, gated, and tree. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40:863–875, 2018.
  • [14] Yegang Lee, Heechul Jung, Dongyoon Han, Kyungsu Kim, and Junmo Kim. Learning receptive field size by learning filter size. 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 1203–1212, 2019.
  • [15] Wenjie Luo, Yujia Li, Raquel Urtasun, and Richard S. Zemel. Understanding the effective receptive field in deep convolutional neural networks. In NIPS, 2016.
  • [16] Maximilian Riesenhuber and Tomaso A. Poggio. Just one view: Invariances in inferotemporal cell tuning. In NIPS, 1997.
  • [17] Faraz Saeedan, Nicolas Weber, Michael Goesele, and Stefan Roth. Detail-preserving pooling in deep networks. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9108–9116, 2018.
  • [18] Thomas Serre and Tomaso A. Poggio. A neuromorphic approach to computer vision. Commun. ACM, 53:54–61, 2010.
  • [19] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1–9, 2015.
  • [20] Mingxing Tan and Quoc V. Le. Efficientnet: Rethinking model scaling for convolutional neural networks. In ICML, 2019.
  • [21] Jumpei Ukita, Takashi Yoshida, and Kenichi Ohki. Characterization of nonlinear receptive fields of visual neurons by convolutional neural network. bioRxiv, 2018.
  • [22] Qi Yan, Yajing Zheng, Shanshan Jia, Yichen Zhang, Zhaofei Yu, Feng Chen, Yonghong Tian, Tiejun Huang, and Jian K. Liu. Revealing fine structures of the retinal receptive field by deep learning networks. IEEE Transactions on Cybernetics, 2020.

Appendix A

A.1 Adaptive pooling

Let us consider a set of intervals over the input's domain $[0,1]$, $I_j(\lambda)$, $j = 1, \dots, m$, where $\lambda$ is a generic set of parameters describing arbitrary interval features. Given the input signal $x(t)$, we define the receptive field densities as

$\phi_j(t; \beta, \mathbf{x}) = \frac{e^{\beta x(t)}\, \mathbb{1}_{I_j(\lambda)}(t)}{\int_{I_j(\lambda)} e^{\beta x(s)}\, ds}. \qquad (23)$

Hereafter, we show the details of the integration of $\phi_j$ on the interval $\left[\frac{k-1}{n}, \frac{k}{n}\right]$, where $n$ is a natural number indicating the dimension of the input space.

Let us first consider an arbitrary interval $I_j(\lambda)$:

$\Phi_{jk} = \int_{\frac{k-1}{n}}^{\frac{k}{n}} \phi_j(t; \beta, \mathbf{x})\, dt = \frac{\int_{\frac{k-1}{n}}^{\frac{k}{n}} e^{\beta x(t)}\, \mathbb{1}_{I_j(\lambda)}(t)\, dt}{\int_{I_j(\lambda)} e^{\beta x(s)}\, ds}. \qquad (24)$

Since the integration domain at the numerator is $\left[\frac{k-1}{n}, \frac{k}{n}\right]$ and $x(t) = x_k$ on that interval, we get:

$\Phi_{jk} = \frac{e^{\beta x_k} \int_{\frac{k-1}{n}}^{\frac{k}{n}} \mathbb{1}_{I_j(\lambda)}(t)\, dt}{\int_{I_j(\lambda)} e^{\beta x(s)}\, ds}. \qquad (25)$

Since $\left[\frac{k-1}{n}, \frac{k}{n}\right)$ is a subset of $I_j(\lambda)$ (the integral vanishes when it lies outside), the last expression can be rewritten as

$\Phi_{jk} = \frac{\frac{1}{n}\, e^{\beta x_k}}{\sum_{k' :\, [\frac{k'-1}{n}, \frac{k'}{n}) \subseteq I_j(\lambda)} \frac{1}{n}\, e^{\beta x_{k'}}}. \qquad (26)$

Therefore,

$\Phi_{jk} = \frac{e^{\beta x_k}}{\sum_{k' :\, [\frac{k'-1}{n}, \frac{k'}{n}) \subseteq I_j(\lambda)} e^{\beta x_{k'}}}. \qquad (27)$

By using this relation, we can easily compute $(\Phi \mathbf{x})_j$ for the adaptive pooling scenario described in Section 3.4:

$(\Phi \mathbf{x})_j = \sum_{k :\, [\frac{k-1}{n}, \frac{k}{n}) \subseteq I_j(\lambda)} w_k\, x_k, \qquad (28)$

where $w_k = e^{\beta x_k} \big/ \sum_{k'} e^{\beta x_{k'}}$.

A.2 The expression of $\Phi$ for the logistic-embedding layer

Let us consider input images $X \in \mathbb{R}^{n \times n}$, where we indicate the elements of the image with $X_{k_1 k_2}$, $k_1, k_2 = 1, \dots, n$. As described in Section 4.1, the set of densities $\phi_{j_1 j_2}$, $j_1, j_2 = 1, \dots, m$, is

$\phi_{j_1 j_2}(t_1, t_2) = f\!\left(t_1; \mu^{(1)}_{j_1 j_2}, s^{(1)}_{j_1 j_2}\right) f\!\left(t_2; \mu^{(2)}_{j_1 j_2}, s^{(2)}_{j_1 j_2}\right), \qquad (29)$

where $f$ is the logistic density defined in (18). Even if $\mu$ and $s$ could be learned directly, to ensure that they are both bounded in the interval $[0,1]$ we learn their logits $\tilde{\mu}$ and $\tilde{s}$. More specifically, we computed $\mu$ and $s$ as

$\mu = \sigma(\tilde{\mu}), \qquad s = \sigma(\tilde{s}). \qquad (30)$

We set and .

The function $f$ is the density of the logistic distribution, and its cumulative distribution is the logistic function $\sigma$:

$\sigma(t; \mu, s) = \frac{1}{1 + e^{-(t - \mu)/s}}. \qquad (31)$

We compute $\Phi$ as shown in Section 2.2:

$\Phi_{j_1 j_2 k_1 k_2} = \left[\sigma\!\left(b_{k_1}; \mu^{(1)}_{j_1 j_2}, s^{(1)}_{j_1 j_2}\right) - \sigma\!\left(a_{k_1}; \mu^{(1)}_{j_1 j_2}, s^{(1)}_{j_1 j_2}\right)\right] \left[\sigma\!\left(b_{k_2}; \mu^{(2)}_{j_1 j_2}, s^{(2)}_{j_1 j_2}\right) - \sigma\!\left(a_{k_2}; \mu^{(2)}_{j_1 j_2}, s^{(2)}_{j_1 j_2}\right)\right], \qquad (32)$

where $a_k = \frac{k-1}{n}$ and $b_k = \frac{k}{n}$. The computation of both factors is trivial, since we have the primitive of $f$:

$\int f(t; \mu, s)\, dt = \sigma(t; \mu, s) + C. \qquad (33)$

To generalize this expression to the case of images with multiple channels, we can either learn a different receptive field for every channel or share the same receptive field for all the channels. The former case is obtained by defining a set of densities for every channel $c$:

(34)

so that we obtain one tensor $\Phi^{(c)}$ per channel, and the receptive field action on $X$ is computed by contracting each $\Phi^{(c)}$ with the corresponding channel of $X$. The other option is to share the same receptive field for all the channels: in this case we compute $\Phi$ as described above, and the receptive field action on $X$ is simply given by contracting the same $\Phi$ with every channel.

We highlight that performing tensor-tensor multiplication in PyTorch is extremely straightforward, thanks to the method torch.einsum().
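For instance, with illustrative shapes, the two channel-handling options described above each reduce to a single einsum call:

```python
import torch

C, m, n = 3, 5, 28
X = torch.rand(C, n, n)                                       # multi-channel image

# Per-channel receptive fields: one Phi per channel, contracted channel-wise.
phi_per_channel = torch.rand(C, m, m, n, n)                   # Phi^(c)[j1, j2, k1, k2]
out_per_channel = torch.einsum('cjikl,ckl->cji', phi_per_channel, X)   # (C, m, m)

# Shared receptive field: a single Phi applied to every channel.
phi_shared = torch.rand(m, m, n, n)
out_shared = torch.einsum('jikl,ckl->cji', phi_shared, X)              # (C, m, m)
```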

A.3 Initialization of the logistic-embedding layer

The parameters of every logistic-embedding layer, i.e. the logits $\tilde{\mu}$ and $\tilde{s}$, have been initialized by sampling from normal distributions.

A.4 Training time comparison

We measured the average time needed to train each model for one epoch on the MNIST dataset. Means and standard deviations have been computed over ten runs, under the experimental conditions described in Section 4. The results are reported in Figure 2.

Figure 2: Training time comparison. Mean and standard deviation have been computed over 10 runs.

A.5 Experiment on CIFAR10

We collected results on the CIFAR10 dataset with the same models and training settings described in Section 4. The results are reported in Table 2.

Model | Error (%) (5 runs) | # parameters
FC (no hidden layer)
FC (1 hidden layer)
Logistic-EL (m = 3)
Logistic-EL (m = 5)
Logistic-EL (m = 8)
Logistic-EL (m = 15)
Logistic-EL with mNN (m = 6)
Logistic-EL with mNN (m = 8)
Logistic-EL with mNN (m = 10)
Table 2: Performance comparison on CIFAR10.