Convolutional neural networks (CNNs) have demonstrated tremendous success in recent years for a large range of applications, particularly for prediction using structured data. Despite such successes, a major challenge with leveraging convolutional neural networks is the sheer number of learnable parameters within such networks, making understanding and gaining insights about them a daunting task. As such, researchers are actively trying to gain better insights and understanding into the representational properties of convolutional neural networks, especially since it can lead to better design and interpretability of such networks.
One direction that holds a lot of promise in improving understanding of convolutional neural networks, but is much less explored than other approaches, is the construction of theoretical models and interpretations of such networks. Current approaches to neural network interpretation include Bayesian probabilistic interpretations and information theoretic interpretations [25, 19, 18]. Such theoretical models and interpretations could help guide and motivate design decisions of convolutional neural networks.
Motivated by this direction, in this study we take a different approach to interpreting and gaining insight into the nature of convolutional neural networks through the use of finite transformation semigroup theory. To achieve this goal, we introduce an abstract algebraic interpretation of convolutional operations, enabled by a novel approach for interpreting the convolutional layers in convolutional neural networks as finite transformation semigroups. This allows us to study the basic properties of the resulting finite transformation semigroup from an abstract algebraic perspective to gain insights on the representational properties of convolutional layers.
The remainder of this paper is organized as follows. First, Section 2 reviews convolution, discusses challenges of network analysis, introduces efforts in quantized neural network representation, and defines the notion of finite transformation semigroups. Then, Section 3 proposes a novel technique for interpreting convolutional layers within convolutional neural networks as finite transformation semigroups. Next, Section 4 analyzes properties of the proposed abstract algebraic interpretation of convolutional neural networks. Section 5 studies the effect of using the proposed abstract algebraic interpretation with different numbers of states in the finite transformation semigroups to model convolution operations and the associated effects on network performance. Finally, Section 6 summarizes the results and suggests future areas of research.
In this section, we review convolutions in the context of convolutional neural networks, discuss the challenges of analyzing convolutional neural networks, review efforts in quantized neural network representation, and mathematically define the notion of finite transformation semigroups.
2.1 Convolutions in Convolutional Neural Networks
Convolutions are not only a key aspect of convolutional neural networks, but also the major computational workhorse of such networks. As such, a better understanding of what they are computing, and how they can be represented and analyzed, would allow for a host of interesting insights that can drive better design, ranging from more efficient network architecture designs and representations [13, 1, 9, 7, 17, 22, 24, 21] to new network architectures with greater representational capacity for improved accuracy [6, 26].
Convolutional neural networks are primarily built from a series of stacked convolutional layers, each of which is parameterized by sets of weights contained on some lattice structure. The lattice structure of the weights depends on the type of data the convolutional neural network is designed for. Convolution operations can be described by

$$h_j^l = \sum_{i=1}^{N} W_{i,j}^l * f_i^l \qquad (1)$$

where $l$ is the index of the current layer in a given network, $i$ is the index of the input channel being operated on, $j$ is the index of the output channel, $f_i^l$ is the feature map being operated on, $N$ is the number of input feature maps, and $h_j^l$ is the output feature map. Note that a bias term, which is typically added to the entire output channel, is omitted from Equation 1 and is not considered as part of convolutional operations in this work. Other convolution techniques include dilated convolution, depthwise separable convolution, and rotation equivariant convolution.
2.2 Network Analysis
Convolutional neural networks in general are quite difficult to analyze. With an enormous number of trainable parameters, it can be extremely difficult to determine which components of an input led to any given prediction, and more difficult still to determine what features a network has learned to detect. Some convolutional neural networks are built for data that cannot be easily mapped to a regular lattice. The lack of well-defined regular lattice structures in the data representation increases the difficulty of analyzing their internal behavior. An issue with such convolutional neural networks is that the network structure can only be applied to a specific graph structure that is designed for a particular instantiation of the data being analyzed [16, 2]. Generalized analysis techniques that can be applied to analyze heterogeneous network structures are highly desired, as they would allow for greater insights into network behavior.
Some current approaches to neural network interpretation include Bayesian probabilistic interpretations and information theoretic interpretations [25, 19, 18]. For example, a Bayesian interpretation was explored from the perspective of Kolmogorov's representation of a multivariate response surface, where operations within a neural network can be seen as a superposition of univariate activation functions applied to an affine transformation of the input variable. In [19, 18], an information theoretic interpretation of neural networks was explored, where networks are quantified by the mutual information between layers in the network as well as the input and output variables. Further investigation into alternative interpretations can lead to new insights beyond what these existing interpretations can provide.
2.3 Neural Network Quantized Representation
The precision of the data representation of the weights of a convolutional neural network can have a significant impact on its size and the computational complexity of inference. While training is typically conducted with floating-point data representations of weights, it has been shown that convolutional neural networks can still achieve strong inference performance using quantized representations of weights, even when data precision is reduced all the way down to 1 bit. It has also been shown that the more one quantizes the weights of a convolutional neural network, the greater the impact on modeling performance. Quantization of a network can occur after a network has been trained, allowing the quantization to take use-case and hardware considerations into account. Another approach builds network quantization directly into the training process in order to minimize the performance loss incurred from the quantization procedure [1, 5]. As such, having a better analytic understanding of the effect and tradeoffs of various levels of quantized representation can lead to better decisions on network representation design.
2.4 Finite Transformation Semigroups
Convolutional operations in a convolutional neural network can be viewed as mapping information from one state to another. A convolutional neural network is required to learn a variety of operations to perform effectively. Studying the nature of these operations is integral to understanding what a network is computing. Formalizing these operations within an algebraic framework would allow theoretical analysis to more easily be applied. Abstract algebra can serve as a useful tool in this endeavour.
Part of abstract algebra is the study of associative systems called finite transformation semigroups. A finite transformation semigroup $(X, S)$ is defined by a finite set of states $X$ and a finite semigroup $S$. The finite semigroup $S$ is a set of maps (functions) between states in $X$. All state mappings must be closed under composition. The semigroup must satisfy two properties, closure and associativity:

$$s_1 s_2 \in S, \qquad (s_1 s_2) s_3 = s_1 (s_2 s_3),$$

where $s_1, s_2, s_3 \in S$. These properties allow a semigroup to be represented using a multiplication table. A semigroup $S$ can be generated by a subset of its elements $G$, where $G \subseteq S$; all elements in $S$ are words formed by composing the generating elements. Note that the size of a semigroup is the number of unique functions, and not the number of generating elements. Two examples of finite transformation semigroups are shown in Figure 1.
Two special types of semigroups are monoids and groups. A monoid is a semigroup with an identity map $e$. An identity maps all states to themselves. All identity maps in a monoid are equivalent. A group is a monoid where all maps are reversible. That is, for every element $s$ in a group, there exists a corresponding element $s^{-1}$ such that

$$s s^{-1} = s^{-1} s = e,$$

where $s^{-1}$ is the inverse of $s$. Groups are often used to study the symmetries of systems.
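These definitions can be sketched concretely in code. The following minimal Python illustration (ours, using hypothetical 3-state maps rather than anything from the paper) represents transformations as tuples, composes them, and checks associativity and invertibility:

```python
from itertools import product

# States are 0..n-1; a transformation is a tuple t where t[x] is the image of x.
def compose(s, t):
    """Apply s first, then t (a right action: x -> t[s[x]])."""
    return tuple(t[s[x]] for x in range(len(s)))

# Two hypothetical 3-state maps: a reset (constant) map and a swap.
reset = (0, 0, 0)  # maps every state to 0
swap = (1, 0, 2)   # exchanges states 0 and 1

# Associativity holds for any transformations, since function composition
# is associative.
for a, b, c in product([reset, swap], repeat=3):
    assert compose(compose(a, b), c) == compose(a, compose(b, c))

# swap is its own inverse, so it generates a group of order 2; reset is not
# invertible, so any semigroup containing it cannot be a group.
identity = (0, 1, 2)
assert compose(swap, swap) == identity
```

Here `swap` generates a group, while any generating set containing `reset` can at most form a monoid, since `reset` collapses states irreversibly.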
One type of analysis of a finite transformation semigroup is decomposing it into irreducible components (atomic elements) of simple groups and flip-flops (i.e., two-element right-zero monoids). Decomposing a semigroup in this fashion allows a coordinate system to be built using these sub-components. One such decomposition is called the holonomy decomposition.
3 Abstract Algebraic Interpretation of Convolutional Neural Networks
A better understanding of the functions that a convolutional layer learns would aid interpretations of convolutional neural networks and aid with network design. Here, we will introduce an abstract algebraic interpretation of convolutional neural networks via finite transformation semigroup theory. To achieve this goal, we introduce a method for interpreting convolutional layers within convolutional neural networks as finite transformation semigroups to facilitate the proposed abstract algebraic interpretation. The details are described below.
3.1 Finite Space Mapping
Most of the operations performed within a standard neural network are based in floating-point arithmetic. To interpret convolutional layers as finite transformation semigroups, we must first map the floating-point state space that the convolutions act on to a finite state space. The states that the convolutions act on are defined by the values of the input feature maps. As such, one must first establish a finite space mapping scheme for mapping neural network parameters from floating-point state space to a finite state space.
In this work, the finite space mapping scheme leveraged to map a neural network parameter to a finite space can be described as follows. Let $f_{\min}$ and $f_{\max}$ control the minimum and maximum values, respectively, allowed in feature map space. All values above or below this range are clipped to $f_{\max}$ and $f_{\min}$, respectively. Let $q_{\min}$ and $q_{\max}$ be the integer values in the finite space that $f_{\min}$ and $f_{\max}$ are linearly mapped to, respectively. Values mapped from feature map space to finite space are rounded to the nearest integer. The same linear map is used for all convolutional layers in a convolutional neural network. Due to the nature of activation functions, the mean feature map value is often centered on or near the origin. For this reason, the bounds are set to be symmetric about the origin (i.e., $f_{\min} = -f_{\max}$ and $q_{\min} = -q_{\max}$).
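As a concrete sketch, the mapping scheme above can be written as follows (a minimal illustration assuming the symmetric linear map; the names `f_max` and `q_max` are ours, not the paper's notation):

```python
import numpy as np

def to_finite_space(x, f_max, q_max):
    """Clip feature-map values to [-f_max, f_max], then linearly map them
    onto the integers in [-q_max, q_max], rounding to the nearest integer."""
    x = np.clip(x, -f_max, f_max)
    return np.rint(x * (q_max / f_max)).astype(int)

def to_feature_space(q, f_max, q_max):
    """Approximate inverse: map integer states back to feature-map values."""
    return q * (f_max / q_max)
```

For example, with `f_max = 1.0` and `q_max = 4`, the values `[-2.0, 0.3, 0.9]` map to the states `[-4, 1, 4]`: the out-of-range value is clipped, and the rest are scaled and rounded.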
3.2 Finite Feature Map Elements as a State Space
[Table 1: generator actions on the current state $x$, for the states $-r, -r+1, -r+2, \ldots, -1, 0, 1, \ldots, r-2, r-1, r$.]
To interpret convolutional operations as a finite transformation semigroup, we need to identify the generators of the semigroup and the states they act on. One possible option for a state space is to use the entire set of feature maps. This option has two issues: i) the high dimensionality of a single feature map, and ii) the change in feature map dimensionality throughout a given network. A single feature map (the input to a convolutional layer) has 3 dimensions (height, width, number of channels) and can consist of thousands of values. As an example, let us consider a feature map in a residual convolutional network with 20 layers (i.e., ResNet-20) trained on the CIFAR-10 dataset. The largest feature map in this network has thousands of elements, and the ResNet-20 CIFAR-10 network is already considered to be a small network by current standards. As such, using the entire set of feature maps as the state space is computationally impractical. In addition, changing feature map sizes would require either allowing reshaping elements in the semigroup, or zero padding smaller feature maps to match larger feature map sizes, thus greatly increasing the complexity of the interpretation.
Another option for the state space is to use the individual components of the input feature maps. The individual components are single-value elements of the feature map matrix. By selecting the state space as such, we are modeling the finest scale of functions in the network that are used to construct more complex functions. Modeling these building-block functions also allows their changing characteristics throughout a network to be analyzed. For example, in a convolutional neural network designed for visual perception, one can leverage the aforementioned approach to study and analyze whether the functions that detect lines and edges in the lower levels of a network are similar to the functions that detect object-level abstractions near the top level of a network.
Using the first option allows for a better view of how a convolutional neural network is behaving from an input-sample perspective. That is, one could determine what computations are being used to detect specific features, and could investigate the inter-dependencies between the nature of computation and specific feature patterns. However, this option is simply too computationally expensive. On the other hand, the second option of using the feature map components as the state space allows for fine-grained analysis of the functions that a network learns to detect features. In addition, this option is computationally inexpensive, relatively speaking, as each state is a one-element vector. For this study, feature map components will be used as the state space of the finite transformation semigroup.
3.3 Convolutions as Semigroup Actions
A single value in an output feature map requires $2 k_h k_w$ values, where $k_h$ and $k_w$ are the height and width of kernels in layer $l$, respectively: $k_h k_w$ elements come from a feature map, and the other $k_h k_w$ elements come from a convolutional kernel. Note that this assumes a regular 2-dimensional lattice; other lattices may be used but are not explored in this work. Mapping $2 k_h k_w$ values to a single value is in conflict with finite transformation semigroup theory, as the functions within the semigroup must map a single state to another state. The convolution operation as a whole maps a combination of states to a single state in a different space. In the proposed abstract algebraic interpretation of convolutional neural networks, the convolution operation is therefore broken up into sub-convolution operations. These sub-operations occur when a single lattice of learned parameters is applied to a $k_h \times k_w$ subset of features within a feature map channel. The center value in the lattice is taken to be the state on which the semigroup action is being applied. Let this state be denoted as $x_{l,c,e}$, where $l$ is the layer within a network, $c$ is the channel index in layer $l$, and $e$ is the element index within channel $c$. The remaining elements define the properties of the action. The convolution operation can be interpreted as a finite linear operation of the form
$$s(x) = \operatorname{clip}_r(a x + b) \qquad (2)$$

where $x$ is the quantized state being acted on, $a$ and $b$ are representations of the semigroup action, and $\operatorname{clip}_r$ bounds the result to $\pm r$. In this context, $x$ is the state, $a$ is a multiplicative action on $x$, and $b$ is an additive (or subtractive) action on the result of the first action. In multiplicative semigroup notation, $s$ can be written as

$$s = m_a \, g^b \qquad (3)$$

where $m_a$ and $g$ are actions representing a possible quantized multiplication operation and quantized addition operations, respectively. Figure 2 demonstrates an example of mapping a convolution operation to a semigroup action.
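To make this concrete, the following sketch (our illustration, assuming a 3×3 kernel on a regular lattice and integer-quantized values) applies one sub-convolution as the action clip(a·x + b) on the quantized centre state:

```python
import numpy as np

def sub_convolution_action(patch, kernel, r):
    """One sub-convolution on a quantized 3x3 patch of a feature map channel:
    the centre element of the patch is the state x, the centre kernel weight
    is the multiplicative action a, and the remaining kernel-input products
    sum to the additive action b. The result is clipped to [-r, r]."""
    x = int(patch[1, 1])
    a = int(kernel[1, 1])
    b = int(np.sum(patch * kernel)) - a * x
    return max(-r, min(r, a * x + b))
```

By construction, `a * x + b` equals the ordinary convolution sum over the patch, so the action reproduces the convolution output up to clipping; the decomposition merely singles out the centre element as the state being acted on.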
3.4 Semigroup Generators
Let $(X, S)$ represent the proposed finite transformation semigroup, where $X$ is the state space with $n = 2r + 1$ states and $S$ is the semigroup acting on $X$. The generators of $S$ used to model convolution operations are a set of four base generators required for all $n$, together with a set of prime multiplication generators.
The types of generators in the semigroup are shown in Table 1. The increment generator adds one to all states, the zero generator maps all states to the origin, the identity generator is a do-nothing operation, and the negative generator symmetrizes states about the origin. Note that the identity element makes $S$ a monoid. In addition, a decrement operation can be formed by composing the negative, increment, and negative generators. A multiplication generator multiplies all states by some number $k$; the only required multiplication generators are those where $k$ is prime and $k \le r$, since multiplication by a composite number can be formed by composing prime multiplication generators. An example automaton of $S$ is shown in Figure 4. Table 3 shows an example of $S$ in multiplication table format.
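The generator set can be sketched in code as follows (a minimal illustration under our assumptions: the increment saturates at $r$, multiplications clip to $[-r, r]$, and the required primes are those up to $r$; the function and key names are ours):

```python
def make_generators(r):
    """Generators of the semigroup acting on states X = {-r, ..., r}.
    A transformation t is a tuple with t[x + r] = image of state x."""
    states = range(-r, r + 1)
    clip = lambda v: max(-r, min(r, v))
    as_map = lambda f: tuple(f(x) for x in states)
    gens = {
        "inc":  as_map(lambda x: clip(x + 1)),  # add one (saturating at r)
        "zero": as_map(lambda x: 0),            # send every state to 0
        "id":   as_map(lambda x: x),            # do nothing
        "neg":  as_map(lambda x: -x),           # reflect about the origin
    }
    for p in (p for p in range(2, r + 1) if all(p % d for d in range(2, p))):
        gens[f"mul{p}"] = as_map(lambda x, p=p: clip(p * x))
    return gens

def generate(gens, r):
    """Brute-force closure of the generator set under composition."""
    compose = lambda s, t: tuple(t[s[i] + r] for i in range(len(s)))
    elems = set(gens.values())
    frontier = set(elems)
    while frontier:
        new = {compose(s, g) for s in frontier for g in gens.values()} - elems
        elems |= new
        frontier = new
    return elems
```

The decrement word, for instance, arises as negative · increment · negative, and `len(generate(make_generators(r), r))` counts the elements of the generated semigroup for small `r`.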
The proposed family of finite transformation semigroups has a variable number of generators depending on the number of states $n$ in $X$. Here, we investigate the size of $S$ as a function of $n$. An approximation of $S$ is then proposed to significantly reduce the number of elements. Finally, the holonomy decomposition of $S$ is briefly explored.
4.1 Semigroup Size
The semigroup $S$ includes an additional generator for every new prime number less than or equal to $r$. Table 2 shows three properties of $S$: the number of generators, the semigroup size, and the number of bits required to represent the number of states in $X$. The number of bits required to represent the states in $X$ is $\lceil \log_2 n \rceil$. The $n$'s shown in Table 2 reflect the largest $n$ that can be used for the corresponding number of bits. The number of generators required clearly increases with $n$, since all primes less than or equal to $r$ are required as multiplication generators. The size of $S$ increases by approximately two orders of magnitude with every bit added over the range shown.
4.2 Subsemigroups of $S$
$S$ quickly grows as $n$ increases. Reducing the size of $S$ would be beneficial to reduce the number of computations required during analysis of the semigroup. $S$'s size increases as $n$ increases since the number of states in the system increases linearly with $n$, and more generators are used to construct the semigroup for larger $n$. It is possible to approximate $S$ by selecting a subset of the semigroup's elements. A specific type of subset, called a subsemigroup, may be formed by removing generators from $S$'s definition. Let $S_k$ be a subsemigroup of $S$ (i.e., $S_k \le S$) where all prime generators less than or equal to $k$ are used. If $k \ge r$ then $S_k = S$.
Figure 3 compares the size of $S$ to the sizes of its subsemigroups $S_k$. For larger $n$ there is a significant reduction in the number of elements. This result is surprising, as subsemigroups with adjacent $k$ differ by only one generator. The increase in size diminishes with each additional generator added. The case where $k = 1$ is interesting from a practical perspective. When $k = 1$, the only types of multiplication actions allowed on the state are by $-1$, $0$, or $1$. This subsemigroup would then require setting the center of each convolution kernel to one of those three values. Notice that by limiting the value of only one element in all convolution kernels to three values, the majority of all elements in $S$ are wiped out.
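As an illustrative toy computation (our sketch, with the saturating generators assumed above and $r = 2$; the sizes computed here are not the paper's counts), dropping the prime multiplication generators strictly shrinks the generated semigroup:

```python
def closure(gens, r):
    """Set of all transformations generated by gens under composition."""
    compose = lambda s, t: tuple(t[s[i] + r] for i in range(len(s)))
    elems, frontier = set(gens), set(gens)
    while frontier:
        new = {compose(s, g) for s in frontier for g in gens} - elems
        elems |= new
        frontier = new
    return elems

r = 2
clip = lambda v: max(-r, min(r, v))
as_map = lambda f: tuple(f(x) for x in range(-r, r + 1))
# Base generators: increment (saturating), zero, identity, negative.
base = [as_map(lambda x: clip(x + 1)), as_map(lambda x: 0),
        as_map(lambda x: x), as_map(lambda x: -x)]
with_mul2 = base + [as_map(lambda x: clip(2 * x))]  # add the prime generator 2

s1, s2 = closure(base, r), closure(with_mul2, r)
assert s1 < s2  # the subsemigroup is a strict subset of the full semigroup
```

The strict inclusion holds because every composition of the base generators is a 1-Lipschitz map on the states, while multiplication by 2 is not, so it can never be expressed as a word in the base generators alone.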
4.3 Transformation Semigroup Decomposition
The size of $S$ quickly grows as $n$ increases. Decomposing its structure into a coordinate system would allow for easier analysis. The holonomy decomposition is used to decompose both $S$ and its subsemigroups for various $n$'s. The decomposition of $S$ produced a surprising result: the only two types of building blocks of $S$ are flip-flops and 2-cycle groups. This property of the decomposition is reflected in the generators of $S$. All but one of $S$'s generators act like, or contain many copies of, fuse-like structures, in that the generators, when individually applied multiple times, end up at a trivial cyclic state. The increment generator eventually sends all states to $r$, the zero generator instantly kills all states, and all multiplication generators eventually send all states to either $-r$ or $r$ depending on initial conditions. The negative generator is responsible for the cyclic group element in the decomposition: applying it twice returns every state to its initial value.
The coordinates generated from the decomposition of $S$ are quite complex, and the coordinate system quickly grows in depth as $n$ increases. For example, one decomposition of $S$ produces a cascade semigroup with 6 generators, and has 9 levels with (4, 5, 5, 9, 5, 5, 5, 4, 3) points in each coordinate dimension, respectively. Moreover, the decomposition of $S$ results in multiple sets of tiles on a single level. Clearly, decomposing larger $S$ results in huge coordinate systems.
Decomposition of the subsemigroup $S_1$ produces a much cleaner and easier-to-compute deconstruction. For example, one decomposition of $S_1$ produces a cascade semigroup with 4 generators and 6 levels, with (3, 3, 3, 3, 3, 3) points in each coordinate dimension, respectively. Regardless of $n$'s value, decomposition of $S_1$ produces this simple coordinate system.
To explore the proposed notion of studying convolutional neural networks using an abstract algebraic interpretation via finite transformation semigroup theory, we perform two experiments where we study convolutional neural networks using the proposed interpretation to gain insights into quantized network representation.
5.1 Experiment 1: Effects of the Number of States and the Clipping Range on Representational Performance
In the first experiment, we constructed a convolutional neural network and converted it into interpretations with different numbers of states in the finite transformation semigroup. This allows us to study the effect of using interpretations at different precision levels for quantized representations of convolutional operations and the associated effects on representational performance.
The convolutional neural network used in this first experiment is based on the LeNet-5 convolutional neural network architecture, which leverages conventional convolutional layer configurations. A test accuracy of 98.1% was achieved on MNIST. The consequences on representational performance of using interpretations with different numbers of states (corresponding to different precision levels for the convolutional neural network) are studied by varying two parameters: the number of states in $X$, as determined by $r$, and the range of feature-map values kept, as determined by the clipping range $c$. Let $N_{c,r}$ denote the resulting convolutional neural network associated with $(X, S)$, with convolution operations constrained by parameters $c$ and $r$. The results are shown in Table 4. Note that individual convolution operations operate in a finite state space, but when their results are added together they may exceed the finite operating bounds.
It can be observed that by using appropriate $c$ and $r$, the interpreted network $N_{c,r}$ is able to maintain the representational performance of the original network. For the same finite transformation semigroup interpretation, different levels of representational performance are achieved by varying $c$. The more states in $X$ (i.e., more bits of precision), the greater the performance $N_{c,r}$ achieves. However, this trend does not hold universally: a decrease in performance is observed between two of the tested precision levels. Further investigation of this performance degradation will be required.
As $r$ increases, there is a point at which the representational performance ceases to be random chance. Surprisingly, the increase in representational performance is a quick jump instead of a gradual increase, potentially indicating that the specific computational mechanisms learned by the LeNet-5 convolutional neural network have minimal redundancies. In other words, the features that the convolutional neural network is learning are similar in nature, and when one feature detector is affected by a loss of precision, other feature detectors are equally affected.
Notice that for any given clipping range $c$ and number of states $n$, the bin size resolution can be calculated. For a fixed number of states and an increasing clipping range, the bin size resolution decreases, and a corresponding decrease in performance is observed for smaller numbers of states. The performance decrease indicates that a sufficient bin size resolution (i.e., computational precision) around a specific range of values is required to maintain representational performance, which can be leveraged to guide quantized representation design.
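Under the symmetric linear map assumed earlier, this bin size resolution follows directly from the clipping range and the state range (a hypothetical helper; the names `c` and `r` are ours):

```python
def bin_resolution(c, r):
    """Feature-map width covered by one integer state when values clipped
    to [-c, c] are mapped onto the 2*r + 1 states {-r, ..., r}."""
    return (2 * c) / (2 * r)
```

Doubling the clipping range `c` while holding `r` fixed doubles the bin width, i.e., halves the precision, which is consistent with the performance trade-off described above.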
For a constant number of states, the performance of the interpretations varies greatly based on the clipping range $c$. The input to each convolution operation is the output of the previous operation, except for the first convolution operation. When $c$ is large enough, the input to a given convolution is complete, in that the range of numbers is not clipped when moving to the quantized domain. When the input is clipped to accommodate the range of states in $X$, the ability to maintain performance in the quantized domain is lost, since less information is carried forward.
5.2 Experiment 2: Quantized Interpretation of Residual Convolutional Neural Network
Guided by the observations made in the first experiment, we perform a second experiment where the convolutional neural network used is a residual convolutional neural network with 20 layers and pre-activation (i.e., ResNet-20), trained on the CIFAR-10 dataset with an accuracy of 90%. In particular, the proposed method is used to interpret the ResNet-20 network using a finite transformation semigroup, thus illustrating that the proposed abstract algebraic interpretation can be viable for studying a variety of convolutional neural network architectures outside of conventional convolutional layer configurations.
Given our observation in the first experiment that using an appropriate clipping range $c$ and state range $r$ for the finite transformation semigroup interpretation could enable the interpreted network to maintain representational performance in a particular quantized representation state, we choose such parameters and explore the effects on representational performance. It was observed that with a finite transformation semigroup interpretation where 8 bits are required to represent the states, the resulting network has an accuracy of 89%, and thus for the most part retains the representational performance of the original ResNet-20 at a lower precision level. As can be seen here, by leveraging a better understanding of design choices through the proposed abstract algebraic interpretation, one can make more informed decisions on representation choices.
In summary, the results of this experiment, along with that of the first experiment, show that important insights can be obtained for guiding network design and representation by studying convolutional neural networks using the proposed abstract algebraic interpretation via finite transformation semigroup theory.
6 Conclusions and Future Work
In this study, we propose an abstract algebraic interpretation using finite transformation semigroup theory for studying and gaining insights into the representational properties of convolutional neural networks. To achieve this goal and construct such an interpretation, convolutional layers are broken up and mapped to a finite space. The state space of the proposed finite transformation semigroup is then defined as a single element within the convolutional layer, with the acting elements defined as a combination of elements that surround the state with elements of a convolution kernel. Generators of the finite transformation semigroup are defined to complete the interpretation. The basic properties of the resulting finite transformation semigroup are then analyzed to gain insights on the representational properties of convolutional neural networks, including insights into quantized network representation. Two experiments conducted in this study show that important insights can be obtained for guiding network design by studying convolutional neural networks using the proposed abstract algebraic interpretation using finite transformation semigroup theory.
A number of directions for future research are apparent for the proposed abstract algebraic interpretation. The first direction is using the proposed interpretation to gain insights into the behaviour of larger, more complex convolutional neural networks beyond the initial experiments performed in this study. With the proposed representation approach, it would be possible to analyze how the distribution of mapping functions changes through a convolutional neural network. This may allow more feature-specific detection mechanisms to be designed, and may allow better network quantization. The second research direction would address the shortcomings of the proposed abstract algebraic interpretation. For example, elements generated by the increment generator and the negative generator in the current interpretation represent both the effect of surrounding states and the effect of convolution kernels. Disentangling these actions may provide additional detailed insights beyond what the current interpretation can provide.
-  Matthieu Courbariaux, Itay Hubara, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Binarized neural networks: Training deep neural networks with weights and activations constrained to+ 1 or-1. arXiv preprint arXiv:1602.02830, 2016.
-  Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in neural information processing systems, pages 3844–3852, 2016.
-  Attila Egri-Nagy and Chrystopher L Nehaniv. Computational holonomy decomposition of transformation semigroups. arXiv preprint arXiv:1508.06345, 2015.
-  Yunchao Gong, Liu Liu, Ming Yang, and Lubomir Bourdev. Compressing deep convolutional networks using vector quantization. arXiv preprint arXiv:1412.6115, 2014.
-  Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, and Pritish Narayanan. Deep learning with limited numerical precision. In International Conference on Machine Learning, pages 1737–1746, 2015.
-  Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
-  Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.
-  John M Howie. Fundamentals of semigroup theory. London Mathematical Society Monographs. New Series., 1995.
-  Benoit Jacob et al. Quantization and training of neural networks for efficient integer-arithmetic-only inference. arXiv preprint arXiv:1712.05877, 2017.
-  Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. Cifar-10 (canadian institute for advanced research).
-  Yann LeCun. The MNIST database of handwritten digits. http://yann.lecun.com/exdb/mnist/, 1998.
-  Yann LeCun, Léon Bottou, Yoshua Bengio, Patrick Haffner, et al. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
-  Darryl Lin, Sachin Talathi, and Sreekanth Annapureddy. Fixed point quantization of deep convolutional networks. In International Conference on Machine Learning, pages 2849–2858, 2016.
-  Nicholas G. Polson and Vadim Sokolov. Deep learning: A bayesian perspective. Bayesian Analysis, 12:1275–1304, 2017.
-  John Rhodes, Chrystopher L Nehaniv, and Morris W Hirsch. Applications of automata theory and algebra: via the mathematical theory of complexity to biology, physics, psychology, philosophy, and games. World Scientific, 2010.
-  Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61–80, 2009.
-  Mohammad Javad Shafiee, Akshaya Mishra, and Alexander Wong. Deep learning with darwin: evolutionary synthesis of deep neural networks. Neural Processing Letters, 48(1):603–613, 2018.
-  Ravid Shwartz-Ziv and Naftali Tishby. Opening the black box of deep neural networks via information. arXiv preprint arXiv:1703.00810, 2017.
-  Naftali Tishby and Noga Zaslavsky. Deep learning and the information bottleneck principle. arXiv preprint arXiv:1503.02406, 2015.
-  Bastiaan S Veeling, Jasper Linmans, Jim Winkens, Taco Cohen, and Max Welling. Rotation equivariant cnns for digital pathology. In International Conference on Medical image computing and computer-assisted intervention, pages 210–218. Springer, 2018.
-  Alexander Wong, Zhong Qiu Lin, and Brendan Chwyl. Attonets: Compact and efficient deep neural networks for the edge via human-machine collaborative design. arXiv preprint arXiv:1903.07209, 2019.
-  Alexander Wong, Mohammad Javad Shafiee, Brendan Chwyl, and Francis Li. Ferminets: Learning generative machines to generate efficient neural networks via generative synthesis. arXiv preprint arXiv:1809.05989, 2018.
-  Fisher Yu and Vladlen Koltun. Multi-scale context aggregation by dilated convolutions. arXiv preprint arXiv:1511.07122, 2015.
-  Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, and Jian Sun. Shufflenet: An extremely efficient convolutional neural network for mobile devices. arXiv preprint arXiv:1707.01083, 2017.
-  Tianchen Zhao. Information theoretic interpretation of deep learning. arXiv preprint arXiv:1803.07980, 2018.
-  Barret Zoph and Quoc V Le. Neural architecture search with reinforcement learning. arXiv preprint arXiv:1611.01578, 2016.