I Introduction
In recent years, deep learning solutions have gained popularity in several embedded applications, ranging from robotics to autonomous driving [binarydad]. This holds especially for computer vision based algorithms [googlenet]. These algorithms are typically demanding in terms of their computational complexity and their memory footprint. Together, these two aspects emphasize the importance of constraining neural networks in model size and computations for efficient deployment on embedded hardware. The main goal of model compression techniques lies in reducing redundancy while keeping the desired optimization target in mind.

Handcrafted heuristics are applied in the field of model compression; however, they severely reduce the search space and can result in a poor choice of design parameters
[learn_weights]. Particularly for convolutional neural networks (CNNs) with numerous layers, such as ResNet1K [resnet], handcrafted optimization is prone to result in a suboptimal solution. In contrast, a learning-based policy would, for example, leverage reinforcement learning (RL) to automatically explore the CNN's design space [learn_to_prune]. By evaluating a cost function, the RL-agent learns to distinguish between good and bad decisions, thereby converging to an effective compression strategy for a given CNN. However, the effort of designing a robust cost function and the time spent on model exploration render learning-based compression policies a complex method requiring domain expertise.

In this work, we propose the autoencoder-based low-rank filter-sharing technique (ALF), which generates a dense, compressed CNN during task-specific training. ALF uses the inherent properties of sparse autoencoders to compress data in an unsupervised manner. By introducing an information bottleneck, ALF-blocks are used to extract the most salient features of a convolutional layer during training, in order to dynamically reduce the dimensionality of the layer's tensors. To the best of our knowledge, ALF is the first method in which autoencoders are used to prune CNNs. After optimization, the resulting model consists of fewer filters in each layer and can be efficiently deployed on any embedded hardware due to its structural consistency. The contributions of this work are summarized as follows:

Approximation of the weight filters of convolutional layers using ALF-blocks, consisting of sparse autoencoders.

A two-player training scheme that allows the model to learn the desired task while slimming the neural architecture.

Comparative analysis of ALF against learning-based and rule-based compression methods w.r.t. known metrics, as well as layer-wise analysis on hardware model estimates.
II Related Work
Efforts from both industry and academia have focused on reducing the redundancy that emerges from training deeper and wider network architectures, with the aim of mitigating the challenges of their deployment on edge devices [resource_aware]. Compression techniques such as quantization, low-rank decomposition and pruning can potentially make CNNs more efficient for deployment on embedded hardware. Quantization aims to reduce the representation redundancy of model parameters and arithmetic [lognet, bnn, orthruspe]. Quantization and binarization are orthogonal to this work and can be applied in conjunction with the proposed ALF method. In the following sections, we classify works which use low-rank decomposition and pruning techniques into rule-based and learning-based compression.
II-A Rule-based Compression
Rule-based compression techniques are classified as having static or pseudo-static rules, which are followed when compressing a given CNN. Low-rank decomposition or representation techniques have been used to reduce the number of parameters by creating separable filters across the spatial dimension or by reducing cross-channel redundancy in CNNs [accelerating, tucker]. Handcrafted pruning can be implemented based on heuristics that compute the saliency of a neuron. Han et al. [learn_weights] determined the saliency of weights based on their magnitude, exposing the superfluous nature of state-of-the-art neural networks. Pruning individual weights, referred to as irregular pruning, leads to inefficient memory accesses, making it impractical for general-purpose computing platforms. Regularity in pruning becomes an important criterion towards accelerator-aware optimization. Frickenstein et al. [dsc] propose structured, kernel-wise magnitude pruning along with a scalable, sparse algorithm. He et al. [geometric_mean_filter] prune redundant filters using a geometric mean heuristic. Although the filter pruning scheme is useful w.r.t. hardware implementations, it is challenging to remove filters as they directly impact the input channels of the subsequent layer. Rule-based compression techniques overly generalize the problem at hand. Different CNNs vary in complexity, structure and target task, making it hard to set such one-size-fits-all rules when considering the different compression criteria.

II-B Learning-based Compression
Recent works such as [learn_to_prune] and [amc] have demonstrated that it is difficult to formalize a rule to prune networks. Instead, they expose the pruning process as an optimization problem to be solved by an RL-agent. Through this process, the RL-agent learns the criteria for pruning, based on a given cost function.
Huang et al. [learn_to_prune] represent a CNN as an environment for an RL-agent. An accuracy term and an efficiency term are combined to formulate a non-differentiable policy to train the agent. The target is to maximize the two contrary objectives. Balancing the terms can vary for different models and tasks. An agent needs to be trained individually for each layer. Moreover, layers with many channels may slow down the convergence of the agent, rendering model exploration a slow and greedy process. In the work proposed by He et al. [amc], an RL-agent prunes channels without fine-tuning at intermediate stages. This cuts down the time needed for exploration. Layer characteristics such as size, stride and number of operations serve the agent as input. These techniques require carefully crafted cost functions for the optimization agent. The formulation of the cost functions is non-trivial, requiring expert knowledge and some iterations of trial-and-error. Furthermore, deciding what the agent considers as the environment can present another variable to the process, making it challenging to test many configurations of the problem. As mentioned earlier, each neural network and target task combination presents a new and unique problem, for which this process needs to be repeated.

More recently, Neural Architecture Search (NAS) techniques have been successful in optimizing CNN models at design time. Combined with Hardware-in-the-Loop (HIL) testing, a much needed synergy between efficient CNN design and the target hardware platform is achieved. Tan et al. [mnas] propose MNASNet, which performs NAS using an RL-agent for mobile devices. Cai et al. [proxyless_nas] propose ProxylessNAS, which derives specialized, hardware-specific CNN architectures from an over-parameterized model.
Differently, Guo et al. [dynamic] dynamically prune the CNN by learnable parameters, which have the ability to recover when required. Their scheme incorporates pruning in the training flow, resulting in irregular sparsity. Zhang et al. [structadmm] incorporate a cardinality constraint into the training objective to obtain different pruning regularities. Bagherinezhad et al. [lcnn] propose Lookup-CNN (LCNN), where the model learns a dictionary of shared filters at training time. During inference, the input is then convolved with the complete dictionary, profiting from lower computational complexity. As the same accounts for filter-sharing, this weight-sharing approach is arguably the closest to the technique developed in this work. However, the methodology itself and the training procedure are still fundamentally different from one another.
Method \ Advantage                                                | No Pre-trained Model | Learning Policy | No Extensive Exploration
Rule-based Compression:
  Low-Rank Dec. [accelerating, tucker]                            | ✗ | ✗ | ✗
  Prune (Handcrafted) [learn_weights, dsc, geometric_mean_filter] | ✗ | ✗ | ✗
Learning-based Compression:
  Prune (RL-Agent) [learn_to_prune, amc]                          | ✗ | ✓ | ✗
  NAS [mnas, proxyless_nas]                                       | ✓ | ✓ | ✗
  Prune (Automatic) [dynamic, lcnn, structadmm]                   | ✓ | ✓ | ✓
ALF [Ours]                                                        | ✓ | ✓ | ✓
III Autoencoder-based Low-rank Filter-sharing
The goal of the proposed filter-sharing technique is to replace the standard convolution with a more efficient alternative, namely the autoencoder-based low-rank filter-sharing (ALF)-block. An example of the ALF-block is shown in Fig. 1.
Without loss of generality, $I \in \mathbb{R}^{H_I \times W_I \times C_i}$ is considered as an input feature map to a convolutional layer $l$ of an $L$-layer CNN, where $H_I$ and $W_I$ indicate the height and width of the input, and $C_i$ is the number of input channels. The weights $W_l \in \mathbb{R}^{K \times K \times C_i \times C_o}$ are the trainable parameters of layer $l$, where $K$ and $C_o$ are the kernel dimensions and the number of output channels respectively.
In detail, the task is to approximate the filter bank $W_l$ of a convolutional layer during training by a low-rank version $\hat{W}_l \in \mathbb{R}^{K \times K \times C_i \times \hat{C}_o}$, where $\hat{C}_o < C_o$. The low-rank version of the weights is utilized later in the deployment stage for an embedded-friendly application.
In contrast to previous structured pruning approaches [lcnn, dsc, geometric_mean_filter], this method does not intend to alter the structure of the model in a way which results in a changed dimensionality of the output feature maps $O \in \mathbb{R}^{H_O \times W_O \times C_o}$, where $H_O$ and $W_O$ indicate the height and width of the output. This is achieved by introducing an additional expansion layer [squeezenet]. The advantages are twofold. First, each layer can be trained individually without affecting the other layers. Second, it simplifies the end-to-end training and allows comparison of the learned features.
The expansion layer is comprised of point-wise convolutions with weights $W_{exp} \in \mathbb{R}^{1 \times 1 \times \hat{C}_o \times C_o}$, mapping the intermediate feature maps $\hat{O}$ after an ALF-block to the output feature map $O$, as expressed in Eq. 1.
$O = \hat{O} * W_{exp}$  (1)
As the point-wise convolution introduces a certain overhead with regard to operations and weights, it is necessary to analyze the resource demands of the ALF-block compared to the standard convolution and ensure $\hat{C}_o \leq C_o - \rho$, where $\rho$ denotes the number of filters which have to be removed to attain an efficiency improvement, see Eq. 2.
$K^2 C_i \hat{C}_o + \hat{C}_o C_o < K^2 C_i C_o \;\Leftrightarrow\; \rho > \frac{C_o^2}{K^2 C_i + C_o}$  (2)
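The break-even point of this resource analysis can be checked numerically. The following sketch is not from the paper; the function names and the example layer configuration are illustrative assumptions. It counts the weights of a standard convolution against those of a pruned convolution plus its 1x1 expansion layer:

```python
# Parameter counts for a standard K x K convolution versus an ALF-block
# followed by a 1x1 expansion layer, with C_i input channels, C_o output
# channels and c_hat filters kept after pruning.

def params_standard(k, c_i, c_o):
    """Weights of a standard k x k convolution."""
    return k * k * c_i * c_o

def params_alf(k, c_i, c_o, c_hat):
    """Weights of the pruned k x k convolution plus the 1x1 expansion layer."""
    return k * k * c_i * c_hat + c_hat * c_o

def min_filters_to_remove(k, c_i, c_o):
    """Smallest number of removed filters for which the ALF-block needs
    fewer parameters than the standard convolution."""
    for removed in range(c_o + 1):
        if params_alf(k, c_i, c_o, c_o - removed) < params_standard(k, c_i, c_o):
            return removed
    return c_o

# Example: a 3x3 layer with 64 input and 128 output channels.
print(min_filters_to_remove(3, 64, 128))  # -> 24
```

For this example layer, at least 24 of the 128 filters must be removed before the expansion layer's overhead is amortized, which illustrates why the analysis of Eq. 2 matters for shallow, narrow layers.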
III-A Autoencoder-based Low-rank Filter-sharing Block
As stated before, the autoencoder is required to identify correlations in the original weights and to derive a compressed representation from them. The autoencoder is only required in the training stage and is discarded in the deployment stage.
Referring back to Fig. 1, the autoencoder setup, including the pruning mask, is illustrated. According to the design of an autoencoder, Eq. 3 gives the complete expression for calculating the compressed weights $\hat{W}_l$. The encoder performs a matrix multiplication between the input $W_l$ and the encoder filters $W_{enc}$. The mask $m$ zeroizes elements of $\hat{W}_l$ and $f_{act}$ refers to a non-linear activation function. Different configurations of $f_{act}$, $m$ and the initialization schemes are studied in Sec. IV-A.

$\hat{W}_l = m \odot f_{act}(W_l \cdot W_{enc})$  (3)
Eq. 4 provides the corresponding formula for the reconstructed filters $\tilde{W}_l$ of the decoding stage. The symbol $\cdot$ stands for a matrix multiplication and $\odot$ for a Hadamard product respectively. The pruning mask acts as a gate, allowing only the most salient filters to appear as non-zero values in $\hat{W}_l$, in the same manner as in sparse autoencoders. The decoder must, therefore, learn to compensate for the zeroized filters to recover a close approximation of the input filter bank.

$\tilde{W}_l = f_{act}(\hat{W}_l \cdot W_{dec})$  (4)
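A minimal numpy sketch of this encode-gate-decode pass may clarify the data flow. It is an illustrative assumption, not the paper's implementation: filters are flattened to columns, `tanh` stands in for the activation, and the variable names are invented here:

```python
import numpy as np

# Encode the filter bank, gate filters with the pruning mask (Eq. 3),
# then decode back to a reconstructed filter bank (Eq. 4).
def alf_forward(w, w_enc, w_dec, mask):
    code = mask * np.tanh(w @ w_enc)   # compressed weights; gated columns become zero
    w_rec = np.tanh(code @ w_dec)      # reconstructed filters from the decoder
    return code, w_rec

rng = np.random.default_rng(0)
d, c_o = 3 * 3 * 16, 32                # flattened filter size, output channels
w = rng.standard_normal((d, c_o)) * 0.1
w_enc = rng.standard_normal((c_o, c_o)) * 0.1
w_dec = rng.standard_normal((c_o, c_o)) * 0.1
mask = (rng.random(c_o) > 0.25).astype(w.dtype)  # roughly a quarter of filters gated off

code, w_rec = alf_forward(w, w_enc, w_dec, mask)
print(np.allclose(code[:, mask == 0], 0.0))       # gated code columns are exactly zero -> True
```

The decoder sees only the gated code, which is what forces it to compensate for the zeroized filters during training.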
In order to dynamically select the most salient filters, an additional trainable parameter, denoted mask $m$, is introduced with its individual elements $m_i$. By exploiting the sparsity-inducing property of L1 regularization, individual values in the mask are driven towards zero during training. Since the optimizer usually reaches values close to zero, but not exactly zero, clipping is performed to zero out values that lie below a certain threshold $t$. Further, the clipping function allows the model to recover a channel when required.
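The clipping step can be sketched as follows. This is a hedged illustration: the threshold name `t` and the pass-through behaviour for values above the threshold are assumptions, chosen so that a mask value can grow back and "recover" a channel:

```python
import numpy as np

# Zero out mask values whose magnitude is below the threshold t;
# larger values pass through unchanged, so a filter can recover
# if its mask value is pushed up again by the optimizer.
def clip_mask(m, t):
    return np.where(np.abs(m) < t, 0.0, m)

m = np.array([0.8, 0.04, -0.02, 0.3, -0.5])
print(clip_mask(m, t=0.05))   # -> [ 0.8  0.   0.   0.3 -0.5]
```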
III-B Training Procedure
For understanding the training procedure of ALF, it is important to fully comprehend the training setup, including the two-player game of the CNN and the ALF-blocks.
Task-related Training:
The weights $W_l$ are automatically compressed by the ALF-blocks. Allowing the weights to remain trainable, instead of using fixed filters from a pre-trained model, is inspired by binary neural networks (BNNs) [bnn]. The motivation is that weights from a full-precision CNN might not be equally applicable when used in a BNN and, thus, require further training. Analogously, the weights from a pre-trained model might not be the best fit for the instances of $\tilde{W}_l$ in the filter-sharing use-case. Therefore, training these variables is also part of the task optimizer's job. Its objective is the minimization of the loss function $L_{task}$, which is the accumulation of the cross-entropy loss $L_{CE}$ of the model's prediction and the label of an input image, and the weight decay scaling factor multiplied with the L2 regularization loss.

It is important to point out that no regularization is applied to the instances of either $\hat{W}_l$ or $\tilde{W}_l$. Even though each $\tilde{W}_l$ contributes to a particular convolution operation with a considerable impact on the task loss $L_{task}$, the task optimizer can influence this variable only indirectly by updating $W_l$. As none of the variables $W_{enc}$, $W_{dec}$ or $m$ of the appended autoencoder are trained based on the task loss, they introduce a sufficiently high amount of noise, which arguably makes any further form of regularization more harmful than helpful.
In fact, the noise introduced by the autoencoder variables does affect the gradient computation for updating the variable $W_l$. As this might hamper the training progress, the Straight-Through Estimator (STE) [bnn] is used as a substitute for the gradients of the Hadamard product with the pruning mask, as well as for the multiplication with the encoder filters. This ensures that the gradients for updating the input filters can propagate through the autoencoder without extraneous influence.
This measure is especially important in the case of the Hadamard product, as a significant amount of weights in $m$ might be zero. When including this operation in the gradient computation, a correspondingly large proportion of the gradients would be zeroized as a result, impeding the information flow in the backward pass. Nevertheless, this problem can be resolved by using the STE. In Eq. 5, the gradients for the variable $W_l$ of a particular ALF-block are derived.
$\frac{\partial L_{task}}{\partial W_l} = \frac{\partial L_{task}}{\partial \tilde{W}_l} \cdot \frac{\partial \tilde{W}_l}{\partial \hat{W}_l} \cdot \frac{\partial \hat{W}_l}{\partial W_l} \overset{\text{STE}}{=} \frac{\partial L_{task}}{\partial \tilde{W}_l}$  (5)
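The effect of the STE on the mask's Hadamard product can be shown with a toy example. This is purely illustrative and not the paper's code; it contrasts the exact gradient, which dies wherever the mask is zero, with the straight-through gradient that keeps the learning signal alive for zeroized filters:

```python
import numpy as np

def forward(w, mask):
    return w * mask                 # Hadamard gating in the forward pass

def grad_w_exact(grad_out, mask):
    return grad_out * mask          # true gradient: zero where the mask is zero

def grad_w_ste(grad_out, mask):
    return grad_out                 # STE: gradient passes straight through the gate

mask = np.array([1.0, 0.0, 1.0])    # second filter is currently pruned
g = np.array([0.5, 0.5, 0.5])       # upstream gradient
print(grad_w_exact(g, mask))        # -> [0.5 0.  0.5]
print(grad_w_ste(g, mask))          # -> [0.5 0.5 0.5]
```

With the exact gradient, a pruned filter would never receive updates again; the STE is what keeps recovery possible.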
Autoencoder Training: Each autoencoder is trained individually by a dedicated SGD optimizer, referred to as an autoencoder optimizer. The optimization objective of such an optimizer lies in minimizing the loss function $L_{AE} = L_{rec} + \gamma L_{mask}$. The reconstruction loss is computed using the MSE metric and can be expressed by $L_{rec} = \lVert W_l - \tilde{W}_l \rVert_2^2$. In the field of knowledge distillation [CRD], similar terms are studied. The decoder must learn to compensate for the zeroized filters to recover a close approximation of the input filters. If many values in $m$ are zero, a large percentage of filters are pruned. To mitigate this problem, the mask regularization function $L_{mask} = \lVert m \rVert_1$ is multiplied with a scaling factor $\gamma$, which decays with an increasing zero fraction in the mask and slows down the pruning rate towards the end of the training. In detail, $\gamma$ adopts the pruning sensitivity [learn_weights] of convolutional layers, parameterized by the slope of the sensitivity, the maximum pruning rate and the zero fraction, i.e. the share of zero filters in $\hat{W}_l$. As a consequence, the regularization effect decreases and fewer filters are zeroized in the code eventually. In other words, the task of the autoencoder is to imitate $W_l$ by $\tilde{W}_l$ by learning $W_{enc}$, $W_{dec}$ and $m$, while the mask regularization steadily tries to prune further channels.
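The composite objective can be sketched in a few lines. This is a hedged illustration under the notation above; the symbol `gamma` and the concrete values are assumptions, and the paper additionally decays this factor with the zero fraction of the mask, which is omitted here:

```python
import numpy as np

# Autoencoder objective: MSE reconstruction term plus an L1
# mask-regularization term weighted by a scaling factor gamma.
def ae_loss(w, w_rec, mask, gamma):
    l_rec = np.mean((w - w_rec) ** 2)   # reconstruction loss (MSE)
    l_mask = np.sum(np.abs(mask))       # L1 regularization driving mask values to zero
    return l_rec + gamma * l_mask

w = np.array([[1.0, -1.0], [0.5, 0.0]])
w_rec = np.array([[0.9, -1.1], [0.4, 0.1]])
mask = np.array([0.7, 0.0])
print(ae_loss(w, w_rec, mask, gamma=0.01))
```

Raising `gamma` makes pruning more aggressive; letting it decay as more filters hit zero is what slows pruning towards the end of training.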
The autoencoder optimizer updates the variables $W_{enc}$, $W_{dec}$ and $m$, based on the loss derived from a forward pass through the autoencoder network. Since $\hat{W}_l$ and $\tilde{W}_l$ are intermediate feature maps of the autoencoder, they are not updated by the optimizer. While the gradient calculation for the encoder and decoder weights is straightforward, the gradients for updating the mask require special handling. The main difficulty lies in the clipping function, which is non-differentiable at the clipping boundaries. As previously mentioned, for such cases the STE can be used to approximate the gradients. The mathematical derivation of the gradients, corresponding to the variables updated by the autoencoder optimizer, is given in Eq. 6.
$\frac{\partial L_{AE}}{\partial W_{enc}} = \frac{\partial L_{AE}}{\partial \hat{W}_l} \cdot \frac{\partial \hat{W}_l}{\partial W_{enc}}, \quad \frac{\partial L_{AE}}{\partial W_{dec}} = \frac{\partial L_{AE}}{\partial \tilde{W}_l} \cdot \frac{\partial \tilde{W}_l}{\partial W_{dec}}, \quad \frac{\partial L_{AE}}{\partial m} = \frac{\partial L_{AE}}{\partial \hat{W}_l} \cdot \frac{\partial \hat{W}_l}{\partial \text{clip}(m)} \cdot \frac{\partial \text{clip}(m)}{\partial m} \overset{\text{STE}}{=} \frac{\partial L_{AE}}{\partial \hat{W}_l} \cdot \frac{\partial \hat{W}_l}{\partial \text{clip}(m)}$  (6)
III-C Deployment
The utility of the autoencoders is limited to the training process, since the weights are fixed in the deployment stage. The actual number of filters is still the same as at the beginning of the training ($C_o$). However, the code $\hat{W}_l$ comprises a certain number of filters containing only zero-valued weights, which are removed. For the succeeding expansion layer, fewer input channels imply that the associated channels in $W_{exp}$ are not used anymore and can be removed as well (Fig. 1, pruned channels (gray)). After post-processing, the model is densely compressed and ready for efficient deployment.
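This post-processing step amounts to dropping all-zero filters and the matching input channels of the expansion layer. The sketch below is an assumption-laden illustration (filters flattened to columns, invented names), not the paper's deployment code:

```python
import numpy as np

# Remove code filters that are entirely zero and the corresponding
# input channels of the expansion layer's weights.
def compress(code, w_exp):
    keep = ~np.all(code == 0.0, axis=0)   # filters with at least one non-zero weight
    return code[:, keep], w_exp[keep, :]  # prune code columns and expansion inputs

code = np.array([[0.2, 0.0, -0.1],
                 [0.4, 0.0,  0.3]])       # filter 1 is fully zeroized
w_exp = np.ones((3, 4))                   # 3 input channels -> 4 output channels
code_c, w_exp_c = compress(code, w_exp)
print(code_c.shape, w_exp_c.shape)        # -> (2, 2) (2, 4)
```

Because entire filters (and the matching expansion channels) are removed, the result stays a dense tensor, which is what makes the compressed model hardware-friendly.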
IV Experimental Results
We evaluate the proposed ALF technique on the CIFAR-10 [CIFAR_10] and ImageNet [imagenet_cvpr09] datasets. The 50k train and 10k test images of CIFAR-10 are used to respectively train and evaluate ALF. The images have a resolution of 32x32 pixels. ImageNet consists of 1.28 Mio. train and 50k validation images with a resolution of 224x224 px. If not otherwise mentioned, all hyper-parameters specifying the task-related training were adopted from the CNN's base implementation. For ALF, the hyper-parameters are set as selected in Sec. IV-A.

IV-A Configuration Space Exploration
Crucial design decisions are investigated in this section. The aforementioned novel training setup includes a number of new parameters, namely the weight initialization schemes for $W_{enc}$, $W_{dec}$ and $W_{exp}$, consecutive activation functions and batch normalization layers. For that purpose, Plain-20 [resnet] is trained on CIFAR-10. Experiments are repeated at least twice to provide results with decent validity (bar stretching).

Setup 1: The effect of additional expansion layers is studied in Fig. 1(a), taking the initialization (He [he] and Xavier [xavier]), extra non-linear activations (i.e. ReLU) and batch normalization (BN) into account. The results suggest that expansion layers can lead to a tangible increase in accuracy. In general, the Xavier initialization yields slightly better results than the He initialization and is chosen for the expansion layer of the ALF-blocks. In addition, the incorporation of the BN layer seems to have no perceivable advantages. The influence of the activation function is confirmed in the next experiment.

Setup 2: In this setup, the ALF-block's pruning mask is not applied, thus no filters are pruned. This is applicable for selecting a weight initialization scheme for $W_{enc}$ and $W_{dec}$ and an activation function $f_{act}$. In case no activation function (blue) is applied, the accuracy is higher than with ReLU layers. Based on the results, the Xavier initialization is selected for $W_{enc}$ and $W_{dec}$, and it outperforms the other non-linear activation function configurations. As the pruning mask is not active, the regularization property of the autoencoder is absent, causing a noticeable accuracy drop.
Setup 3: Five variants of ALF, resulting in different sparsity rates, are explored in Fig. 1(c) and compared to the uncompressed Plain-20 (90.5% accuracy). The first three variants differ in terms of the threshold $t$ while the learning rate of the autoencoder is kept fixed. We observe that the pruning gets more aggressive when the threshold is increased, with correspondingly fewer non-zero filters remaining (see green, pink and blue curves). We select the threshold as a trade-off between sparsity and accuracy. The behaviour of ALF is also assessed by changing the learning rate of the autoencoder, which is explored in the consecutive variants (see turquoise and purple curves). The number of remaining non-zero filters increases as there are fewer updates to the sparsity mask $m$. In the last case (purple), the resulting network has fewer non-zero filters with a significant accuracy drop. However, considering the trade-off between accuracy and pruning rate, we choose a moderate learning rate for the autoencoder.