In recent years, deep learning solutions have gained popularity in several embedded applications ranging from robotics to autonomous driving [binarydad]. This holds especially for computer-vision-based algorithms [googlenet]. These algorithms are typically demanding in terms of computational complexity and memory footprint. Taken together, these two aspects emphasize the importance of constraining neural networks in model size and computations for efficient deployment on embedded hardware. The main goal of model compression techniques lies in reducing redundancy, while keeping the desired optimization target in mind.
Hand-crafted heuristics are applied in the field of model compression; however, they severely reduce the search space and can result in a poor choice of design parameters [learn_weights]. Particularly for convolutional neural networks (CNNs) with numerous layers, such as ResNet1K [resnet], hand-crafted optimization is prone to result in a sub-optimal solution. In contrast, a learning-based policy would, for example, leverage reinforcement learning (RL) to automatically explore the CNN's design space [learn_to_prune]. By evaluating a cost function, the RL-agent learns to distinguish between good and bad decisions, thereby converging to an effective compression strategy for a given CNN. However, the effort of designing a robust cost function and the time spent on model exploration render learning-based compression policies a complex method, requiring domain expertise.
In this work, we propose the autoencoder-based low-rank filter-sharing technique (ALF) which generates a dense, compressed CNN during task-specific training. ALF uses the inherent properties of sparse autoencoders to compress data in an unsupervised manner. By introducing an information bottleneck, ALF-blocks are used to extract the most salient features of a convolutional layer during training, in order to dynamically reduce the dimensionality of the layer’s tensors. To the best of our knowledge, ALF is the first method where autoencoders are used to prune CNNs. After optimization, the resulting model consists of fewer filters in each layer, and can be efficiently deployed on any embedded hardware due to its structural consistency. The contributions of this work are summarized as follows:
Approximation of weight filters of convolutional layers using ALF-blocks, consisting of sparse autoencoders.
A two-player training scheme that allows the model to learn the desired task while slimming the neural architecture.
Comparative analysis of ALF against learning-based and rule-based compression methods w.r.t. known metrics, as well as a layer-wise analysis on hardware model estimates.
II Related Work
Efforts from both industry and academia have focused on reducing the redundancy that emerges from training deeper and wider network architectures, with the aim of mitigating the challenges of their deployment on edge devices [resource_aware]. Compression techniques such as quantization, low-rank decomposition and pruning can potentially make CNNs more efficient for deployment on embedded hardware. Quantization aims to reduce the representation redundancy of model parameters and arithmetic [lognet, bnn, orthruspe]. Quantization and binarization are orthogonal to this work and can be applied in conjunction with the proposed ALF method. In the following sections, we classify works which use low-rank decomposition and pruning techniques into rule-based and learning-based compression.
II-A Rule-based Compression
Rule-based compression techniques are classified as having static or pseudo-static rules, which are followed when compressing a given CNN. Low-rank decomposition or representation techniques have been used to reduce the number of parameters by creating separable filters across the spatial dimension or reducing cross-channel redundancy in CNNs [accelerating, tucker]. Hand-crafted pruning can be implemented based on heuristics that compute the saliency of a neuron. Han et al. [learn_weights] determined the saliency of weights based on their magnitude, exposing the superfluous nature of state-of-the-art neural networks. Pruning individual weights, referred to as irregular pruning, leads to inefficient memory accesses, making it impractical for general-purpose computing platforms. Regularity in pruning becomes an important criterion for accelerator-aware optimization. Frickenstein et al. [dsc] propose structured, kernel-wise magnitude pruning along with a scalable, sparse algorithm. He et al. [geometric_mean_filter] prune redundant filters using a geometric mean heuristic. Although the filter pruning scheme is useful w.r.t. hardware implementations, it is challenging to remove filters as they directly impact the input channels of the subsequent layer. Rule-based compression techniques overly generalize the problem at hand. Different CNNs vary in complexity, structure and target task, making it hard to set such one-size-fits-all rules when considering the different compression criteria.
II-B Learning-based Compression
Recent works such as [learn_to_prune] and [amc] have demonstrated that it is difficult to formalize a rule to prune networks. Instead, they expose the pruning process as an optimization problem to be solved through an RL-agent. Through this process, the RL-agent learns the criteria for pruning, based on a given cost function.
Huang et al. [learn_to_prune] represent a CNN as an environment for an RL-agent. An accuracy term and an efficiency term are combined to formulate a non-differentiable policy to train the agent. The target is to maximize these two contrary objectives; balancing the terms can vary for different models and tasks. An agent needs to be trained individually for each layer. Moreover, layers with many channels may slow down the convergence of the agent, rendering model exploration a slow and greedy process. In the work proposed by He et al. [amc], an RL-agent prunes channels without fine-tuning at intermediate stages, which cuts down the time needed for exploration. Layer characteristics such as size, stride and number of operations serve the agent as input. These techniques require carefully crafted cost functions for the optimization agent. The formulation of the cost functions is non-trivial, requiring expert knowledge and some iterations of trial-and-error. Furthermore, deciding what the agent considers as the environment presents another variable in the process, making it challenging to test many configurations of the problem. As mentioned earlier, each combination of neural network and target task presents a new and unique problem, for which this process needs to be repeated.
More recently, Neural Architectural Search (NAS) techniques have been successful in optimizing CNN models at design-time. Combined with Hardware-in-the-Loop (HIL) testing, a much needed synergy between efficient CNN design and the target hardware platform is achieved. Tan et al. [mnas] propose MNAS-Net, which performs NAS using an RL-agent for mobile devices. Cai et al. [proxyless_nas] propose ProxylessNAS, which derives specialized, hardware-specific CNN architectures from an over-parameterized model.
Differently, Guo et al. [dynamic] dynamically prune the CNN using learnable parameters, which have the ability to recover when required. Their scheme incorporates pruning into the training flow, resulting in irregular sparsity. Zhang et al. [structadmm] incorporate a cardinality constraint into the training objective to obtain different pruning regularities. Bagherinezhad et al. [lcnn] propose Lookup-CNN (LCNN), where the model learns a dictionary of shared filters at training time. During inference, the input is then convolved with the complete dictionary, profiting from lower computational complexity. Since the same holds for filter-sharing, this weight-sharing approach is arguably the closest to the technique developed in this work. However, the methodology itself and the training procedure are still fundamentally different from one another.
| Method \ Advantage | No Pre-trained Model | Learning-based | No Extensive Exploration |
| --- | --- | --- | --- |
| Low-Rank Dec. [accelerating, tucker] | ✗ | ✗ | ✗ |
| Prune (Hand-crafted) [learn_weights, dsc, geometric_mean_filter] | ✗ | ✗ | ✗ |
| Prune (RL-Agent) [learn_to_prune, amc] | ✗ | ✓ | ✗ |
| Prune (Automatic) [dynamic, lcnn, structadmm] | ✓ | ✓ | ✓ |
III Autoencoder-based Low-rank Filter-sharing
The goal of the proposed filter-sharing technique is to replace the standard convolution with a more efficient alternative, namely the autoencoder-based low-rank filter-sharing (ALF)-block. An example of the ALF-block is shown in Fig. 1.
Without loss of generality, $I_l \in \mathbb{R}^{h_{in} \times w_{in} \times c_{in}}$ is considered as the input feature map to a convolutional layer $l$ of an $L$-layer CNN, where $h_{in}$ and $w_{in}$ indicate the height and width of the input, and $c_{in}$ is the number of input channels. The weights $W_l \in \mathbb{R}^{k \times k \times c_{in} \times c_{out}}$ are the trainable parameters of layer $l$, where $k \times k$ are the kernel dimensions and $c_{out}$ is the number of output channels.
In detail, the task is to approximate the filter bank $W_l$ of a convolutional layer during training by a low-rank version $\widetilde{W}_l \in \mathbb{R}^{k \times k \times c_{in} \times f_l}$, where $f_l \leq c_{out}$. The low-rank version of the weights is utilized later in the deployment stage for an embedded-friendly application.
In contrast to previous structured pruning approaches [lcnn, dsc, geometric_mean_filter], this method does not intend to alter the structure of the model in a way which results in a changed dimensionality of the output feature maps $O_l \in \mathbb{R}^{h_{out} \times w_{out} \times c_{out}}$, where $h_{out}$ and $w_{out}$ indicate the height and width of the output. This is achieved by introducing an additional expansion layer [squeezenet]. The advantages are twofold. First, each layer can be trained individually without affecting the other layers. Second, it simplifies the end-to-end training and allows comparison of the learned features.
The expansion layer is comprised of point-wise convolutions with weights $W_l^{exp} \in \mathbb{R}^{1 \times 1 \times f_l \times c_{out}}$, mapping the intermediate feature maps $\widetilde{O}_l$ produced by an ALF-block to the output feature map $O_l$, as expressed in Eq. 1.
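Operationally, a point-wise (1×1) convolution is just a per-pixel matrix multiply over the channel dimension. The following sketch illustrates how an intermediate map with few channels is expanded back to the original channel count; the function name and shapes are illustrative, not taken from the paper.

```python
import numpy as np

def pointwise_expansion(y_int, w_exp):
    """Map an intermediate feature map (H, W, f) back to (H, W, c_out)
    with a 1x1 convolution, i.e. a per-pixel matrix multiply."""
    h, w, f = y_int.shape
    # reshape to (H*W, f) and multiply by the (f, c_out) expansion weights
    return (y_int.reshape(-1, f) @ w_exp).reshape(h, w, -1)

# hypothetical sizes: 8x8 feature map, 12 remaining filters expanded to 32 channels
y = np.random.rand(8, 8, 12)
w = np.random.rand(12, 32)
out = pointwise_expansion(y, w)
```

Because the expansion restores the original number of output channels, the subsequent layer sees the same tensor dimensionality as before compression.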
As the point-wise convolution introduces a certain overhead with regard to operations and weights, it is necessary to analyze the resource demands of the ALF-block compared to the standard convolution and ensure $f_l \leq c_{out} - \Delta f_l$, where $\Delta f_l$ denotes the number of filters which have to be removed to attain an efficiency improvement, see Eq. 2.
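The break-even point can be computed directly: a standard $k \times k$ convolution costs $h \cdot w \cdot k^2 c_{in} c_{out}$ multiply-accumulates, while the ALF-block costs $h \cdot w \cdot (k^2 c_{in} f_l + f_l c_{out})$ including the expansion layer. A minimal sketch (the cost model counts MACs only and ignores memory traffic, an assumption on our part):

```python
def conv_macs(h, w, k, c_in, c_out):
    # multiply-accumulates of a standard k x k convolution
    return h * w * k * k * c_in * c_out

def alf_macs(h, w, k, c_in, c_out, f):
    # reduced convolution with f filters plus a 1x1 expansion back to c_out
    return h * w * (k * k * c_in * f + f * c_out)

def break_even_filters(k, c_in, c_out):
    # largest f for which the ALF-block is cheaper than the standard layer
    return (k * k * c_in * c_out) / (k * k * c_in + c_out)

# hypothetical layer: 32x32 maps, 3x3 kernels, 64 -> 64 channels
f_max = break_even_filters(3, 64, 64)   # any f below this value saves MACs
```

For this hypothetical layer the break-even lies between 57 and 58 filters, so at least 7 of the 64 filters must be pruned before the ALF-block pays off.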
III-A Autoencoder-based Low-rank Filter-sharing Block
As stated before, the autoencoder is required to identify correlations in the original weights and to derive a compressed representation from them. The autoencoder is only required in the training stage and is discarded in the deployment stage.
Referring back to Fig. 1, the autoencoder setup including the pruning mask is illustrated. According to the design of an autoencoder, Eq. 3 gives the complete expression for calculating the compressed weights $\widetilde{W}_l^{code}$. The encoder performs a matrix multiplication between the input filters and the encoder filters $W^{enc}$. The mask $m$ zeroizes elements of the code, and $\varphi$ refers to a non-linear activation function. Different configurations of the activation function and the initialization schemes are studied in Sec. IV-A.
Eq. 4 provides the corresponding formula for the reconstructed filters of the decoding stage. The symbol $\cdot$ stands for matrix multiplication and $\odot$ for the Hadamard product. The pruning mask acts as a gate, allowing only the most salient filters to appear as non-zero values in the code, in the same manner as sparse autoencoders. The decoder must, therefore, learn to compensate for the zeroized filters to recover a close approximation of the input filter bank.
In order to dynamically select the most salient filters, an additional trainable parameter, denoted as the mask $m$, is introduced with individual elements $m_i$. By exploiting the sparsity-inducing property of L1 regularization, individual values in the mask are driven towards zero during training. Since the optimizer usually reaches values close to zero, but not exactly zero, clipping is performed to zero out values that fall below a certain threshold $\tau$. Furthermore, the clipping function allows the model to recover a channel when required.
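The encode, gate, decode steps above can be sketched as follows. All shapes, the threshold value, and the use of ReLU as $\varphi$ are illustrative assumptions; the paper's actual configuration is selected empirically in Sec. IV-A.

```python
import numpy as np

def clip_mask(m, tau=0.05):
    # zero out mask entries whose magnitude is below the threshold tau;
    # larger values pass through, so a filter can recover if its mask grows again
    return np.where(np.abs(m) < tau, 0.0, m)

def alf_block(w_flat, w_enc, w_dec, mask, tau=0.05):
    """Sketch of one ALF-block: encode the flattened filter bank, gate the
    code with the clipped pruning mask, and decode a low-rank approximation."""
    code = np.maximum(w_flat @ w_enc, 0.0) * clip_mask(mask, tau)  # ReLU assumed
    return code @ w_dec, code

# hypothetical sizes: 64 filters of 3*3*16 = 144 weights each, code of width 64
rng = np.random.default_rng(0)
w = rng.standard_normal((144, 64))
enc = rng.standard_normal((64, 64))
dec = rng.standard_normal((64, 64))
m = rng.uniform(0, 0.1, size=64)        # roughly half the entries fall below tau
w_hat, code = alf_block(w, enc, dec, m)
```

Gating the code column-wise means an entire filter is either kept or zeroized, which preserves the structural regularity needed for dense deployment.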
III-B Training Procedure
To understand the training procedure of ALF, it is important to fully comprehend the training setup, including the two-player game of the CNN and the ALF-blocks.
Task-related Training: The weights $W_l$ are automatically compressed by the ALF-blocks. Allowing the weights to remain trainable, instead of using fixed filters from a pre-trained model, is inspired by binary neural networks (BNNs) [bnn]. The motivation is that weights from a full-precision CNN might not be equally applicable when used in a BNN and, thus, require further training. Analogously, the weights from a pre-trained model might not be the best fit for the instances of $W_l$ in the filter-sharing use-case. Therefore, training these variables is also part of the task optimizer's job. Its objective is the minimization of the task loss, which is the sum of the cross-entropy loss $L_{CE}$, computed from the model's prediction and the label of an input image, and the L2 regularization loss scaled by the weight-decay factor.
It is important to point out that no regularization is applied to the instances of $\widetilde{W}_l$. Even though each $\widetilde{W}_l$ contributes to a particular convolution operation with a considerable impact on the task loss, the task optimizer can influence this variable only indirectly by updating $W_l$. As none of the variables $W^{enc}$, $W^{dec}$ or $m$ of the appended autoencoder are trained based on the task loss, they introduce a sufficiently high amount of noise, which arguably makes any further form of regularization more harmful than helpful.
In fact, this noise introduced by the autoencoder variables does affect the gradient computation for updating the variable $W_l$. As this might hamper the training progress, the Straight-Through Estimator (STE) [bnn] is used as a substitute for the gradients of the Hadamard product with the pruning mask, as well as for the multiplication with the encoder filters. This ensures that the gradients for updating the input filters can propagate through the autoencoder without extraneous influence.
This measure is especially important in the case of the Hadamard product, as a significant number of elements in $m$ might be zero. When including this operation in the gradient computation, a correspondingly large proportion of the gradients would be zeroized as a result, impeding the information flow in the backward pass. Nevertheless, this problem can be resolved by using the STE. In Eq. 5, the gradients for the variables of a particular ALF-block are derived.
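The effect of the STE on the Hadamard product can be made concrete: the exact backward pass multiplies the incoming gradient by the mask, blocking gradient flow wherever a filter is pruned, while the STE simply passes the gradient through unchanged. A minimal numerical sketch (values are illustrative):

```python
import numpy as np

def hadamard_ste_backward(grad_out, mask):
    """Backward pass of y = w * mask: the exact gradient grad_out * mask is
    zero wherever mask == 0, whereas the STE passes grad_out through unchanged."""
    exact = grad_out * mask          # exact gradient: blocked for pruned filters
    ste = grad_out                   # STE: identity, gradients always flow to w
    return exact, ste

mask = np.array([1.0, 0.0, 0.0, 1.0])   # half of the filters are pruned
g = np.array([0.5, -0.3, 0.2, 0.1])     # upstream gradient w.r.t. the output
exact, ste = hadamard_ste_backward(g, mask)
```

With the exact gradient, half of the filter updates would vanish; the STE keeps all of $W_l$ trainable even while its code is heavily masked.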
Autoencoder Training: Each autoencoder is trained individually by a dedicated SGD optimizer, referred to as the autoencoder optimizer. The optimization objective of this optimizer lies in minimizing the loss function $L_{AE}$, composed of a reconstruction term and a mask regularization term. The reconstruction loss is computed using the MSE metric between $W_l$ and its reconstruction. In the field of knowledge distillation [CRD], similar terms are studied. The decoder must learn to compensate for the zeroized filters to recover a close approximation of the input filters. If many values in $m$ are zero, a large percentage of filters are pruned. To mitigate this problem, the mask regularization function is multiplied with a scaling factor $\beta$, which decays with an increasing zero fraction in the mask and slows down the pruning rate towards the end of the training. In detail, $\beta$ adopts the pruning sensitivity [learn_weights] of convolutional layers and is parameterized by the slope of the sensitivity, the maximum pruning rate and the zero fraction, i.e. the ratio of zero filters in the code. As a consequence, the regularization effect decreases and fewer filters are zeroized in the code eventually. In other words, the task of the autoencoder is to imitate $W_l$ by $\widetilde{W}_l$ by learning $W^{enc}$, $W^{dec}$ and $m$, while the mask regularization steadily tries to prune further channels.
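The composite objective can be sketched as below. The exponential form of the decay and the `alpha`/`slope` parameters are assumptions for illustration; the paper only specifies that the scaling factor decays with the zero fraction of the mask.

```python
import numpy as np

def ae_loss(w, w_hat, mask, zero_frac, alpha=1.0, slope=5.0):
    """Sketch of the autoencoder objective: MSE reconstruction loss plus an
    L1 mask penalty whose weight decays as the zero fraction of the mask grows.
    The exact decay schedule (slope, maximum pruning rate) is an assumption."""
    mse = np.mean((w - w_hat) ** 2)
    beta = alpha * np.exp(-slope * zero_frac)   # decaying regularization factor
    return mse + beta * np.sum(np.abs(mask))
```

Early in training (`zero_frac` near 0) the L1 penalty dominates and pruning is aggressive; as more filters are zeroized, `beta` shrinks and the reconstruction term takes over, stabilizing the remaining filters.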
The autoencoder optimizer updates the variables $W^{enc}$, $W^{dec}$ and $m$, based on the loss derived from a forward pass through the autoencoder network. Since the code and the reconstructed filters are intermediate feature maps of the autoencoder, they are not updated by the optimizer. While the gradient calculation for the encoder and decoder weights is straightforward, the gradients for updating the mask require special handling. The main difficulty lies in the clipping function, which is non-differentiable at its breakpoints. As previously mentioned, for such cases the STE can be used to approximate the gradients. The mathematical derivations of the gradients, corresponding to the variables updated by the autoencoder optimizer, are given in Eq. 6.
The utility of the autoencoders is limited to the training process, since the weights are fixed in the deployment stage. The actual number of filters is still the same as at the beginning of the training. However, the code comprises a certain number of filters containing only zero-valued weights, which are removed. For the succeeding expansion layer, fewer input channels imply that the associated channels in $W_l^{exp}$ are no longer used and can be removed as well (Fig. 1, pruned channels (gray)). After this post-processing, the model is densely compressed and ready for efficient deployment.
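This post-processing step amounts to dropping all-zero filters and the matching input channels of the expansion weights. A minimal sketch, with hypothetical shapes (filters flattened to columns):

```python
import numpy as np

def compact(w_low, w_exp):
    """Drop all-zero filters from the low-rank filter bank and the matching
    input channels of the expansion layer, yielding a dense, smaller model."""
    keep = ~np.all(w_low == 0, axis=0)      # filters with any non-zero weight
    return w_low[:, keep], w_exp[keep, :]

# toy example: 3 filters of 2 weights each; filter 1 was zeroized by the mask
w_low = np.array([[1.0, 0.0, 2.0],
                  [0.5, 0.0, 0.0]])
w_exp = np.random.rand(3, 4)                # 3 input channels, 4 output channels
w_c, e_c = compact(w_low, w_exp)
```

Because entire filters (and the corresponding expansion channels) are removed, the result is a structurally consistent dense model rather than a sparse one, which is what makes deployment on standard embedded hardware straightforward.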
IV Experimental Results
We evaluate the proposed ALF technique on the CIFAR-10 [CIFAR_10] and ImageNet [imagenet_cvpr09] datasets. The 50k train and 10k test images of CIFAR-10 are used to respectively train and evaluate ALF. The images have a resolution of 32×32 pixels. ImageNet consists of 1.28 Mio. train and 50k validation images. If not otherwise mentioned, all hyper-parameters specifying the task-related training were adopted from the CNN's base implementation. The hyper-parameters specific to ALF are set as discussed in Sec. IV-A.
IV-A Configuration Space Exploration
Crucial design decisions are investigated in this section. The aforementioned novel training setup includes a number of new parameters, namely the weight initialization schemes, consecutive activation functions and batch normalization layers. For that purpose, Plain-20 [resnet] is trained on CIFAR-10. Experiments are repeated at least twice to provide results with decent validity (bar stretching).
Setup 1: The effect of additional expansion layers is studied in Fig. 1(a), taking the initialization (He [he] and Xavier [xavier]), extra non-linear activations (i.e. ReLU) and batch normalization (BN) into account. The results suggest that expansion layers can lead to a tangible increase in accuracy. In general, the Xavier initialization yields slightly better results than the He initialization and is chosen for the expansion layer of the ALF-blocks. In addition, the incorporation of the BN layer seems to have no perceivable advantage. The influence of the activation function is confirmed in the next experiment.
Setup 2: In this setup, the ALF-block's pruning mask is not applied, thus no filters are pruned. This is applicable for selecting a weight initialization scheme and an activation function for the autoencoder. In case no activation function (blue) is applied to the code, the accuracy is higher than with ReLU layers. Based on the results, the Xavier initialization is selected for the autoencoder weights, and omitting the non-linear activation outperforms the alternatives. As the pruning mask is not active, the regularization property of the autoencoder is absent, causing a noticeable accuracy drop.
Setup 3: Five variants of ALF, resulting in different sparsity rates, are explored in Fig. 1(c) and compared to the uncompressed Plain-20 (90.5% accuracy). The first three variants differ in terms of the threshold $\tau$ at a fixed learning rate. We observe that the pruning gets more aggressive when the threshold is increased (see green, pink and blue curves for the respective numbers of remaining non-zero filters). We select the threshold as a trade-off between sparsity and accuracy. The behaviour of ALF is also assessed by changing the learning rate of the autoencoder, which is explored in the remaining two variants (see turquoise and purple curves). The number of remaining non-zero filters increases as there are fewer updates to the sparsity mask $m$. In the last case (purple), the resulting network has fewer non-zero filters but suffers a significant accuracy drop. However, considering the trade-off between accuracy and pruning rate, we choose the corresponding learning rate for the autoencoder.