The conventional way of designing a Convolutional Neural Network (CNN) is to stack convolutional, pooling and batch normalization layers one after another, with nonlinearity functions in between. With the introduction of residual connections, deeper CNN architectures could be trained successfully and achieved better results than shallow ones. Consequently, the primary trend for solving major visual recognition tasks has become building deeper and larger CNNs [10, 4, 15]. The best-performing CNNs usually have hundreds of layers and millions of parameters. However, deeper architectures also mean more parameters, which makes such architectures unsuitable for embedded and mobile devices. The question therefore arises: "Do we really need all these layers and parameters to achieve better performance?" Fig. 2 shows the convolutional kernels learned in the first convolutional layer of AlexNet . It is easy to see that some of the kernels are very similar and hence redundant. Therefore, instead of introducing all these similar kernels separately, they can simply be reused.
In this paper, we propose a network architecture, Layer Reuse Network (LruNet), in which some convolutional layers are reused repeatedly. Instead of stacking convolutional layers one after another, as in Fig. 1 (a), we feed the output of a convolutional block back into the same block N times before passing it to the next layer, as in Fig. 1 (b). While doing this, we apply channel shuffling to ensure that the outputs of the convolution filters are fed as inputs to other filters of the same block. Layer reuse (LRU) brings several advantages to the system: (i) the number of parameters in the designed architecture drops considerably, since layers are reused instead of new ones being added; (ii) the Memory Access Cost (MAC) can be reduced, since the computing device can load the reused layer's parameters only once; (iii) convolutional filters receive gradient updates from all reuse operations; and (iv) the number of nonlinearities increases as the number of reuses increases.
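To make the contrast with Fig. 1 concrete, the following is a minimal, hypothetical PyTorch sketch of the reuse loop (not the authors' code): a single convolutional block whose output is fed back into itself N times, so its parameters are stored and loaded only once. The channel shuffling between reuses is described in Sec. 2.1 and is omitted here for brevity; the class and argument names are illustrative.

```python
import torch
import torch.nn as nn


class ReusedBlock(nn.Module):
    """One convolutional block applied num_reuse times (Fig. 1 (b))."""

    def __init__(self, channels: int, num_reuse: int):
        super().__init__()
        # A single block; its weights are shared by every reuse.
        self.block = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.ReLU(inplace=True),
        )
        self.num_reuse = num_reuse

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for _ in range(self.num_reuse):
            x = self.block(x)  # same parameters at every iteration
        return x


# A block reused 4 times has the parameter count of a single block,
# while the number of applied nonlinearities grows with the reuse count.
y = ReusedBlock(channels=64, num_reuse=4)(torch.randn(1, 64, 16, 16))
```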
The LruNet architecture is constructed following the guidelines of recent resource-efficient CNN architectures [7, 13, 16, 5, 14, 19, 12, 3]. These architectures are built mostly from group convolutions and depthwise separable convolutions. Group convolutions were first introduced in AlexNet  and used effectively in ResNeXt . Depthwise separable convolutions were proposed in Xception  and are the main building blocks of recent lightweight architectures.
Other architectures are built around better gradient update mechanisms to improve performance [2, 6, 12]. Those works concentrate on feeding the output of a layer to subsequent layers as efficiently as possible. In contrast, we reuse the layers multiple times, and as the number-of-reuse (N) increases, the convolutional filters also receive more gradient updates.
2.1 Layer Reuse
Layer Reuse, referred to as LRU, is the concept of using a convolutional layer or block multiple times at multiple places in a CNN architecture. However, the idea of parameter reuse in CNNs need not be limited to the layer level. Convolutional layers can be split into smaller chunks, down to individual filters, and reuse can be applied to these chunks throughout the whole network; in that case it should be referred to as filter reuse (FRU) or kernel reuse (KRU). We believe the parameter reuse concept will open a new direction for deep learning practitioners designing new CNN architectures.
The LRU block used in the LruNet architecture is depicted in Fig. 3. We use group convolutions and depthwise separable convolutions in order to keep the number of network parameters as small as possible. We first use a depthwise convolution that increases the channel number from F to 2F. Then, we apply a pointwise group convolution with 8 groups; we experimented with different numbers of groups, and 8 groups proved experimentally to be the best trade-off between accuracy and number of parameters. Batch Normalization (BN)  is applied after these two layers, and ReLU after the shortcut connection. Finally, channel shuffle is applied at the end of the LRU block so that different channels are fed to different filters at each reuse. Channel shuffle is implemented as in , with a slight modification: in the original implementation, the first and the last channel of the feature volume always remain the same, so we swap the first half of the input volume with the second half before shuffling. We do not apply channel shuffle at the very last reuse, since it is not needed there.
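Below is a sketch of one LRU block under our reading of the description above (and of Fig. 3): a depthwise 3x3 convolution expanding F to 2F, a pointwise group convolution with 8 groups, BN after each of the two convolutions, ReLU after the shortcut, and the modified channel shuffle. The 2F-to-F projection in the pointwise stage is our assumption (it is needed for the identity shortcut to be added), and all names are illustrative rather than the authors' implementation; F must be divisible by the number of groups.

```python
import torch
import torch.nn as nn


def modified_channel_shuffle(x: torch.Tensor, groups: int = 8) -> torch.Tensor:
    n, c, h, w = x.size()
    # Swap the first and second halves so the first/last channels do not
    # stay fixed across reuses, then apply the usual group shuffle.
    x = torch.cat([x[:, c // 2:], x[:, :c // 2]], dim=1)
    x = x.view(n, groups, c // groups, h, w).transpose(1, 2).contiguous()
    return x.view(n, c, h, w)


class LRUBlock(nn.Module):
    def __init__(self, channels: int, groups: int = 8):
        super().__init__()
        # Depthwise 3x3 convolution with channel multiplier 2: F -> 2F.
        self.depthwise = nn.Conv2d(channels, 2 * channels, 3, padding=1,
                                   groups=channels, bias=False)
        self.bn1 = nn.BatchNorm2d(2 * channels)
        # Pointwise group convolution with 8 groups: 2F -> F (assumed).
        self.pointwise = nn.Conv2d(2 * channels, channels, 1,
                                   groups=groups, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)
        self.groups = groups

    def forward(self, x: torch.Tensor, shuffle: bool = True) -> torch.Tensor:
        out = self.bn1(self.depthwise(x))
        out = self.bn2(self.pointwise(out))
        out = self.relu(out + x)          # shortcut connection, then ReLU
        if shuffle:                       # skipped at the very last reuse
            out = modified_channel_shuffle(out, self.groups)
        return out


# Example: one pass through the block at 64 channels.
y = LRUBlock(channels=64)(torch.randn(1, 64, 16, 16))
```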
2.2 Network Architecture
Table 1 (excerpt): LruNet-1x architecture.

| Layer / Stride | Reuse | Output Size |
| LRU Block, F:64 | N | 64x16x16 |
| LRU Block, F:128 | N | 128x8x8 |
| LRU Block, F:256 | N | 256x4x4 |
| LRU Block, F:512 | N | 512x4x4 |
The complete LruNet network architecture is given in Table 1. We build different LruNet architectures by varying the number-of-reuse N. Similar to , a width multiplier can also be applied to scale the number of filters. The network architecture in Table 1 is denoted as LruNet-1x, and we use N-LruNet-x to denote the architecture with N layer reuses in which the number of filters of LruNet-1x is scaled by the given width multiplier.
Although we reuse the convolutional layers repeatedly, we need a new BN layer for every reuse, since the output feature volume has a different data distribution at each reuse. Therefore, the number of parameters increases slightly with increased LRU due to the newly introduced BN layers. It must also be noted that the complexity and inference time of the network depend linearly on the number-of-reuse N.
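The following small sketch illustrates this point under the same assumptions as before (illustrative names, not the authors' code): the convolution weights are shared across all reuses and therefore counted once, while each reuse gets its own BatchNorm, which is the only part of the parameter count that grows with N.

```python
import torch
import torch.nn as nn


class SharedConvReuse(nn.Module):
    def __init__(self, channels: int, num_reuse: int):
        super().__init__()
        # One convolution reused N times: its parameters are counted once.
        self.conv = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        # One BatchNorm per reuse: the only parameters that grow with N.
        self.bns = nn.ModuleList(nn.BatchNorm2d(channels) for _ in range(num_reuse))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for bn in self.bns:
            x = self.relu(bn(self.conv(x)))  # shared conv, reuse-specific BN
        return x


# Parameters grow only by the small per-reuse BN terms,
# while FLOPs and inference time scale roughly linearly with num_reuse.
m = SharedConvReuse(channels=64, num_reuse=4)
print(sum(p.numel() for p in m.parameters()))
```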
2.3 Training Details
All models are trained from scratch. We use stochastic gradient descent (SGD) with a mini-batch size of 256 and a categorical cross-entropy loss. The momentum and weight decay are set to 0.9 and 5x10^-4, respectively.
For regularization, several techniques are used to reduce overfitting. First, weight decay (5x10^-4) is applied to all parameters of the network. Second, dropout with a ratio of 0.5 is used before the last pointwise convolution. Lastly, we apply several data augmentation techniques: (a) random cropping (padding = 4), (b) random spatial rotation (up to 10 degrees), and (c) random horizontal flipping.
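As a hedged sketch of this training setup, the snippet below wires the listed augmentations and the SGD settings with torchvision; the learning rate, the 32x32 crop size, and the placeholder model are assumptions for CIFAR-style inputs and are not given in the text above.

```python
import torch
import torchvision.transforms as T

# (a) random cropping with padding 4, (b) random rotation up to 10 degrees,
# (c) random horizontal flipping, as listed in Sec. 2.3.
transform_train = T.Compose([
    T.RandomCrop(32, padding=4),
    T.RandomRotation(10),
    T.RandomHorizontalFlip(),
    T.ToTensor(),
])

# Placeholder model; in practice this would be an LruNet variant.
model = torch.nn.Conv2d(3, 10, 3)

# SGD with momentum 0.9 and weight decay 5e-4; lr=0.1 is an assumed value.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9,
                            weight_decay=5e-4)
criterion = torch.nn.CrossEntropyLoss()  # categorical cross-entropy
```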
The proposed approach is evaluated on the image classification task using three publicly available datasets: CIFAR-10, CIFAR-100 and Fashion-MNIST. Each experiment is repeated 5 times to obtain more robust results, given the random initialization of the network parameters.
3.1 Results Using CIFAR-10 Dataset
Table 3: Effect of channel shuffle on CIFAR-10.

| Network | Accuracy (%) |
| 8-LruNet-1x (without shuffling) | 86.53 |
| 8-LruNet-1x (with shuffling) | 88.45 |
| 14-LruNet-1x (without shuffling) | 86.74 |
| 14-LruNet-1x (with shuffling) | 89.34 |
Table 4: LruNet vs. same-depth networks with newly introduced layers on CIFAR-10.

| Network | Params | Accuracy (%) |
| 67-depth network (8-LruNet-1x) | 172k | 88.45 |
| 67-depth network (with new layers) | 902k | 90.27 |
| 115-depth network (14-LruNet-1x) | 206k | 89.34 |
| 115-depth network (with new layers) | 1562k | 90.93 |
The CIFAR-10 dataset  is a fundamental benchmark in computer vision, containing 50k training and 10k testing images in 10 classes at a resolution of 32x32. We first investigate the effect of LRU on performance. The results in Table 2 show that accuracy increases as we increase the applied LRU up to 14-LRU; afterwards, the performance does not improve with further reuse. As mentioned earlier, the computational complexity (floating point operations, FLOPs) depends linearly on the number-of-reuse N. 14-LruNet-1x achieves 5.14% better classification accuracy than 1-LruNet-1x.
Second, we investigate the effect of channel shuffle. The results in Table 3 show that channel shuffle plays an important role in layer reuse: for both 8-LruNet-1x and 14-LruNet-1x, the networks with channel shuffle perform better than those without. It is also interesting to note that even without channel shuffle, these networks perform better than 1-LruNet-1x.
Lastly, Table 4 compares LruNet with networks of the same depth built entirely from newly introduced layers. Although the networks with newly introduced layers achieve somewhat better results, the networks with LRU have 5.24 and 7.58 times fewer parameters for the 67-depth and 115-depth networks, respectively.
3.2 Results Using CIFAR-100 Dataset
CIFAR-100  is very similar to CIFAR-10, except that it has 100 classes containing 600 images each. Since this is a more challenging task, we increase the dropout rate to 0.7 and use a width multiplier of 2. We again analyze the effect of LRU on performance; 14-LruNet-2x achieves 5.14% better classification accuracy than 1-LruNet-2x.
3.3 Results Using Fashion-MNIST Dataset
Fashion-MNIST  is very similar to MNIST , but contains images of various articles of clothing and accessories. There are 60k training and 10k testing grayscale images for 10 classes at a resolution of 28x28. 10-LruNet-1x achieves 2.29% better classification accuracy than 1-LruNet-1x. This improvement is relatively small compared to the improvements on CIFAR-10 and CIFAR-100, which might be due to the relative simplicity of Fashion-MNIST compared to CIFAR-10 and CIFAR-100: 1-LruNet-1x already achieves 91.17% classification accuracy on Fashion-MNIST.
Table 7 compares LruNet with state-of-the-art results. For ShuffleNetV2, the channel numbers are adjusted accordingly for width multipliers of 0.25 and 0.75. The results in Table 7 show that LruNet achieves comparable results despite having a much smaller number of convolutional parameters.
This paper proposes a parameter reuse strategy, Layer Reuse (LRU), in which the convolutional layers of a CNN architecture (LruNet) are used repeatedly. We evaluated LRU on several publicly available datasets and achieved improved classification performance. LRU especially boosts CNNs with a small number of parameters, which is of utmost importance for embedded applications. We believe this work will open up a new research direction for designing novel CNN architectures.
As future work, we would like to analyze different parameter reuse strategies. Parameter reuse is not restricted to the layer level: Filter Reuse (FRU) or Kernel Reuse (KRU) can also be applied for efficient CNN architecture design.
We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research.
-  F. Chollet. Xception: Deep learning with depthwise separable convolutions. arXiv preprint arXiv:1610.02357, 2017.
-  C. Feichtenhofer, H. Fan, J. Malik, and K. He. Slowfast networks for video recognition. arXiv preprint arXiv:1812.03982, 2018.
-  I. Freeman, L. Roese-Koerner, and A. Kummert. Effnet: An efficient structure for convolutional neural networks. arXiv preprint arXiv:1801.06434, 2018.
-  K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
-  A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.
-  G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger. Densely connected convolutional networks. In CVPR, volume 1, page 3, 2017.
-  F. N. Iandola, S. Han, M. W. Moskewicz, K. Ashraf, W. J. Dally, and K. Keutzer. Squeezenet: Alexnet-level accuracy with 50x fewer parameters and <0.5 MB model size. arXiv preprint arXiv:1602.07360, 2016.
-  S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
-  A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.
-  A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012.
-  Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
-  N. Ma, X. Zhang, H.-T. Zheng, and J. Sun. Shufflenet v2: Practical guidelines for efficient cnn architecture design. arXiv preprint arXiv:1807.11164, 2018.
-  M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi. Xnor-net: Imagenet classification using binary convolutional neural networks. In European Conference on Computer Vision, pages 525–542. Springer, 2016.
-  M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen. Mobilenetv2: Inverted residuals and linear bottlenecks. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4510–4520. IEEE, 2018.
-  C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1–9, 2015.
-  J. Wu, C. Leng, Y. Wang, Q. Hu, and J. Cheng. Quantized convolutional neural networks for mobile devices. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4820–4828, 2016.
-  H. Xiao, K. Rasul, and R. Vollgraf. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747, 2017.
-  S. Xie, R. Girshick, P. Dollár, Z. Tu, and K. He. Aggregated residual transformations for deep neural networks. In Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on, pages 5987–5995. IEEE, 2017.
-  X. Zhang, X. Zhou, M. Lin, and J. Sun. Shufflenet: An extremely efficient convolutional neural network for mobile devices. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6848–6856. IEEE, 2018.