Efficient Training of Deep Convolutional Neural Networks by Augmentation in Embedding Space

02/12/2020 · Mohammad Saeed Abrishami, et al. · Northeastern University, University of Southern California, Clarifai

Recent advances in the field of artificial intelligence have been made possible by deep neural networks. In applications where data are scarce, transfer learning and data augmentation techniques are commonly used to improve the generalization of deep learning models. However, fine-tuning a transfer model with data augmentation in the raw input space has a high computational cost to run the full network for every augmented input. This is particularly critical when large models are implemented on embedded devices with limited computational and energy resources. In this work, we propose a method that replaces the augmentation in the raw input space with an approximate one that acts purely in the embedding space. Our experimental results show that the proposed method drastically reduces the computation, while the accuracy of models is negligibly compromised.




I Introduction

Deep learning is one of the main elements of the recent advances in the field of artificial intelligence. Major factors accelerating the progress of such models include the significant increase in the amount of available training data, the growth in computational power of electronic devices, and the introduction of new learning algorithms and open-source tools [13]. The superiority of deep neural networks (DNNs) over other methods was initially demonstrated by setting records in well-known challenging artificial intelligence tasks, such as image classification [10] and speech recognition [2]. In particular, with the increasing accessibility of mobile devices, many of these tasks now run on embedded systems.

Improving model accuracy was considered the top-priority objective in early-stage deep learning research, which resulted in computation-hungry models. Even with the drastic improvement in the computation capability of graphics processing units (GPUs), the common platform for training DNNs, training advanced DNN models may take several hours to multiple days [19]. Moreover, there are major shortcomings when DNN models are deployed on embedded devices: 1) the computational capabilities of such devices are very limited, and 2) embedded devices are mostly battery-powered and face energy-consumption constraints even for simple tasks [3]. Therefore, the training process of DNNs is typically offloaded to the cloud, as it requires a large amount of computation on large datasets. Once the model is trained, it is used for inference on new, unseen inputs. The inference process can be hosted privately on local devices or as a public service on the cloud.

The communication cost of cloud-based inference can also be larger than the computation cost of running a small model locally. Collaborative approaches between the cloud, the edge, and mobile devices have been proposed to co-optimize the communication and computation costs [6, 20]. While cloud-based inference is easy to deploy and scale up, it compromises data privacy and requires a reliable network connection. In some mission-critical applications running on embedded devices, such as drone navigation, the model must continuously improve or adapt to new, unseen tasks. This highlights the need for reliable and privacy-preserving training setups, which cloud-based approaches cannot satisfy. Therefore, for some applications running on resource-constrained devices, local training and inference are needed.

The generalization performance of DNNs is challenging because of possible distribution misalignment between the training and test sets. Overfitting, i.e., learning too much from the training set, prevents a DNN model from performing well in unseen real environments despite high accuracy on the training set. This is why models for well-known problems such as image classification are trained with millions of training samples. However, for many applications, large labeled datasets are either unavailable or very expensive to annotate for training a model on a specific task. To mitigate the limited number of samples in practical settings, transfer learning [27, 16] and data augmentation [5] are effective methods to improve the generalization of the learning model. Transfer learning creates pre-trained models on datasets considered similar to the dataset of the final task. As the initial model is trained only once with a large amount of data, the training can be done on the cloud, where computational cost is not a major concern. Later, the pre-trained model with optimized parameters is fine-tuned to fit the new problem. Data augmentation is another strategy that increases the diversity of the training data without collecting new data. For instance, in computer vision applications, augmentation techniques such as mirroring the input image are commonly used to improve generalization. The main disadvantage of augmentation is that it enforces a linear increase in the number of feed-forward calls with respect to the number of augmentations used for fine-tuning.

In this paper, we present a novel idea that drastically reduces the computation required for fine-tuning DNNs by augmenting the input in the embedding space instead of the raw input space. The paper makes the following contributions:

  • Introducing a novel method for augmentations in the embedding space instead of raw input space.

  • Analysis of the impact of our method on the computation and accuracy of transferred models with different network architectures.

II Background and Related Work

Fig. 1: Transfer learning in deep neural networks. The feature network is pre-trained on one dataset; its parameters are then copied and frozen (non-trainable) for learning on the second dataset. Only the added downstream classification layers are trainable for the second dataset.

Transfer learning [27] is the status-quo approach in data-scarce scenarios. In this method, a base-model is first pre-trained on a data-rich task, referred to as the base-task. Through extensive training, the base-model learns how to extract high-level features from the base-dataset. These high-level features are referred to as the embedding and are typically generated at the final layers of the DNN. The advantage of transferring knowledge to a downstream model is not limited to higher accuracy; it also brings faster convergence and lower training computation.

The main idea of transfer learning for a data-scarce task is to transfer the mature feature-extraction part of a pre-trained base-model to a target-network and fine-tune it on the target-dataset to fit the target-task. It is common practice to update only the parameters of the downstream layers after the embedding during fine-tuning and keep the other parameters fixed (frozen). The transfer learning procedure in DNNs is illustrated in Fig. 1. In cases with very limited target-dataset samples, the problem is referred to as few-shot learning [12, 7]. The quality of such approaches depends on the extent of similarity between the distributions of the base and target tasks [26, 27].
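In PyTorch terms, the freeze-and-fine-tune pattern above can be sketched as follows. This is a minimal toy with hypothetical layer sizes; the small convolutional stack merely stands in for a real pre-trained feature network:

```python
import torch
import torch.nn as nn

# Stand-in for a pre-trained feature generator (hypothetical tiny network).
feature_net = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
for p in feature_net.parameters():
    p.requires_grad = False  # freeze the transferred parameters

# New downstream classifier, the only trainable part during fine-tuning.
classifier = nn.Linear(8, 10)

x = torch.randn(4, 3, 32, 32)   # a batch of target-task images
with torch.no_grad():
    e = feature_net(x)          # embedding from the frozen feature network
y_hat = classifier(e)           # predictions from the trainable head
```

Only `classifier.parameters()` would be handed to the optimizer, so back-propagation touches the small head rather than the full network.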

Lack of sufficient labeled data may cause overfitting due to sampling bias in DNNs. Data augmentation is a powerful method for achieving higher generalization and preventing overfitting by simply inflating the training data size. New samples are generated from a single sample while the label either remains unchanged or is known without any further annotation [24]. In particular, for image classification tasks [18, 9, 5], every training image can be modified by applying transformations such as horizontal and vertical flips, rotation, random cropping, and perturbations to brightness, contrast, and color.
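As a concrete illustration of the flip augmentations, here is how horizontal and vertical flips act on an image tensor in PyTorch (a toy 2x2 image; `torch.flip` reverses the given axes):

```python
import torch

# A toy 3-channel 2x2 "image" in NCHW layout.
x = torch.arange(12, dtype=torch.float32).reshape(1, 3, 2, 2)

h_flip = torch.flip(x, dims=[3])  # horizontal flip: reverse the width axis
v_flip = torch.flip(x, dims=[2])  # vertical flip: reverse the height axis

# The class label is unchanged by a flip, so each flipped tensor is an
# extra training sample with no additional annotation cost.
```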

One of the useful characteristics of convolutional neural networks (CNNs) is their translation equivariance: translating the input image is equivalent to translating the feature maps, due to the symmetry-preserving characteristics of each layer. The equivariance relationships of other augmentations, such as flips, scaling, and rotation, are further studied in [14] by finding a mapping between the representations of the original and transformed images. Furthermore, the operations in CNNs are extended in [4] to be formally equivariant to reflections. The equivariance relationship has also been extended to time-series data such as videos [1]. The main difference between our proposed method and these prior works is that, instead of enforcing the model to be equivariant to augmentations, we learn the transformations that map the embedding of the original input to that of the augmented one.

III Methodology

In this section, we introduce our proposed method, i.e., replacing augmentation in the pixel space with one in the embedding space. Moreover, we elaborate on how this idea saves computation when transferring models.

III-A Embedding

Deep convolutional neural networks (DCNNs) are composed of several convolutional blocks, including convolutional filters, pooling, and activation functions, followed by one or a few fully connected (FC) layers. Although there are still unanswered questions about the impressive results of DCNNs, the common belief is that the stacked convolutional layers learn intermediate and high-level features at different levels of abstraction between the input image and the output [15, 25].

In most computer vision applications, the embedding is a 1-dimensional vector with continuous floating-point values. In practice, the output of the last convolutional layer, before the first FC layer, is commonly taken as the embedding. While there is no precise understanding of what a single feature in an embedding vector represents, these vectors meaningfully capture the semantic features of an image in a transformed space. In classification networks, the FC layers at the end of the network are responsible for mapping this embedding to the different classes. Throughout this paper, we denote the feature generator sub-network by f, the classifier sub-network by g, and the embedding vector by e. The notations used in this paper are summarized in Table I. The relationships between the input, the embedding, the output, and the two sub-networks are given below:

e_x = f(x),    ŷ = g(e_x) = g(f(x))

Notation Description
D, x Original image dataset and one image sample, without any augmentation
y Label of image x
ŷ Predicted label of image x
A Augmentation function set
a(x) Image x under augmentation a in the pixel space
f Feature generator sub-network, usually convolutional layers
g Classifier sub-network, usually FC layers
e_x Embedding of image x, usually a 1-D vector
e_{a(x)} Embedding of the augmented image a(x)
T_a Augmentation transformer for augmentation a
TABLE I: Description of the notations used in this manuscript.
Fig. 2: Augmentation in the embedding space. Our procedure follows three steps: (a) A large model (feature network) is trained on the source dataset with all the augmentation functions. (b) Pairs of the embeddings of an input and its augmented version are extracted from the feature network and used as the training set for learning the augmentation functions in the embedding space (T_a). (c) The learned augmentation functions in the embedding space are used to accomplish transfer learning on the target dataset.

III-B Augmentation in the pixel space

We present a scenario to illustrate the functionality of the proposed method. A DCNN model is deployed on a platform with limited computational and energy resources, such as a smartphone. The target-model is initially transferred from a base model, which is pre-trained on a separate platform, such as a cloud provider. This transferred model is designated for a specific task, so fine-tuning with new sample images is required. Using augmented images is crucial to improving the accuracy; however, it comes at the cost of higher computation. Specifically, generating k augmented images from each original one for fine-tuning multiplies the total computation by a factor of k+1, which includes the feed-forward and, optionally, the more complex back-propagation phases for the augmented samples. We refer to this method as "augmentation in the pixel space," as the new images are produced by techniques such as horizontal flip (mirroring), vertical flip, rotation, and cropping applied to the original image, as illustrated in Fig. 2-a. The new relationships can be written as follows:

e_{a(x)} = f(a(x)),    ŷ_a = g(e_{a(x)})    for each a ∈ A
III-C Augmentation in the embedding space

The core of our proposed method is that the embedding of an augmented image (e_{a(x)}) can be generated directly from the embedding of the original image (e_x). In other words, e_x can be approximately transformed into e_{a(x)} by a simple nonlinear function, referred to as an augmentation transformer and denoted by T_a. Eq. 6 shows the functionality of T_a for a specific augmentation a:

T_a(e_x) = T_a(f(x)) ≈ f(a(x)) = e_{a(x)}    (6)
A neural network with a few FC layers can be used to implement the augmentation transformer T_a. The parameters of T_a can be optimized by training on the same dataset and computation platform as the base model. This process is done after the base model is completely trained, so the parameters of the feature network f are kept fixed for later transfer learning on target devices. The inputs and targets for training T_a are the embeddings of original images (e_x) and the embeddings of images augmented in the pixel space (e_{a(x)}), respectively. An input image x from the base dataset is passed through the sub-network f and its embedding e_x is recorded. In addition, an augmentation a in the pixel space is applied to x, the result is passed through f, and its embedding e_{a(x)} is collected. This process is repeated separately for each augmentation. The objective is to find the parameters of each T_a network that optimize the approximation in Eq. 6. Mean squared error (MSE) is used as a simple similarity loss, and the objective function can be formulated as Eq. 7:

min_{T_a} Σ_{x ∈ D} ‖ T_a(f(x)) − f(a(x)) ‖²    (7)

The parameters can be optimized by back-propagating this loss using any gradient-descent-based method.
The flow of training the augmentation transformations is illustrated in Fig. 2-b. It should be emphasized that different augmentation transformations are implemented and trained with separate neural network models, even though they can have the same architectures. This step can be done on the cloud and can be passed alongside the base model to be used for later transfer learning purposes.
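A sketch of this training step in PyTorch follows. The embedding size of 512 matches the ResNet-18 setup described in the experiments; the batch of embedding pairs is a hypothetical stand-in for embeddings precomputed by the frozen feature network:

```python
import torch
import torch.nn as nn

embed_dim = 512  # assumed embedding size (e.g., ResNet-18)

# Augmentation transformer: a small MLP mapping the embedding of an
# original image to an approximation of the augmented image's embedding.
transformer = nn.Sequential(
    nn.Linear(embed_dim, 2 * embed_dim), nn.ReLU(),
    nn.Linear(2 * embed_dim, embed_dim),
)
opt = torch.optim.SGD(transformer.parameters(), lr=0.01)
mse = nn.MSELoss()

def train_step(e_x, e_ax):
    """One optimization step on a batch of (original, augmented) embedding
    pairs, both precomputed by the frozen feature network."""
    opt.zero_grad()
    loss = mse(transformer(e_x), e_ax)
    loss.backward()
    opt.step()
    return loss.item()

# Dummy embedding pairs standing in for f(x) and f(a(x)).
e_x, e_ax = torch.randn(32, embed_dim), torch.randn(32, embed_dim)
loss = train_step(e_x, e_ax)
```

Since the pairs are precomputed once, the feature network itself never runs during this training loop.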

III-D Computation analysis

As shown in Fig. 2, when an image augmentation is applied in the pixel space, the total training computation consists of computing the embedding (C_f), mapping the embedding to the output (C_g), and back-propagation on the layers that are not frozen, i.e., only the FC layers (C_b). On the other hand, if augmentation is done in the embedding space, the embedding of the original image is computed only once for all the different augmentations and is then transformed using the T_a's for fine-tuning the model. The total computation in this scenario consists of transforming the embedding of the original image into the augmented one (C_T), plus C_g and C_b as in the first case. Augmentation transformers (T_a's) are simple nonlinear functions implemented with FC layers; the number of parameters in these layers, and hence the required computation, is expected to be much smaller than that of f, so C_T is expected to be much lower than C_f. The relative total computation saving achieved by applying k different augmentations in the embedding space (C_embed) instead of the pixel space (C_pixel) is formulated in Eq. 8:

Saving = 1 − C_embed / C_pixel = 1 − ( C_f + k·C_T + (k+1)(C_g + C_b) ) / ( (k+1)(C_f + C_g + C_b) )    (8)

In this equation, we ignore the cost of the augmentation itself in the pixel space.
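To make the saving concrete, here is a back-of-the-envelope cost model. All the cost numbers below are assumed purely for illustration; only the structure matters, namely that the feature pass runs once instead of k+1 times:

```python
# Illustrative, assumed per-sample costs (arbitrary units): the convolutional
# feature pass C_f dominates the FC classifier pass C_g, the FC back-prop
# C_b, and the augmentation-transformer pass C_T.
C_f, C_g, C_b, C_T = 100.0, 1.0, 2.0, 0.5
k = 1  # number of augmentations per original image

pixel = (k + 1) * (C_f + C_g + C_b)            # feature pass for every copy
embed = C_f + k * C_T + (k + 1) * (C_g + C_b)  # feature pass once; T_a cheap

saving = 1.0 - embed / pixel
print(f"relative saving: {saving:.2%}")
# As C_f dominates, the saving approaches k / (k + 1), i.e., ~50% for k = 1.
```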

IV Experiments and Simulation Results

The implementations are done with PyTorch [17], cuDNN (v7.0), and CUDA (v10.1). We study our proposed method on the image classification task and report top-1 accuracy.

IV-A DCNN architectures and image datasets

For a better evaluation of our proposed method, we ran our experiments with several state-of-the-art DCNN architectures. VGG-16 [21] is an architecture with 16 layers: 13 convolutional layers followed by 3 FC layers. The ResNet family [8] of architectures uses identity shortcut connections that skip one or more layers; the main advantage of residual blocks is overcoming the vanishing-gradient problem in deep networks. This architecture can be instantiated at different depths. Given our target scenario of deployment on embedded devices, we chose ResNet-18, with 17 convolutional layers and a single FC layer at the end. Inception-v3 [22, 23] is designed with filters of multiple sizes to extract features even when the size of the salient part of the image varies. The size of the embedding is 512 for VGG-16 and ResNet-18, and 1028 for Inception-v3. CIFAR100 and CIFAR10 [11] are chosen as the base and target datasets, respectively.

We focus on horizontal and vertical flip augmentations, two of the most widely used functions in computer vision applications. We use three different setups for training the base networks: 1) no augmentation, 2) only horizontal flip, and 3) both horizontal and vertical flips, applied in the pixel space. The results summarized in Table II show the importance of augmentation: a horizontal flip improves the evaluation results by about 10% on average. As mentioned earlier, the usefulness of an augmentation is highly dataset-dependent, and as the results suggest, the vertical flip did not help the training but instead reduced the accuracy by about 2% for ResNet-18 and Inception-v3. After this phase, the parameters of the feature network f are fixed to be used as our base network for training the T_a's and for transfer learning.

IV-B Training augmentation transformers (T_a)

The embedding transformation functions are implemented as two FC layers with ReLU activation serving as the nonlinearity of the hidden layer. The input and output sizes of the T_a's are the same as the base network's embedding size, and the size of the hidden layer is set to twice the input size. It should be mentioned that a few other architectures, such as deeper or wider ones, were also tried; however, the results did not change significantly.

The training data in this step is the same as the base-data, i.e., CIFAR100. To train each T_a for an augmentation a, a training image x and its augmented version a(x) are fed through f to generate e_x and e_{a(x)}, used as the input and target, respectively. As the T_a's are simple shallow networks, stochastic gradient descent (SGD) was used for optimization. As the results in Fig. 3 suggest, the evaluation loss of the vertical-flip augmentation transformer during training is much lower when the base network was itself trained with vertical augmentation in the pixel space. This supports the hypothesis that the base network must be exposed to a specific augmentation in order to generalize well to it.

IV-C Transfer learning

For transfer learning on CIFAR10, the classifier sub-network g is detached from the network and replaced with 3 consecutive FC layers (1024, 128, and 10 neurons) serving as the classifier. The parameters of the feature network f are fixed and only this classifier is updated with back-propagation during fine-tuning. We ran four different experiments in this part. In the first case, the input images were not augmented during either the training of the baseline or the fine-tuning. For the remaining cases, we used the same baseline, trained with horizontal augmentation in the pixel space, but varied the transfer learning setup. In the second case, no augmentation was applied during fine-tuning, while in the third, we applied the augmentations in the pixel space before the feed-forward step. Finally, our proposed method, augmentation in the embedding space, was applied: the original image is fed through f, and for an augmentation function a, the embedding of the original image is transformed by the corresponding T_a. This new embedding is then used to fine-tune the classifier. The final accuracy of these models after fine-tuning for 100 epochs is given in Table II, and the training curves are shown in Fig. 4. The results suggest that our method provides slightly lower accuracy than the pixel-space augmentation baseline, but far better accuracy than fine-tuning without any augmentation.
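The embedding-space fine-tuning step described above can be sketched as follows. The sizes are toy values, and the randomly initialized modules are stand-ins: in the real setup, both the frozen feature network and the flip transformer come pre-trained from the base-training phase:

```python
import torch
import torch.nn as nn

embed_dim, n_classes = 512, 10

feature_net = nn.Linear(3072, embed_dim)  # stand-in for the frozen feature net
t_hflip = nn.Sequential(                  # stand-in for a trained transformer
    nn.Linear(embed_dim, 2 * embed_dim), nn.ReLU(),
    nn.Linear(2 * embed_dim, embed_dim),
)
classifier = nn.Sequential(               # new 3-layer FC head (1024-128-10)
    nn.Linear(embed_dim, 1024), nn.ReLU(),
    nn.Linear(1024, 128), nn.ReLU(),
    nn.Linear(128, n_classes),
)
opt = torch.optim.SGD(classifier.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 3072)                  # a flattened batch of target images
y = torch.randint(0, n_classes, (8,))

with torch.no_grad():                     # frozen parts run without gradients
    e = feature_net(x)                    # one feed-forward per original image
    e_aug = t_hflip(e)                    # cheap embedding-space "flip"

# Fine-tune the classifier on original and augmented embeddings alike;
# labels are duplicated because a flip does not change the class.
logits = classifier(torch.cat([e, e_aug]))
loss = criterion(logits, torch.cat([y, y]))
opt.zero_grad()
loss.backward()
opt.step()
```

Note that the expensive feature network runs once per image; the augmented embedding costs only a small MLP pass.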

Fig. 3: The MSE loss value of the T_a trained on vertical augmentation when the base model is trained with and without the vertical augmentation. When the base model is not trained with the target augmentation, the loss value is higher.
Network VGG-16 ResNet-18 Inception-v3
Augmentation Base Network (CIFAR100)
None 64.43% 61.89% 67.87%
Hor. flip 71.47% 74.48% 78.20%
Hor. & Ver. flip 71.47% 72.46% 75.85%
Augmentation Transfer Learning (CIFAR10)
[Pixel]-[Pixel] 64.44% 78.87% 83.98%
[Pixel]-[None] 62.20% 76.25% 82.22%
[Pixel]-[Embed.] 63.68% 78.03% 82.20%
[None]-[None] 56.31% 65.46% 75.23%
TABLE II: The accuracy of the base and target models, trained on CIFAR-100 and CIFAR-10 respectively. The augmentation setup is summarized as [Aug. for training of the base model]-[Aug. for fine-tuning the classifier (g) during transfer learning].
Fig. 4: The transfer learning accuracy results for three different deep models: (left) ResNet-18, (middle) VGG-16, (right) Inception-V3. Each figure shows four different scenarios: 1- The base and target models are both trained without any augmentation (red). 2- The base model is trained with pixel space augmentation and the target model is trained with embedding augmentation which is the proposed approach (yellow). 3- The base model is trained with pixel space augmentation and the target model is trained without any augmentation (black). 4- The base and target models are both trained with pixel space augmentation (green).

The computation required for the FC-based T_a's and the classifier sub-network is negligible compared to the feed-forward pass of the transferred feature network. Therefore, if the network is to be fine-tuned with the original image and only one additional augmentation, the fine-tuning computation consists mainly of extracting the embeddings of the original data. As we do not extract the embedding of the augmented input using the transferred network, almost a full feed-forward pass is saved for each augmentation function.

V Conclusions and Future Work

In this paper, we presented a method for reducing the cost of data augmentation during the transfer learning of neural networks on embedded devices. The results show that our method reduces the computation drastically while the accuracy is only negligibly affected. As future work, more complex augmentations and the effect of composing series of basic augmentations in the embedding space can be addressed.


This work has been done during the internship of Mohammad Saeed Abrishami and Amir Erfan Eshratifar at Clarifai Inc. This research was also sponsored in part by a grant from the Software and Hardware Foundations (SHF) program of the National Science Foundation (NSF).


  • [1] P. Agrawal, J. Carreira, and J. Malik (2015) Learning to see by moving. In ICCV, Cited by: §II.
  • [2] D. Amodei, S. Ananthanarayanan, R. Anubhai, J. Bai, E. Battenberg, C. Case, J. Casper, B. Catanzaro, J. Chen, M. Chrzanowski, A. Coates, G. Diamos, E. Elsen, J. Engel, L. Fan, C. Fougner, A. Y. Hannun, B. Jun, T. Han, P. LeGresley, X. Li, L. Lin, S. Narang, A. Y. Ng, S. Ozair, R. Prenger, S. Qian, J. Raiman, S. Satheesh, D. Seetapun, S. Sengupta, A. Sriram, C. Wang, Y. Wang, Z. Wang, B. S. Xiao, Y. Xie, D. Yogatama, J. Zhan, and Z. Zhu (2015) Deep speech 2: end-to-end speech recognition in english and mandarin. In ICML, Cited by: §I.
  • [3] A. Canziani, E. Culurciello, and A. Paszke (2017) An analysis of deep neural network models for practical applications. ICLR. Cited by: §I.
  • [4] T. Cohen and M. Welling (2016) Group equivariant convolutional networks. In ICML, Cited by: §II.
  • [5] E. D. Cubuk, B. Zoph, D. Mané, V. Vasudevan, and Q. V. Le (2019) AutoAugment: learning augmentation strategies from data. CVPR. Cited by: §I, §II.
  • [6] A. E. Eshratifar, M. S. Abrishami, and M. Pedram (2019) JointDNN: an efficient training and inference engine for intelligent mobile cloud computing services. IEEE Transactions on Mobile Computing (), pp. 1–1. Cited by: §I.
  • [7] A. E. Eshratifar, M. S. Abrishami, D. Eigen, and M. Pedram (2018) A meta-learning approach for custom model training. In AAAI, Cited by: §II.
  • [8] K. He, X. Zhang, S. Ren, and J. Sun (2016-06) Deep residual learning for image recognition. In CVPR, Cited by: §IV-A.
  • [9] D. Ho, E. Liang, I. Stoica, P. Abbeel, and X. Chen (2019) Population based augmentation: efficient learning of augmentation policy schedules. In ICML, Cited by: §II.
  • [10] A. Krizhevsky, I. Sutskever, and G. E. Hinton (2012) ImageNet classification with deep convolutional neural networks. In NIPS 25, pp. 1097–1105. Cited by: §I.
  • [11] A. Krizhevsky (2009) Learning multiple layers of features from tiny images. Technical report University of Toronto. Cited by: §IV-A.
  • [12] B. M. Lake, R. Salakhutdinov, J. Gross, and J. B. Tenenbaum (2011) One shot learning of simple visual concepts. In CogSci, Cited by: §II.
  • [13] Y. LeCun, Y. Bengio, and G. Hinton (2015) Deep learning. Nature 521, pp. 436–444. Cited by: §I.
  • [14] K. Lenc and A. Vedaldi (2015) Understanding image representations by measuring their equivariance and equivalence. In CVPR, Cited by: §II.
  • [15] H. Mhaskar, Q. Liao, and T. Poggio (2017) When and why are deep networks better than shallow ones?. In AAAI, Cited by: §III-A.
  • [16] M. Oquab, L. Bottou, I. Laptev, and J. Sivic (2014) Learning and transferring mid-level image representations using convolutional neural networks. CVPR. Cited by: §I.
  • [17] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer (2017) Automatic differentiation in pytorch. NIPS. Cited by: §IV.
  • [18] L. Perez and J. Wang (2017) The effectiveness of data augmentation in image classification using deep learning. CoRR abs/1712.04621. Cited by: §II.
  • [19] R. Raina, A. Madhavan, and A. Y. Ng (2009) Large-scale deep unsupervised learning using graphics processors. In ICML, pp. 873–880. Cited by: §I.
  • [20] S. Shahhosseini, I. Azimi, A. Anzanpour, A. Jantsch, P. Liljeberg, N. Dutt, and A. M. Rahmani (2019) Dynamic computation migration at the edge: is there an optimal choice?. In GLSVLSI, Cited by: §I.
  • [21] K. Simonyan and A. Zisserman (2015) Very deep convolutional networks for large-scale image recognition. In ICLR, Cited by: §IV-A.
  • [22] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich (2014) Going deeper with convolutions. CVPR. Cited by: §IV-A.
  • [23] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna (2015) Rethinking the inception architecture for computer vision. CVPR. Cited by: §IV-A.
  • [24] T. Tran, T. Pham, G. Carneiro, L. Palmer, and I. Reid (2017) A bayesian data augmentation approach for learning deep models. In NIPS, Cited by: §II.
  • [25] T. Wiatowski and H. Bölcskei (2018) A mathematical theory of deep convolutional neural networks for feature extraction. IEEE Transactions on Information Theory. Cited by: §III-A.
  • [26] Y. Xu, S. J. Pan, H. Xiong, Q. Wu, R. Luo, H. Min, and H. Song (2017) A unified framework for metric transfer learning. IEEE Transactions on Knowledge and Data Engineering. Cited by: §II.
  • [27] J. Yosinski, J. Clune, Y. Bengio, and H. Lipson (2014) How transferable are features in deep neural networks?. In NIPS, Cited by: §I, §II, §II.