DeepN-JPEG: A Deep Neural Network Favorable JPEG-based Image Compression Framework

03/14/2018 · Zihao Liu et al. · Florida International University, Indiana University, University of Miami, Syracuse University

As one of the most fascinating machine learning techniques, the deep neural network (DNN) has demonstrated excellent performance in various intelligent tasks such as image classification. DNNs achieve such performance, to a large extent, by performing expensive training over huge volumes of training data. To reduce the data storage and transfer overhead in smart resource-limited Internet-of-Things (IoT) systems, effective data compression is a "must-have" feature before transferring real-time-produced data for training or classification. While there are many well-known image compression approaches (such as JPEG), we find, for the first time, that a human-vision-oriented image compression approach such as JPEG is not an optimal solution for DNN systems, especially at high compression ratios. To this end, we develop an image compression framework tailored for DNN applications, named "DeepN-JPEG", that embraces the deep cascaded information-processing nature of DNN architectures. Extensive experiments on the "ImageNet" dataset with various state-of-the-art DNNs show that "DeepN-JPEG" can achieve a 3.5x higher compression rate than the popular JPEG solution while maintaining the same accuracy level for image recognition, demonstrating its great potential for storage and power efficiency in DNN-based smart IoT system design.

1. Introduction

Pervasive mobile devices, sensors and the Internet of Things (IoT) are nowadays producing ever-increasing amounts of data. The recent resurgence of neural networks, the deep-learning revolution, further opens the door for intelligent data interpretation, turning data and information into actions that create new capabilities, richer experiences and unprecedented economic opportunities. For example, the deep neural network (DNN) has become the de facto technique behind breakthroughs in a myriad of real-world applications, ranging from image processing, speech recognition and object detection to game playing and driverless cars (lecun2015deep; szegedy2016overview; silver2016alphago; web6; web5; web7).

The marriage of big data and deep learning leads to the great success of artificial intelligence, but it also raises new challenges in data communication, storage and computation (soro2009survey) incurred by the growing amount of distributed data and the increasing DNN model size. For resource-constrained IoT applications, while recent research (liu2016memristor; han2016eie) has addressed the computation- and memory-intensive DNN workloads in an energy-efficient manner, efficient solutions are still lacking for the power-hungry data offloading and storage on terminal devices such as edge sensors, especially in the face of stringent constraints on communication bandwidth, energy and hardware resources. Recent studies show that the latency to upload a single JPEG-compressed input image (152KB) for one inference of the popular CNN "AlexNet" over a stable wireless connection (870ms via 3G, 180ms via LTE, 95ms via Wi-Fi) can exceed the DNN computation time (682ms) on a mobile or cloud GPU (kang2017neurosurgeon). Moreover, the communication energy is comparable to the associated DNN computation energy.

Data compression is an indispensable technique that can greatly reduce the volume of data to be stored and transferred, and thus substantially alleviate the data offloading and local storage cost of terminal devices. As DNNs are contingent upon tons of real-time-produced data, it is crucial to compress this overwhelming data effectively. Existing image compression frameworks (such as JPEG) can compress data aggressively, but they are optimized for the Human Visual System (HVS), i.e. human-perceived image quality, which can lead to unacceptable DNN accuracy degradation at higher compression ratios (CRs) and thus significantly harm the quality of intelligent services. As shown later, testing a well-trained AlexNet with highly compressed JPEG images (rather than high-quality ones) causes an image recognition accuracy reduction on the large-scale ImageNet dataset that almost offsets the improvement brought by a more complex DNN topology, i.e. moving from AlexNet to GoogLeNet (8 layers, 724M MACs vs. 22 layers, 1.43G MACs) (krizhevsky2012imagenet; szegedy2015going). This prompts the need to develop a DNN-favorable deep compression framework.

In this work, we develop, for the first time, a highly efficient image compression framework specifically targeting DNNs, named DeepN-JPEG. Unlike existing compression schemes that take human-perceived distortion as the top priority, DeepN-JPEG preserves the features crucial for DNN classification with guaranteed accuracy and compression rate, thus drastically lowering the cost incurred by data transmission and storage in resource-limited edge devices. Our major contributions are:

  1. We propose a semi-analytical model to capture the differences in image-processing mechanisms between the human visual system (HVS) and deep neural networks in the frequency domain;

  2. We develop a DNN-favorable feature refinement methodology by leveraging the statistical frequency-component analysis of various image classes;

  3. We propose a piece-wise linear mapping function that links the statistical information of refined features to the individual quantization values of the quantization table, so as to optimize the compression rate with minimized accuracy drop.

Experimental results show that DeepN-JPEG achieves much higher compression efficiency (~3.5x) than the JPEG solution while maintaining the same accuracy level at the same hardware cost, demonstrating its great potential for low-cost and ultra-low-power terminal devices, e.g. edge sensors.

2. Background and Motivation

2.1. Basics of Deep Neural Networks

DNNs introduce multiple layers with complex structures to model a high-level abstraction of the data (hinton2006reducing), and exhibit high effectiveness in finding hierarchical patterns in high-dimensional data by leveraging the deep cascaded layer structure (he2016deep; krizhevsky2012imagenet; simonyan2014very; szegedy2015going). Specifically, the convolutional layer extracts feature maps from the inputs by applying kernel-based convolutions, the pooling layer performs a downsampling operation (through max or mean pooling) along the spatial dimensions for volume reduction, and the fully-connected layer computes the class scores based on the weighted results and non-linear activation functions. Softmax regression (or multinomial logistic regression) (bishop2006pattern) is usually adopted in the last layer of most DNNs for the final decision.
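As a minimal sketch of this layer stack (assuming PyTorch; the layer sizes here are purely illustrative and not those of any model evaluated in this paper):

import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Minimal conv -> pool -> fully-connected -> softmax pipeline."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # kernel-based convolution
            nn.ReLU(),                                    # non-linear activation
            nn.MaxPool2d(2),                              # spatial downsampling
        )
        self.classifier = nn.Linear(16 * 16 * 16, num_classes)  # class scores

    def forward(self, x):                    # x: (batch, 3, 32, 32)
        x = self.features(x)                 # -> (batch, 16, 16, 16)
        x = x.flatten(1)
        logits = self.classifier(x)
        return torch.softmax(logits, dim=1)  # final decision probabilities

probs = TinyCNN()(torch.randn(1, 3, 32, 32))

In practice the softmax is usually folded into the cross-entropy loss during training; it is applied explicitly here only to make the final-decision stage visible.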

To perform realistic image recognition, the DNN parameters are trained extensively over an overwhelming amount of input data. For instance, the large-scale ImageNet dataset (imagenet_cvpr09), which consists of 1.3 million high-resolution image samples in 1K categories, is dedicated to training state-of-the-art DNN models for the image recognition task.

2.2. HVS-based JPEG Compression

Figure 1. Brief overview of the JPEG compression pipeline.

It is widely agreed that massive volumes of images and videos, as the major content to be understood by deep neural networks, dominate the wireless bandwidth and storage from edge devices to servers. Hence, in this work, we focus on image compression.

JPEG (wallace1992jpeg) is one of the most popular lossy compression standards for digital images. It also forms the foundation of commonly used video compression formats such as MPEG and H.264 (ratnakar2000efficient). As shown in Fig. 1, for each color component, the input image is first divided into non-overlapping 8x8 pixel blocks, and a 2D Discrete Cosine Transform (DCT) is applied to each block to generate 64 DCT coefficients $F_{u,v}$, $u, v \in \{0, \dots, 7\}$, of which $F_{0,0}$ is the direct current (DC) coefficient and the remaining 63 are alternating current (AC) coefficients. The 64 DCT coefficients of each block are quantized and rounded to the nearest integers as $F^{Q}_{u,v} = \mathrm{round}(F_{u,v} / Q_{u,v})$, where $Q_{u,v}$ is the corresponding entry of the 64-element quantization table provided by JPEG (wallace1992jpeg). The table is designed to preserve the low-frequency components and discard high-frequency details, because the human visual system (HVS) is less sensitive to information loss in high-frequency bands (zhang2017just). As a many-to-one mapping, such quantization is fundamentally lossy (the original coefficients cannot be exactly recovered at the decompression stage), and it generates more shared quantized coefficients (i.e. zeros) for better compression. After quantization, the quantized coefficients are ordered into a "zig-zag" sequence of increasing frequency. Finally, the differentially coded DC and run-length coded AC coefficients are further compressed by lossless Huffman or arithmetic coding. Increasing (reducing) the compression ratio (CR) is usually realized by scaling the quantization table up (down) through the quality factor (QF): a larger QF indicates better image quality but a lower CR. Reversing the aforementioned steps decompresses an image.
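The per-block transform-and-quantize step can be sketched as follows (a simplified illustration assuming NumPy/SciPy; a real JPEG codec also performs color conversion, zig-zag ordering and entropy coding). The table is the standard JPEG luminance quantization table (wallace1992jpeg):

import numpy as np
from scipy.fftpack import dct

# Standard JPEG luminance quantization table (wallace1992jpeg, Table K.1)
Q_LUMA = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
])

def dct2(block):
    """2D DCT of an 8x8 block (separable 1D DCTs)."""
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def quantize_block(block, qtable):
    """Level-shift, 2D-DCT, then divide-and-round: the lossy many-to-one step."""
    coeffs = dct2(block.astype(np.float64) - 128.0)
    return np.round(coeffs / qtable).astype(np.int32)

block = np.random.randint(0, 256, (8, 8))
q = quantize_block(block, Q_LUMA)   # many high-frequency entries become 0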

Figure 2. (a) Accuracy vs. JPEG CR of "AlexNet" for CASE 1/2; (b) CASE 2: accuracy w.r.t. epoch number at various CRs.

2.3. Inefficient HVS Compression for DNNs

DNNs suffer dramatic accuracy loss when existing HVS-based compression techniques are used to aggressively compress the input images for more efficient data offloading and storage. To explore how existing compression impacts DNN accuracy, we conducted the following two sets of experiments. CASE 1: train the DNN model on high-quality JPEG images (QF=100), but test it with images at various CRs or QFs (QF=100, 50, 20). CASE 2: train the DNN model on compressed images at various QFs (QF=100, 50, 20), but test it only with high-quality original images (QF=100). In both cases, a representative DNN, "AlexNet" (krizhevsky2012imagenet), with 5 convolutional layers, 3 fully-connected layers and 60M weight parameters, is trained on the ImageNet dataset for large-scale visual recognition.

As Fig. 2(a) shows, the "top-1" testing accuracies of both cases degrade significantly as the CR increases from 1 to 5 (i.e. QF from 100 to 20). At the highest CR (QF=20, CR=5), the accuracies of CASE 1 and CASE 2 drop considerably relative to the original setting (QF=100, CR=1). Note that the accuracy improvement on ImageNet from "AlexNet" to "GoogLeNet" is comparatively small, despite the significantly increased number of layers (8 vs. 22) and multiply-accumulates (724M vs. 1.43G). We also observe that CASE 2 always exhibits a smaller accuracy reduction than CASE 1 across all CRs from 3 to 5. This indicates that training the DNN with more compressed JPEG images (compared with the testing ones) can slightly alleviate the accuracy drop, but cannot completely address the issue. As Fig. 2(b) shows, the accuracy gap between a high CR (low QF, i.e. QF=20) and the original setting (CR=1) for CASE 2 is maximized at the last testing epoch. Apparently, existing compression schemes like JPEG, which are centered around the human visual system, are not optimal for DNNs, especially at higher compression ratios.
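The compressed variants for the two cases can be produced with any standard JPEG encoder; a minimal sketch assuming Pillow (note that Pillow's quality parameter corresponds only approximately to the QF values discussed here):

from PIL import Image

def make_variants(src_path, quality_factors=(100, 50, 20)):
    """Re-encode one image at several JPEG quality factors,
    producing CASE 1/2-style training and testing inputs."""
    img = Image.open(src_path).convert('RGB')
    for qf in quality_factors:
        img.save(f'{src_path}.qf{qf}.jpg', 'JPEG', quality=qf)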

3. Our Approach

Developing efficient compression frameworks has been widely studied for applications like image and video processing; however, these efforts take human-perceived distortion as the top priority, rather than the unique properties of deep neural networks, such as accuracy and deep cascaded data processing. In this section, we first examine how the human visual system and deep neural networks process images differently, and then propose the DNN-favorable JPEG-based image compression framework, "DeepN-JPEG".

3.1. Modeling the Difference between HVS and DNN

Figure 3. Feature degradation impacts classification.

We begin with an interesting question: what are the major differences in image processing between the human visual system (HVS) and a deep neural network? Answering it helps explain the aforementioned accuracy reduction and guides the development of a DNN-favorable compression framework. Our observation is that a DNN can respond to any important frequency component precisely, whereas the human visual system focuses more on low-frequency information than on high-frequency information, so fewer features remain for the DNN to learn after HVS-inspired compression. Assume $x$ is a single pixel of a raw image $X$; in JPEG compression it can be represented through the DCT as:

$$x = \sum_{i=1}^{64} c_i\, f_i \qquad (1)$$

where $c_i$ and $f_i$ are the DCT coefficient and corresponding basis function at the 64 different frequencies, respectively. Because the human visual system is less sensitive to high-frequency components, a higher CR can be achieved in JPEG compression by intentionally discarding the high-frequency parts, i.e. zeroing out the associated DCT coefficients through scaled quantization. DNNs, on the contrary, examine the importance of frequency information in a quite different way. The gradient of the DNN function $\Phi(X)$ with respect to a basis function $f_i$ can be calculated as:

$$\frac{\partial \Phi(X)}{\partial f_i} = \frac{\partial \Phi(X)}{\partial x} \cdot \frac{\partial x}{\partial f_i} = \frac{\partial \Phi(X)}{\partial x} \cdot c_i \qquad (2)$$

Eq. 2 implies that the contribution of a frequency component ($f_i$) of a single pixel to DNN learning is mainly determined by its associated DCT coefficient ($c_i$) and the importance of the pixel ($\partial \Phi(X)/\partial x$). Here $\partial \Phi(X)/\partial x$ is obtained after DNN training, while $c_i$ is distorted by image compression (i.e. quantization) before training. If $c_i = 0$, the frequency feature ($f_i$), which may carry important details for DNN feature-map extraction, cannot be learned by the DNN for weight updating, causing lower accuracy.

This is often the case in a highly compressed JPEG image, given that the $c_i$ of high-frequency components (usually small in natural images) are quantized to zero to ensure a better compression rate. As a result, DNNs can easily misclassify aggressively compressed images whose original versions contain important high-frequency features. In CASE 1 (see Fig. 2(a)), the DNN model trained with original images learns comprehensive features, including high-frequency ones that are important in some images; such features are lost in the more compressed testing images, causing a considerable misclassification rate. Fig. 3 demonstrates such an example: the "junco" is mis-predicted as "robin" after removing the top six high-frequency components, even though the difference is almost indistinguishable to human eyes. In CASE 2 (see Fig. 2(b)), the model is trained to make decisions solely based on the limited number of features learned from more compressed training images, so the additional features in high-quality testing images cannot be exploited by the DNN to improve accuracy.
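The junco/robin effect can be approximated by zeroing the highest-frequency DCT coefficients of each 8x8 block before reconstruction; a sketch for a single grayscale block (ordering bands by u+v is a coarse stand-in for the zig-zag scan):

import numpy as np
from scipy.fftpack import dct, idct

def drop_high_freq(block, keep=58):
    """Zero out the (64 - keep) highest-frequency DCT coefficients of an
    8x8 block, then reconstruct the pixels. keep=58 drops the top six."""
    coeffs = dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')
    u, v = np.meshgrid(range(8), range(8), indexing='ij')
    order = np.argsort((u + v).ravel())          # coarse low-to-high ordering
    mask = np.zeros(64, dtype=bool)
    mask[order[:keep]] = True                    # keep only the lowest bands
    coeffs = coeffs * mask.reshape(8, 8)
    return idct(idct(coeffs, axis=0, norm='ortho'), axis=1, norm='ortho')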

Figure 4. An overview of the heuristic design flow of the "DeepN-JPEG" framework.

3.2. DNN-Oriented DeepN-JPEG Framework

To develop the "DeepN-JPEG" framework, it is essential to minimize the distortion of the frequency features most important to the DNN, so as to maintain accuracy as much as possible. As quantization is the principal cause of important feature loss, i.e. removing less significant high-frequency parts through larger quantization steps in JPEG, the key step of "DeepN-JPEG" is to redesign the HVS-inspired quantization table to be DNN-favorable, i.e. achieving a better compression rate than JPEG without losing needed features. Although quantization-table redesign has proven to be a feasible solution in various applications, such as feature detection (chao2013design) and visual search (duan2012optimizing), it is an intractable optimization problem for "DeepN-JPEG" because of the complexity of parameter searching (hopkins2017simulated) and the difficulty of defining a quantitative measurement suitable for DNNs. For example, it is non-trivial to characterize the implicit relationship between image feature (or quantization) errors and DNN accuracy loss. Moreover, the characterized results may vary with the DNN structure. Therefore, it is very challenging to develop a generalized DNN-favorable compression framework.

Our analysis in Section 3.1 indicates that the contribution of a frequency band to DNN learning is strongly related to the magnitude of the band's coefficients. Inspired by this key observation, "DeepN-JPEG" is developed upon a heuristic design flow (see Fig. 4): 1) sample representative raw images from each class and characterize the importance of each frequency component through frequency analysis on the sampled sub-dataset; 2) link the statistical information of each feature to the quantization step of the quantization table through the proposed "Piece-wise Linear Mapping".

3.2.1. Image Sampling and Frequency Component Analysis

In the "DeepN-JPEG" framework, our first step is to sample all classes within the labeled dataset for a more comprehensive feature analysis. The compressed size of an image implies its feature complexity: a smooth image with simple features compresses to a small size, while a larger size indicates more complex features. To extract representative features from the whole dataset and rank their importance to the DNN, we characterize the un-quantized DCT coefficient distribution at each frequency band, since the distribution represents the energy of a frequency component (Rei:TC1983). Previous studies (Rei:TC1983) have proven that the un-quantized coefficients can be approximated by a normal (or Laplace) distribution with zero mean but different standard deviations ($\sigma_k$). A larger $\sigma_k$ indicates more energy in band $k$, and hence more contribution to DNN feature learning. As shown in Algorithm 1, each sampled image is first partitioned into 8x8 blocks, followed by a block-wise DCT. The DCT coefficient distribution at each frequency band is then characterized by collecting all coefficients of the same frequency band across all image blocks from different classes of the dataset. The statistical information, such as the standard deviation of each coefficient, is calculated from each individual histogram. Note that such a frequency refinement procedure can precisely identify the most significant features for the DNN, unlike the simple assumption that the low-frequency part is always more important than the high frequencies, which can easily lead to DNN accuracy reduction.

Input:  C: # of classes; N: # of images per class; s: sampling interval
Output: σ_1 .. σ_64: standard deviation of each of the 64 frequency components

Spath ← ∅                                     // paths of sampled images
foreach class c in [1 .. C] do
    m ← 0                                     // image counter within class c
    foreach image I in [1 .. N] do
        m ← m + 1
        if m mod s = 0 then
            append path of I to Spath
foreach sampled image I_i in Spath do
    F_i ← 8x8 block-wise DCT(I_i)             // image in frequency domain
    foreach block j in [1 .. Nblock] do       // Nblock: # of 8x8 blocks
        foreach frequency component k in [1 .. 64] do
            add F_i[j][k] to histogram H_k    // ith image, jth block, kth band
// statistical analysis
foreach frequency component k in [1 .. 64] do
    σ_k ← standard deviation of H_k
return σ_1 .. σ_64
Algorithm 1: Frequency component analysis
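A compact Python rendering of Algorithm 1 (a sketch under simplifying assumptions: grayscale images, in-memory iteration, and per-band coefficient lists instead of streaming histograms):

import numpy as np
from scipy.fftpack import dct

def frequency_stats(images, sample_every=10):
    """Per-band standard deviation of un-quantized DCT coefficients
    (Algorithm 1). `images`: iterable of 2-D grayscale uint8 arrays;
    every `sample_every`-th image is kept."""
    bands = [[] for _ in range(64)]             # one coefficient list per band
    for m, img in enumerate(images, start=1):
        if m % sample_every != 0:               # sampling step of Algorithm 1
            continue
        h, w = (d - d % 8 for d in img.shape)   # crop to a multiple of 8
        img = img[:h, :w].astype(np.float64)
        for i in range(0, h, 8):
            for j in range(0, w, 8):
                block = dct(dct(img[i:i+8, j:j+8], axis=0, norm='ortho'),
                            axis=1, norm='ortho')
                for k, c in enumerate(block.ravel()):
                    bands[k].append(c)
    return np.array([np.std(b) for b in bands])  # sigma_1 .. sigma_64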

3.2.2. Quantization Table Design

Once the importance of each frequency band to the DNN is identified by the calibrated standard deviations of the DCT coefficients, the next question is how to link this information to the quantization table design to achieve a higher compression rate with minimized accuracy reduction. The basic idea is to introduce smaller (larger) quantization errors in the critical (less critical) bands by leveraging the intrinsic error-resilience property of the DNN. To introduce nonuniform quantization errors across frequency bands, we develop a piece-wise linear mapping function (PLM) that derives the quantization step of each frequency band from the associated standard deviation:

$$Q_k = \begin{cases} b_1 - s_1\,\sigma_k, & \sigma_k < T_1 \quad (\text{HF}) \\ b_2 - s_2\,\sigma_k, & T_1 \le \sigma_k < T_2 \quad (\text{MF}) \\ b_3 - s_3\,\sigma_k, & \sigma_k \ge T_2 \quad (\text{LF}) \end{cases} \qquad (3)$$

where $Q_k$ is the quantization step at frequency band $k$, lower-bounded by $Q_{min}$, the lowest quantization step; $s_1$, $s_2$, $s_3$, $b_1$, $b_2$, $b_3$ are fitting parameters; and $T_1$ and $T_2$ are thresholds that categorize the 64 frequency bands according to $\sigma_k$, i.e. in ascending order of the magnitude of $\sigma_k$. As the right part of Fig. 4 shows, following a frequency segmentation similar to (kaur2011steganographic), the 64 frequency components are divided into three bands: Low Frequency (LF), components 1-6 (largest $\sigma_k$); Middle Frequency (MF), components 7-28; and High Frequency (HF), components 29-64 (smallest $\sigma_k$). $T_1$ and $T_2$ are chosen accordingly in our design, and the three slopes $s_1$, $s_2$, $s_3$ are assigned to the HF, MF and LF bands, respectively.
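Given the per-band standard deviations from Algorithm 1, the mapping of Eq. 3 can be turned into a quantization table as in this sketch (the slopes, intercepts and thresholds below are illustrative placeholders chosen only to make the mapping continuous and decreasing, not the tuned parameters of Section 4):

import numpy as np

def plm_qtable(sigma, q_min=1, t1=8.0, t2=30.0,
               hf=(5.0, 80.0), mf=(1.0, 48.0), lf=(0.5, 33.0)):
    """Piece-wise linear mapping (Eq. 3): a larger sigma_k (more
    DNN-relevant energy) yields a smaller quantization step Q_k.
    All numeric parameters are illustrative placeholders."""
    (s1, b1), (s2, b2), (s3, b3) = hf, mf, lf
    q = np.empty(64)
    for k, s in enumerate(sigma):
        if s < t1:          # high-frequency band (smallest sigma)
            q[k] = b1 - s1 * s
        elif s < t2:        # middle-frequency band
            q[k] = b2 - s2 * s
        else:               # low-frequency band (largest sigma)
            q[k] = b3 - s3 * s
    return np.maximum(q_min, np.round(q)).reshape(8, 8).astype(np.int32)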

4. Design Optimization

Figure 5. Parameter optimization for different frequency bands.
Figure 6. Optimization of the LF-band parameter in the PLM.

In this section, we explore the parameter optimization of our proposed piece-wise-linear-mapping-based quantization table design. To set the optimized parameters of Eq. 3, i.e. the slopes, intercepts and thresholds, we first study the sensitivity of DNN accuracy to the quantization steps across the LF, MF and HF bands. We refer to the proposed band allocation in "DeepN-JPEG" as "magnitude based", i.e. segmenting the frequency components into three bands (LF/MF/HF) according to the magnitude of the standard deviation of their DCT coefficients. For comparison, we also implement a coarse-grained band assignment based on a component's position within the default JPEG quantization table, namely "position based". We conduct the simulations by varying only the quantization steps of the frequency bands of interest, while all others are assigned the minimum quantization step, i.e. introducing no quantization error.

Frequency Band Segmentation. As Fig. 5 shows, the "magnitude based" method always achieves better accuracy than the "position based" one in both the MF and HF bands as the quantization step increases. Moreover, our solution allows a larger quantization step in both the MF and HF bands without accuracy reduction, i.e. 40 vs. 60 in the HF band, which translates into a higher compression rate than that of JPEG. We also observe that DNN accuracy starts to drop once the quantization step in the LF band exceeds a small value, which indicates that, statistically, the largest DCT coefficients are the most sensitive to quantization errors; we therefore set the lower bound $Q_{min}$ of the quantization step accordingly to secure the accuracy (see Fig. 5(a)). Similarly, based on the critical points in Fig. 5(b) and (c), we can obtain the quantization steps at the thresholds $T_1$ and $T_2$, and thus determine the remaining fitting parameters of Eq. 3.

Tuning the LF-Band Parameter. Unlike the parameters of the MF and HF bands, optimizing the LF-band parameter of the PLM is non-trivial because of its significant impact on both accuracy and compression rate. Since it cannot be directly decided from the lower bound and the thresholds, we investigate the correlation between compression rate and accuracy over a range of its values. As Fig. 6 shows, a smaller value offers a better compression rate at the cost of slightly sacrificing DNN accuracy. Based on this observation, we choose the value that maximizes the compression rate while maintaining the original accuracy.

5. Evaluation

Our experiments are conducted on the open-source deep learning framework Torch (torch). The "DeepN-JPEG" framework is implemented by heavily modifying the open-source JPEG codec (IJG). The large-scale ImageNet dataset (imagenet_cvpr09) is adopted to measure the improvement in compression rate and classification accuracy. All images retain their original scales in our evaluation, without any speed-up trick such as resizing or other pre-processing. The parameters of the "DeepN-JPEG" framework are optimized for ImageNet following the procedure of Section 4. Four state-of-the-art DNN models are evaluated in our experiments: AlexNet (krizhevsky2012imagenet), VGG (simonyan2014very), GoogLeNet (szegedy2015going) and ResNet (he2016deep).

Figure 7. Compression rate and accuracy for different methods.
Figure 8. Compression rate and accuracy for different DNN models.

5.1. Compression Rate and Accuracy

We first evaluate the compression rate and classification accuracy of our proposed DeepN-JPEG framework. Three baseline designs are implemented for comparison: the "original" dataset compressed by JPEG (QF=100, CR=1), the "RM-HF" compressed dataset and the "SAME-Q" compressed dataset. Specifically, "RM-HF" extends JPEG by removing the top-N highest-frequency components from the quantization table to further improve the compression rate, while "SAME-Q" denotes a more aggressive compression method that uses the same quantization step for all frequency components.
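The two baseline quantization tables can be constructed as in the following sketch (assuming an 8x8 NumPy base table such as the standard luminance table shown earlier; n=3 and step=4 match the "RM-HF3" and "SAME-Q4" configurations of Section 5.2):

import numpy as np

def rm_hf_table(base, n=3, cap=255):
    """RM-HF: push the top-n highest-frequency entries (largest u+v)
    to the maximum step, effectively discarding those components."""
    q = base.copy()
    u, v = np.meshgrid(range(8), range(8), indexing='ij')
    high_to_low = np.argsort((u + v).ravel())[::-1]
    q.flat[high_to_low[:n]] = cap
    return q

def same_q_table(step=4):
    """SAME-Q: one uniform quantization step for all 64 components."""
    return np.full((8, 8), step, dtype=np.int32)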

Fig. 7 compares the compression rate and accuracy of all selected candidates on the "ImageNet" dataset with the "AlexNet" DNN model. Compared with the "original", "RM-HF" slightly increases the compression rate by removing more of the highest-frequency components (top-3 to top-9), while "SAME-Q" achieves better compression rates. However, both schemes suffer increasing accuracy reduction (w.r.t. "original") as the compression rate grows. In contrast, our "DeepN-JPEG" delivers the best compression rate (~3.5x that of JPEG) while maintaining accuracy similar to that of the original dataset, indicating a promising solution for reducing the data traffic and storage cost of edge devices in deep learning tasks.

Generality of DeepN-JPEG. We also extend our evaluation across several state-of-the-art DNNs to study how the "DeepN-JPEG" framework responds to different DNN architectures, including GoogLeNet, VGG-16, ResNet-34 and ResNet-50. As shown in Fig. 8, our proposed "DeepN-JPEG" always maintains the original accuracy (w.r.t. "Original") for all selected DNN models. Although JPEG can easily achieve a compression rate similar to that of "DeepN-JPEG" by largely reducing the QF value, such aggressive lossy compression significantly degrades the classification performance of all selected DNN models. In contrast, "DeepN-JPEG" preserves both a high compression rate and accuracy for all DNNs, making it a generalized solution.

5.2. Power Consumption

In resource-constrained terminal devices, the power consumed by data offloading can even exceed that of DNN computation in deep learning (kang2017neurosurgeon). Data compression can reduce the associated cost. Following the same measurement methodology as (kang2017neurosurgeon), Fig. 9 shows the power reduction breakdown. Our "DeepN-JPEG"-based data processing consumes only a small fraction of the energy of processing the original dataset, with no accuracy reduction. Compared with "RM-HF3" (removing the top-3 highest-frequency components from the quantization table) and "SAME-Q4" (a uniform quantization value of 4), "DeepN-JPEG" still achieves substantial power reductions, owing to its more efficient data compression.

Figure 9. Evaluation of power consumption for different methods.

6. Conclusion

The ever-increasing data transfer and storage overhead significantly challenges the energy efficiency and performance of large-scale DNN systems. In this paper, we propose a DNN-oriented image compression framework, "DeepN-JPEG", to ease the storage and data communication overhead. Instead of following the Human-Visual-System-inspired JPEG compression, our solution effectively reduces the quantization error of DNN-relevant features based on frequency component analysis and a rectified quantization table, and thereby increases the compression rate without accuracy degradation. Our experimental results show that "DeepN-JPEG" achieves a 3.5x compression rate improvement over conventional JPEG and consumes only a fraction of its power, without classification accuracy degradation, making it a promising solution for data storage and communication in deep learning.

References

  • (1) Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, 2015.
  • (2) C. Szegedy, “An overview of deep learning,” AITP 2016, 2016.
  • (3) D. Silver and D. Hassabis, “AlphaGo: Mastering the ancient game of Go with machine learning,” Research Blog, 2016.
  • (4) https://research.fb.com/category/facebook-ai-research-fair/.
  • (5) https://cloudplatform.googleblog.com/2016/05/Google-supercharges-machine-learning-tasks-with-custom-chip.html.
  • (6) https://www.microsoft.com/en-us/research/research-area/artificial-intelligence/.
  • (7) S. Soro and W. Heinzelman, “A survey of visual sensor networks,” Advances in multimedia, vol. 2009, 2009.
  • (8) C. Liu, Q. Yang, B. Yan, J. Yang, X. Du, W. Zhu, H. Jiang, Q. Wu, M. Barnell, and H. Li, “A memristor crossbar based computing engine optimized for high speed and accuracy,” in VLSI (ISVLSI), 2016 IEEE Computer Society Annual Symposium on.   IEEE, 2016, pp. 110–115.
  • (9) S. Han, X. Liu, H. Mao, J. Pu, A. Pedram, M. A. Horowitz, and W. J. Dally, “EIE: Efficient inference engine on compressed deep neural network,” in Proceedings of the 43rd International Symposium on Computer Architecture.   IEEE Press, 2016, pp. 243–254.
  • (10) Y. Kang, J. Hauswald, C. Gao, A. Rovinski, T. Mudge, J. Mars, and L. Tang, “Neurosurgeon: Collaborative intelligence between the cloud and mobile edge,” in Proceedings of the Twenty-Second International Conference on Architectural Support for Programming Languages and Operating Systems.   ACM, 2017, pp. 615–629.
  • (11) A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems, 2012, pp. 1097–1105.
  • (12) C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 1–9.
  • (13) G. E. Hinton and R. R. Salakhutdinov, “Reducing the dimensionality of data with neural networks,” Science, vol. 313, no. 5786, pp. 504–507, 2006.
  • (14) K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770–778.
  • (15) K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
  • (16) C. M. Bishop, Pattern recognition and machine learning.   springer, 2006.
  • (17) J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “ImageNet: A Large-Scale Hierarchical Image Database,” in CVPR09, 2009.
  • (18) G. K. Wallace, “The JPEG still picture compression standard,” IEEE Transactions on Consumer Electronics, vol. 38, no. 1, pp. xviii–xxxiv, 1992.
  • (19) V. Ratnakar and M. Livny, “An efficient algorithm for optimizing DCT quantization,” IEEE Transactions on Image Processing, vol. 9, no. 2, pp. 267–270, 2000.
  • (20) X. Zhang, S. Wang, K. Gu, W. Lin, S. Ma, and W. Gao, “Just-noticeable difference-based perceptual optimization for JPEG compression,” IEEE Signal Processing Letters, vol. 24, no. 1, pp. 96–100, 2017.
  • (21) J. Chao, H. Chen, and E. Steinbach, “On the design of a novel JPEG quantization table for improved feature detection performance,” in 2013 20th IEEE International Conference on Image Processing (ICIP).   IEEE, 2013, pp. 1675–1679.
  • (22) L.-Y. Duan, X. Liu, J. Chen, T. Huang, and W. Gao, “Optimizing JPEG quantization table for low bit rate mobile visual search,” in 2012 IEEE Visual Communications and Image Processing (VCIP).   IEEE, 2012, pp. 1–6.
  • (23) M. Hopkins, M. Mitzenmacher, and S. Wagner-Carena, “Simulated annealing for JPEG quantization,” arXiv preprint arXiv:1709.00649, 2017.
  • (24) R. Reininger and J. Gibson, “Distributions of the two-dimensional DCT coefficients for images,” IEEE Transactions on Communications, vol. 31, no. 6, pp. 835–839, Jun 1983.
  • (25) B. Kaur, A. Kaur, and J. Singh, “Steganographic approach for hiding image in DCT domain,” International Journal of Advances in Engineering & Technology, vol. 1, no. 3, p. 72, 2011.
  • (26) R. Collobert, K. Kavukcuoglu, and C. Farabet, “Torch7: A MATLAB-like environment for machine learning,” in BigLearn, NIPS Workshop, 2011.
  • (27) IJG, Independent JPEG Group. [Online]. Available: http://www.ijg.org/