1. Introduction
Pervasive mobile devices, sensors and the Internet of Things (IoT) are producing ever-increasing amounts of data. The recent resurgence of neural networks, i.e. the deep-learning revolution, further opens the door for intelligent data interpretation, turning data and information into actions that create new capabilities, richer experiences and unprecedented economic opportunities. For example, deep neural networks (DNNs) have become the de facto technique behind breakthroughs in a myriad of real-world applications ranging from image processing, speech recognition and object detection to game playing and driverless cars [1-6]. The marriage of big data and deep learning leads to the great success of artificial intelligence, but it also raises new challenges in data communication, storage and computation [7] incurred by the growing amount of distributed data and the increasing DNN model size. For resource-constrained IoT applications, while recent research has been conducted [8, 9] to handle the computation- and memory-intensive DNN workloads in an energy-efficient manner, there is a lack of efficient solutions to reduce the power-hungry data offloading and storage on terminal devices like edge sensors, especially in the face of stringent constraints on communication bandwidth, energy and hardware resources. Recent studies show that the latency to upload a JPEG-compressed input image (152 KB) for a single inference of the popular CNN "AlexNet" over a stable wireless connection, i.e. 3G (870 ms), LTE (180 ms) or Wi-Fi (95 ms), can exceed that of the DNN computation itself (682 ms) on a mobile or cloud GPU [10]. Moreover, the communication energy is comparable to the associated DNN computation energy.
Data compression is an indispensable technique that can greatly reduce the volume of data to be stored and transferred, and thus substantially alleviate the data-offloading and local-storage costs of terminal devices. As DNNs are contingent upon tons of data produced in real time, it is crucial to compress this overwhelming data effectively. Existing image-compression frameworks (such as JPEG) can compress data aggressively, but they are typically optimized for the human visual system (HVS), i.e. humans' perceived image quality, which can lead to unacceptable DNN accuracy degradation at higher compression ratios (CRs) and thus significantly harm the quality of intelligent services. As shown later, testing a well-trained AlexNet with aggressively compressed JPEG images (w.r.t. high-quality images) leads to a substantial recognition-accuracy reduction on the large-scale ImageNet dataset, almost offsetting the improvement brought by a more complex DNN topology, i.e. moving from AlexNet to GoogLeNet (8 layers, 724M MACs vs. 22 layers, 1.43G MACs) [11, 12]. This prompts the need for a DNN-favorable deep compression framework.
In this work, we for the first time develop a highly efficient image compression framework specifically targeting DNNs, named "DeepN-JPEG". Unlike existing compression schemes that treat human-perceived distortion as the top priority, DeepN-JPEG preserves the features crucial for DNN classification with guaranteed accuracy and compression rate, thus drastically lowering the cost incurred by data transmission and storage in resource-limited edge devices. Our major contributions are:

We propose a semi-analytical model to capture the differences in image-processing mechanisms between the human visual system (HVS) and deep neural networks in the frequency domain;

We develop a DNN-favorable feature-refinement methodology by leveraging statistical frequency-component analysis across various image classes;

We propose a piecewise linear mapping function to link the statistical information of the refined features to the individual quantization values in the quantization table, thus optimizing the compression rate with minimized accuracy drop.
Experimental results show that DeepN-JPEG achieves much higher compression efficiency than the standard JPEG solution while maintaining the same accuracy level at the same hardware cost, demonstrating its great potential for low-cost and ultra-low-power terminal devices, e.g. edge sensors.
2. Background and Motivation
2.1. Basics of Deep Neural Networks
A DNN introduces multiple layers with complex structures to model a high-level abstraction of the data [13], and exhibits high effectiveness in finding hierarchical patterns in high-dimensional data by leveraging its deeply cascaded layer structure [11, 12, 14, 15]. Specifically, the convolutional layer extracts feature maps from the inputs by applying kernel-based convolutions, the pooling layer performs a down-sampling operation (max or mean pooling) along the spatial dimensions for volume reduction, and the fully-connected layer computes the class scores based on the weighted results and non-linear activation functions. Softmax regression (or multinomial logistic regression) [16] is usually adopted in the last layer of most DNNs for the final decision.
To perform realistic image recognition, the DNN parameters are trained extensively on an overwhelming amount of input data. For instance, the large-scale ImageNet dataset [17], which consists of 1.3 million high-resolution image samples in 1K categories, is dedicated to training state-of-the-art DNN models for image-recognition tasks.
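As a concrete, framework-agnostic illustration of the layer operations described above, the sketch below implements a single-channel convolution and a 2x2 max-pooling step in NumPy; the function names and sizes are our own, not from the paper.

```python
import numpy as np

def conv2d_valid(x, k):
    """Single-channel 'valid' convolution (really cross-correlation,
    as implemented in most DNN frameworks)."""
    h, w = x.shape
    kh, kw = k.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # slide the kernel over the input and accumulate
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def maxpool2(x):
    """2x2 max pooling with stride 2 for spatial down-sampling."""
    h, w = x.shape
    return x[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))
```

A fully-connected layer would then flatten the pooled maps and apply a weight matrix followed by a non-linearity, with softmax producing the final class scores.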
2.2. HVS-based JPEG Compression
It is widely agreed that massive numbers of images and videos, as the major content to be understood by deep neural networks, dominate the wireless bandwidth and storage from edge devices to servers. Hence, in this work we focus on image compression.
JPEG [18] is one of the most popular lossy compression standards for digital images, and its DCT-based coding also underlies commonly used video-compression formats such as Motion JPEG, MPEG and H.264 [19]. As shown in Fig. 1, for each color component, e.g. the RGB channels, the input image is first divided into non-overlapping 8x8 pixel blocks, and the 2-D Discrete Cosine Transform (DCT) is applied to each block to generate 64 DCT coefficients, of which one is the direct-current (DC) coefficient and the remaining 63 are the alternating-current (AC) coefficients. Each of the 64 DCT coefficients c_j is quantized and rounded to the nearest integer as round(c_j / q_j), where q_j is the corresponding entry of the 64-element quantization table provided by JPEG [18]. The table is designed to preserve the low-frequency components and discard high-frequency details, because the human visual system (HVS) is less sensitive to information loss in the high-frequency bands [20]. As a many-to-one mapping, such quantization is fundamentally lossy (i.e. irreversible at the decompression stage), and generates more shared quantized coefficients (i.e. zeros) for better compression. After quantization, all the quantized coefficients are ordered into the "zigzag" sequence of increasing frequency. Finally, the differentially coded DC and run-length coded AC coefficients are further compressed by lossless Huffman or arithmetic coding. Increasing (reducing) the compression ratio (CR) is usually realized by scaling the quantization table up (down) via the quality factor (QF): a larger QF indicates better image quality but a lower CR. Reversing the aforementioned steps decompresses an image.
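A minimal NumPy sketch of the per-block pipeline just described (level shift, 2-D DCT, quantization, zigzag ordering); the entropy-coding stage is omitted and the helper names are our own.

```python
import numpy as np

# Standard JPEG luminance quantization table (Annex K of the standard).
Q_LUMA = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99]], dtype=float)

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix: 2-D DCT is C @ block @ C.T
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C

# JPEG zigzag order: traverse anti-diagonals of increasing frequency
ZIGZAG = sorted(((i, j) for i in range(8) for j in range(8)),
                key=lambda p: (p[0] + p[1],
                               p[0] if (p[0] + p[1]) % 2 else p[1]))

def jpeg_block_encode(block, qtab=Q_LUMA):
    """Level shift, 2-D DCT, quantize, and emit coefficients in zigzag
    order (DC first, highest frequency last)."""
    C = dct_matrix()
    coeff = C @ (block - 128.0) @ C.T
    quant = np.round(coeff / qtab)          # lossy many-to-one step
    return np.array([quant[i, j] for i, j in ZIGZAG])
```

Because the table entries grow toward the high-frequency corner, most trailing entries of the zigzag sequence quantize to zero for natural image blocks, which is what the run-length and entropy coders exploit.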
2.3. Inefficient HVS Compression for DNNs
A DNN suffers dramatic accuracy loss when existing HVS-based compression techniques are used to aggressively compress its input images for more efficient data offloading and storage. To explore how existing compression impacts DNN accuracy, we conducted two sets of experiments. CASE 1: train the DNN model on high-quality JPEG images (QF=100), but test it with images at various CRs or QFs (QF=100, 50, 20). CASE 2: train the DNN model on compressed images at various quality levels (QF=100, 50, 20), but test it only with high-quality original images (QF=100). In both cases, a representative DNN, "AlexNet" [11], with 5 convolutional layers, 3 fully-connected layers and 60M weight parameters, is trained on the ImageNet dataset for large-scale visual recognition.
As Fig. 2 (a) shows, the "top-1" testing accuracies in both cases degrade significantly as the CR increases from 1 to 5 (i.e. QF drops from 100 to 20). At the highest compression (QF=20, CR=5), the accuracy of CASE 1 (CASE 2) drops substantially below that of the original setting (QF=100, CR=1). Note that the accuracy improvement on ImageNet from "AlexNet" to "GoogLeNet" is modest, despite the significantly increased number of layers (8 vs. 22) and multiply-and-accumulates (724M vs. 1.43G). We also observe that CASE 2 always exhibits a smaller accuracy reduction than CASE 1 across all CRs from 3 to 5. This indicates that training the DNN with more heavily compressed JPEG images (compared with the testing ones) can slightly alleviate the accuracy drop, but cannot completely address the issue. As Fig. 2 (b) shows, the accuracy gap between a higher CR (i.e. lower QF, e.g. QF=20) and the original setting (CR=1) for CASE 2 is maximized at the last testing epoch. Apparently, existing compression schemes like JPEG, which are centered on the human visual system, are not optimized for DNNs, especially at higher compression ratios.
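The QF-to-table mapping behind these quality settings follows the standard IJG (libjpeg) convention; a sketch of that scaling, with our own function name:

```python
def scale_quant_table(base, quality):
    """IJG (libjpeg) quality scaling: QF=50 keeps the base table,
    QF<50 scales the steps up (higher CR, lower quality), QF>50
    scales them down; each entry is clamped to [1, 255]."""
    quality = max(1, min(100, quality))
    scale = 5000 // quality if quality < 50 else 200 - 2 * quality
    return [max(1, min(255, (q * scale + 50) // 100)) for q in base]
```

For example, QF=100 collapses every step to 1 (near-lossless, CR close to 1), while QF=20 multiplies every step by 2.5, which is how the higher CRs in CASE 1 and CASE 2 were obtained.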
3. Our Approach
Efficient compression frameworks have been widely studied for applications such as image and video processing; however, all of these works treat human-perceived distortion as the top priority rather than the unique properties of deep neural networks, such as accuracy and deeply cascaded data processing. In this section, we first examine how the human visual system and deep neural networks view image processing differently, and then propose the DNN-favorable JPEG-based image compression framework "DeepN-JPEG".
3.1. Modeling the Difference between HVS and DNN
We begin with an interesting question: what are the major differences between the human visual system (HVS) and a deep neural network in image processing? Answering it helps explain the aforementioned accuracy-reduction issue and guides the development of a DNN-favorable compression framework. Our observation is that DNNs can respond precisely to any important frequency component, whereas the human visual system focuses more on low-frequency information than high-frequency information, meaning that fewer features can be learned by DNNs after HVS-inspired compression. Assume x_i is a single pixel of a raw image X; in JPEG compression it can be represented through the DCT as:
x_i = Σ_{j=1}^{64} c_{ij} · f_j    (1)
where c_{ij} and f_j are the DCT coefficient and the corresponding basis function at the 64 different frequencies, respectively. Because the human visual system is less sensitive to high-frequency components, a higher CR can be achieved in JPEG compression by intentionally discarding the high-frequency parts, i.e. zeroing out the associated DCT coefficients through scaled quantization. In contrast, DNNs assess the importance of frequency information quite differently. The gradient of the DNN function F with respect to a basis function f_j can be calculated as:
∂F/∂f_j = (∂F/∂x_i) · (∂x_i/∂f_j) = (∂F/∂x_i) · c_{ij}    (2)
Eq. 2 implies that the contribution of a frequency component f_j of a single pixel to DNN learning is mainly determined by its associated DCT coefficient c_{ij} and the importance of the pixel, ∂F/∂x_i. Here ∂F/∂x_i is obtained after DNN training, while c_{ij} is distorted by the image compression (i.e. quantization) before training. If c_{ij} = 0, the frequency feature f_j, which may carry important details for DNN feature-map extraction, cannot be learned by the DNN for weight updating, causing lower accuracy.
This is often the case in a highly compressed JPEG image, since the coefficients of high-frequency components (usually small in natural images) are quantized to zero to ensure a better compression rate. As a result, DNNs can easily misclassify aggressively compressed images if their original versions contain important high-frequency features. In CASE 1 (see Fig. 2(a)), the DNN model trained with original images learns comprehensive features, including the high-frequency ones that matter in some images; such features are lost in the more heavily compressed testing images, causing a considerable misclassification rate. Fig. 3 demonstrates such an example: a "junco" is mispredicted as a "robin" after removing the top six high-frequency components, even though the two images are almost indistinguishable to human eyes. In CASE 2 (see Fig. 2(b)), the model is trained to make decisions solely from the limited features learned from the more compressed training images, so the additional features present in the high-quality testing images cannot be exploited by the DNN to improve accuracy.
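The chain rule of Eq. 2 can be sanity-checked numerically on a toy setup; the choice of tanh as a stand-in for the network function F, and all names below, are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
c = rng.normal(size=64)
c[32:] = 0.0                       # quantization zeroed the upper bands
f = rng.normal(size=64)            # basis-function values at one pixel

x = float(c @ f)                   # Eq. (1): pixel from DCT coefficients
F = lambda v: float(np.tanh(v))    # toy differentiable "network" output
dF_dx = 1.0 - np.tanh(x) ** 2      # analytic derivative of tanh

# Perturbing f_j by eps shifts x by eps * c_j, so a finite difference
# recovers dF/df_j directly:
eps = 1e-6
numeric = np.array([(F(x + eps * c[j]) - F(x)) / eps for j in range(64)])
analytic = dF_dx * c               # Eq. (2): dF/df_j = (dF/dx) * c_j
```

The zeroed coefficients produce exactly zero gradient, i.e. those frequency features contribute nothing to weight updates, which is the mechanism behind the accuracy loss discussed above.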
3.2. DNN-Oriented DeepN-JPEG Framework
To develop the "DeepN-JPEG" framework, it is essential to minimize the distortion of the frequency features most important to the DNN, thus maintaining accuracy as much as possible. As quantization is the principal cause of important feature loss, i.e. JPEG removes less significant high-frequency parts with larger quantization steps, the key step of "DeepN-JPEG" is to redesign this HVS-inspired quantization table to be DNN-favorable, i.e. to achieve a better compression rate than JPEG without losing the needed features. Although quantization-table redesign has proven feasible in applications such as feature detection [21] and visual search [22], it is an intractable optimization problem for "DeepN-JPEG" because of the complexity of the parameter search [23] and the difficulty of defining a quantitative measure suitable for DNNs. For example, it is non-trivial to characterize the implicit relationship between image-feature (or quantization) errors and DNN accuracy loss; moreover, the characterized results can vary with the DNN structure. It is therefore very challenging to develop a generalized DNN-favorable compression framework.
Our analysis in Section 3.1 indicates that the contribution of a frequency band to DNN learning is strongly related to the magnitude of the band coefficient. Inspired by this key observation, "DeepN-JPEG" is built upon a heuristic design flow (see Fig. 4): 1) sample representative raw images from each class and characterize the importance of each frequency component through frequency analysis of the sampled sub-dataset; 2) link the statistical information of each feature to the quantization step in the quantization table through the proposed "piecewise linear mapping".
3.2.1. Image Sampling and Frequency Component Analysis
In the "DeepN-JPEG" framework, our first step is to sample images from every class of the labeled dataset for a comprehensive feature analysis; the compressed file size roughly implies the feature complexity of an image, since a smooth image with simple features compresses to a small size while a large size indicates more complex features. To extract representative features from the whole dataset and rank their importance to the DNN, we characterize the unquantized DCT coefficient distribution at each frequency band, since this distribution represents the energy of the frequency component [24]. Previous studies [24] have shown that the unquantized coefficients can be approximated by a normal (or Laplace) distribution with zero mean and band-specific standard deviation σ_j. A larger σ_j indicates more energy in band j, hence more contribution to DNN feature learning. As shown in Algorithm 1, each sampled image is first partitioned into 8x8 blocks, followed by a block-wise DCT. The DCT coefficient distribution at each frequency band is then characterized by collecting all coefficients of the same frequency band across all image blocks from the different classes of the dataset. The statistical information, such as the standard deviation of each coefficient, is calculated from each individual histogram. Note that this frequency-refinement procedure can precisely identify the features most significant to the DNN, unlike the simplistic assumption that low-frequency components are always more important than high-frequency ones, which can easily lead to DNN accuracy reduction.
3.2.2. Quantization Table Design
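The characterization step above (Algorithm 1) can be sketched in NumPy as follows, assuming grayscale images arrive as 2-D arrays; the helper names are our own.

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C

def band_std(images):
    """Collect unquantized DCT coefficients per frequency band across all
    8x8 blocks of the sampled images and return the 64 per-band
    standard deviations sigma_j."""
    C = dct_matrix()
    rows = []
    for img in images:
        h, w = img.shape
        for y in range(0, h - 7, 8):
            for x in range(0, w - 7, 8):
                block = img[y:y + 8, x:x + 8] - 128.0
                rows.append((C @ block @ C.T).ravel())
    return np.std(np.array(rows), axis=0)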
Once the importance of each frequency band to the DNN is identified via the calibrated standard deviation of its DCT coefficients, the next question is how to link this information to the quantization-table design to achieve a higher compression rate with minimized accuracy reduction. The basic idea is to introduce smaller (larger) quantization errors in the critical (less critical) bands by leveraging the intrinsic error-resilience of the DNN. To introduce non-uniform quantization errors across frequency bands, we develop a piecewise linear mapping (PLM) function that derives the quantization step of each frequency band from its standard deviation:
Q_j = max(Q_min, k_1·σ_j + b_1), if σ_j ≥ T_1 (LF)
Q_j = max(Q_min, k_2·σ_j + b_2), if T_2 ≤ σ_j < T_1 (MF)
Q_j = max(Q_min, k_3·σ_j + b_3), if σ_j < T_2 (HF)    (3)
where Q_j is the quantization step at frequency band j, Q_min is the lowest allowed quantization step, k_1, k_2, k_3, b_1, b_2, b_3 are fitting parameters, and T_1 and T_2 are thresholds that categorize the 64 frequency bands according to the magnitude of σ_j. As the right part of Fig. 4 shows, following a frequency segmentation similar to [25], the 64 frequency components are divided into three bands: low frequency (LF), components 1-6 (largest σ_j); middle frequency (MF), components 7-28; and high frequency (HF), components 29-64 (smallest σ_j); T_1 and T_2 are set accordingly in our design. Three different slopes, k_3, k_2 and k_1, are assigned to the HF, MF and LF bands, respectively.
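Under Eq. 3, the table construction can be sketched as follows; the numeric slopes, intercepts and thresholds below are illustrative placeholders, not the fitted values used in the paper.

```python
import numpy as np

Q_MIN = 1.0   # lower-bound step protecting the most sensitive bands

def plm_quant_step(sigma_j, t1, t2,
                   k1=-0.05, b1=8.0,    # LF: small steps, preserve features
                   k2=-0.10, b2=25.0,   # MF
                   k3=-0.50, b3=60.0):  # HF: large steps, aggressive CR
    """Piecewise linear mapping (Eq. 3): a band with more energy
    (larger sigma_j) receives a smaller quantization step."""
    if sigma_j >= t1:          # LF band
        q = k1 * sigma_j + b1
    elif sigma_j >= t2:        # MF band
        q = k2 * sigma_j + b2
    else:                      # HF band
        q = k3 * sigma_j + b3
    return max(Q_MIN, q)

def plm_table(sigma, t1, t2):
    """Build the 64-entry DNN-favorable quantization table."""
    return np.array([plm_quant_step(s, t1, t2) for s in sigma])
```

The clamping to Q_MIN encodes the observation (Section 4) that the statistically largest coefficients tolerate almost no quantization error.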
4. Design Optimization
In this section, we explore the parameter optimization for the proposed piecewise-linear-mapping-based quantization-table design. To set optimized parameters for Eq. 3, i.e. the slopes, intercepts and Q_min, we first study the sensitivity of DNN accuracy to the quantization steps across the LF, MF and HF bands. We refer to the band allocation proposed in "DeepN-JPEG" as "magnitude based", i.e. the frequency bands are segmented into three types (LF/MF/HF) according to the magnitude of the standard deviation of the DCT coefficients. For comparison, we also implement a coarse-grained band assignment based on position within the default JPEG quantization table, namely "position based". We run simulations varying only the quantization steps of the frequency band of interest, while all others are assigned the minimum quantization step, i.e. introducing no quantization error.
Frequency Band Segmentation. As Fig. 5 shows, the "magnitude based" method always achieves better accuracy than the "position based" one in both the MF and HF bands as the quantization step increases. Moreover, our solution allows a larger quantization step in both the MF and HF bands without accuracy reduction, i.e. 40 vs. 60 in the HF band, which translates into a higher compression rate than that of JPEG. We also observe that DNN accuracy starts to drop once the quantization step in the LF band exceeds a small threshold, indicating that the statistically largest DCT coefficients are the most sensitive to quantization errors; we therefore set this threshold as the lower bound Q_min of the quantization values to secure accuracy (see Fig. 5 (a)). Similarly, from the critical points of Fig. 5 (b) and (c), we obtain the quantization steps at the threshold points T_1 and T_2, which determine the parameters such as k_2, b_2, k_3 and b_3.
Tuning k_1 in the LF Band. Unlike the parameters of the MF and HF bands, optimizing k_1 in the LF band is non-trivial because of its significant impact on both accuracy and compression rate. Since k_1 cannot be decided directly from the lower bound Q_min and the threshold T_1, we investigate the correlation between compression rate and accuracy over a variety of k_1 values. As shown in Fig. 6, a smaller k_1 offers a better compression rate while slightly sacrificing DNN accuracy. Based on this observation, we choose the k_1 that maximizes the compression rate while maintaining the original accuracy.
5. Evaluation
Our experiments are conducted with the open-source deep-learning framework Torch [26]. The "DeepN-JPEG" framework is implemented by heavily modifying the open-source JPEG codec [27]. The large-scale ImageNet dataset [17] is adopted to measure the improvements in compression rate and classification accuracy. All images are kept at their original scales in our evaluation, without any speed-up trick such as resizing or other preprocessing. The parameters of the "DeepN-JPEG" framework are optimized for ImageNet following the procedure of Section 4. Four state-of-the-art DNN models are evaluated in our experiments: AlexNet [11], VGG [15], GoogLeNet [12] and ResNet [14].
5.1. Compression Rate and Accuracy
We first evaluate the compression rate and classification accuracy of the proposed DeepN-JPEG framework. Three baselines are implemented for comparison: the "original" dataset compressed by JPEG (QF=100, CR=1), the "RM-HF" compressed dataset, and the "SAME-Q" compressed dataset. Specifically, "RM-HF" extends JPEG by removing the top-N high-frequency components from the quantization table to further improve the compression rate, and "SAME-Q" denotes a more aggressive compression method that uses the same quantization step for all frequency components.
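One way to mimic the two baselines is by simple quantization-table edits; a sketch with our own helper names, using 255 as the maximum JPEG step so the affected coefficients quantize to (near) zero:

```python
import numpy as np

# JPEG zigzag order over the 8x8 block, lowest to highest frequency
ZIGZAG = sorted(((i, j) for i in range(8) for j in range(8)),
                key=lambda p: (p[0] + p[1],
                               p[0] if (p[0] + p[1]) % 2 else p[1]))

def rmhf_table(base, n):
    """'RM-HF' baseline: push the top-n highest-frequency entries of the
    table to the maximum step (255), effectively removing those bands."""
    t = np.array(base, dtype=float)
    for i, j in ZIGZAG[-n:]:
        t[i, j] = 255.0
    return t

def sameq_table(q):
    """'SAME-Q' baseline: one uniform quantization step for all 64 bands."""
    return np.full((8, 8), float(q))
```

For instance, rmhf_table(base, 3) corresponds to the "RM-HF3" configuration and sameq_table(4) to "SAME-Q4" referenced in Section 5.2.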
Fig. 7 compares the compression rate and accuracy of all candidates on the "ImageNet" dataset with the "AlexNet" DNN model. Compared with the "original", "RM-HF" slightly increases the compression rate by removing more of the highest-frequency components (top-3 to top-9), while "SAME-Q" achieves better compression rates. However, both schemes suffer increasing accuracy reduction (w.r.t. "original") as the compression rate grows. In contrast, our "DeepN-JPEG" delivers the best compression rate while maintaining accuracy similar to that of the original dataset, indicating a promising solution for reducing the data-traffic and storage costs of edge devices in deep-learning tasks.
Generality of DeepN-JPEG. We also extend our evaluation to several state-of-the-art DNNs to study how the "DeepN-JPEG" framework responds to different DNN architectures, including GoogLeNet, VGG16, ResNet-34 and ResNet-50. As shown in Fig. 8, the proposed "DeepN-JPEG" always maintains the original accuracy (w.r.t. "original") for all selected DNN models. Although JPEG can reach a compression rate similar to that of "DeepN-JPEG" by greatly reducing the QF value, such aggressive lossy compression significantly degrades the classification performance of all selected DNN models. In contrast, "DeepN-JPEG" preserves both a high compression rate and high accuracy for all DNNs, and is thus a generalized solution.
5.2. Power Consumption
In resource-constrained terminal devices, the power consumed by data offloading can even exceed that of the DNN computation itself [10]. Data compression reduces this cost. Following the measurement methodology of [10], Fig. 9 shows the breakdown of the power reduction. "DeepN-JPEG"-based data processing consumes only a fraction of the energy of handling the original dataset, with no accuracy reduction. Compared with "RM-HF3" (removing the top-3 high-frequency components from the quantization table) and "SAME-Q4" (the same quantization value of 4 for the whole table), "DeepN-JPEG" still achieves additional power reductions thanks to its more efficient data compression.
6. Conclusion
The ever-increasing data-transfer and storage overhead significantly challenges the energy efficiency and performance of large-scale DNN deployments. In this paper, we propose a DNN-oriented image compression framework, "DeepN-JPEG", to ease the storage and data-communication overhead. Instead of following the human-visual-system-inspired JPEG compression, our solution effectively reduces the quantization error of DNN-critical features based on frequency-component analysis and a rectified quantization table, and thereby increases the compression rate without accuracy degradation. Our experimental results show that "DeepN-JPEG" achieves a substantial compression-rate improvement over conventional JPEG and consumes only a fraction of its power without classification-accuracy degradation, making it a promising solution for data storage and communication in deep learning.
References
 (1) Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, 2015.
 (2) C. Szegedy, “An overview of deep learning,” AITP 2016, 2016.
 (3) D. Silver and D. Hassabis, “Alphago: Mastering the ancient game of go with machine learning,” Research Blog, 2016.
 (4) https://research.fb.com/category/facebookairesearchfair/.
 (5) https://cloudplatform.googleblog.com/2016/05/Googlesuperchargesmachinelearningtaskswithcustomchip.html.
 (6) https://www.microsoft.com/enus/research/researcharea/artificialintelligence/.
 (7) S. Soro and W. Heinzelman, “A survey of visual sensor networks,” Advances in multimedia, vol. 2009, 2009.
 (8) C. Liu, Q. Yang, B. Yan, J. Yang, X. Du, W. Zhu, H. Jiang, Q. Wu, M. Barnell, and H. Li, “A memristor crossbar based computing engine optimized for high speed and accuracy,” in VLSI (ISVLSI), 2016 IEEE Computer Society Annual Symposium on. IEEE, 2016, pp. 110–115.
 (9) S. Han, X. Liu, H. Mao, J. Pu, A. Pedram, M. A. Horowitz, and W. J. Dally, “Eie: efficient inference engine on compressed deep neural network,” in Proceedings of the 43rd International Symposium on Computer Architecture. IEEE Press, 2016, pp. 243–254.
 (10) Y. Kang, J. Hauswald, C. Gao, A. Rovinski, T. Mudge, J. Mars, and L. Tang, “Neurosurgeon: Collaborative intelligence between the cloud and mobile edge,” in Proceedings of the TwentySecond International Conference on Architectural Support for Programming Languages and Operating Systems. ACM, 2017, pp. 615–629.

 (11) A. Krizhevsky, I. Sutskever, and G. E. Hinton, "Imagenet classification with deep convolutional neural networks," in Advances in neural information processing systems, 2012, pp. 1097–1105.
 (12) C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, "Going deeper with convolutions," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2015, pp. 1–9.
 (13) G. E. Hinton and R. R. Salakhutdinov, "Reducing the dimensionality of data with neural networks," Science, vol. 313, no. 5786, pp. 504–507, 2006.
 (14) K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770–778.
 (15) K. Simonyan and A. Zisserman, “Very deep convolutional networks for largescale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
 (16) C. M. Bishop, Pattern recognition and machine learning. springer, 2006.
 (17) J. Deng, W. Dong, R. Socher, L.J. Li, K. Li, and L. FeiFei, “ImageNet: A LargeScale Hierarchical Image Database,” in CVPR09, 2009.
 (18) G. K. Wallace, “The jpeg still picture compression standard,” IEEE transactions on consumer electronics, vol. 38, no. 1, pp. xviii–xxxiv, 1992.
 (19) V. Ratnakar and M. Livny, “An efficient algorithm for optimizing dct quantization,” IEEE Transactions on Image Processing, vol. 9, no. 2, pp. 267–270, 2000.
 (20) X. Zhang, S. Wang, K. Gu, W. Lin, S. Ma, and W. Gao, “Justnoticeable differencebased perceptual optimization for jpeg compression,” IEEE Signal Processing Letters, vol. 24, no. 1, pp. 96–100, 2017.
 (21) J. Chao, H. Chen, and E. Steinbach, “On the design of a novel jpeg quantization table for improved feature detection performance,” in Image Processing (ICIP), 2013 20th IEEE International Conference on. IEEE, 2013, pp. 1675–1679.
 (22) L.Y. Duan, X. Liu, J. Chen, T. Huang, and W. Gao, “Optimizing jpeg quantization table for low bit rate mobile visual search,” in Visual Communications and Image Processing (VCIP), 2012 IEEE. IEEE, 2012, pp. 1–6.
 (23) M. Hopkins, M. Mitzenmacher, and S. WagnerCarena, “Simulated annealing for jpeg quantization,” arXiv preprint arXiv:1709.00649, 2017.
 (24) R. Reininger and J. Gibson, “Distributions of the twodimensional dct coefficients for images,” IEEE Transactions on Communications, vol. 31, no. 6, pp. 835–839, Jun 1983.
 (25) B. Kaur, A. Kaur, and J. Singh, “Steganographic approach for hiding image in dct domain,” International Journal of Advances in Engineering & Technology, vol. 1, no. 3, p. 72, 2011.
 (26) R. Collobert, K. Kavukcuoglu, and C. Farabet, “Torch7: A matlablike environment for machine learning,” in BigLearn, NIPS Workshop, 2011.
 (27) IJG. Independent jpeg group. [Online]. Available: http://www.ijg.org/