1 Introduction
Digital images are almost always compressed, exploiting their massive spatial and statistical redundancies in order to save storage space and/or transmission rates. The common practice is to use standard lossy coding formats, such as JPEG, JPEG2000, HEIF, or others. Lossy compression implies a permitted deviation between the resulting compressed-decompressed image and its original version. This error can be controlled by the bit-budget given to the image, creating the well-known rate-distortion tradeoff, which is at the very foundation of information theory [9].
If these images are to be fed to a classification machine for recognition purposes, the compression distortion may induce errors in the decisions made. In such scenarios we are to consider three performance measures that are at odds with each other: rate, distortion, and classification accuracy. This work focuses on this rate-distortion-accuracy tradeoff, aiming to show that improved compression performance is within reach while preserving the standard coding paradigm.
As a case study, our paper focuses on JPEG compression. Among the various available image coding methods, JPEG holds a unique status, being the most commonly used and widely spread. This image format (strictly speaking, JPEG standardizes the decompression, leaving some freedom in the design of the encoder) is the de-facto default in digital cameras and cellphones, in all browsers, and in every image editing software package. JPEG's popularity could be attributed to its relative simplicity, hardware friendliness, reasonable rate-distortion performance, and beyond all these, the perfect timing it had in getting to the market. And so, while much better-performing compression algorithms are already available, JPEG's dominance of the market does not seem to be challenged in the near future.
This popularity has motivated past and present attempts to extract the best performance from JPEG while preserving its essence. In this work we target the choice of the two quantization tables used within the JPEG coding process (see Figure 2). The Luma and the Chroma channels are quantized in the DCT domain while operating on 8x8 blocks. The relative quantization step-sizes for each coefficient are stored in these two tables. Most JPEG packages offer default values, and many vendors adopt these as-is. Are these default tables the best possible ones? As we show in this paper, the answer is negative, and room exists for an improvement of JPEG by redesigning these tables in various ways.
Previous work from the early 90's and since has already identified the potential in better designing the JPEG quantization tables, considering various approaches [11, 6, 27, 38, 28, 13, 12, 8, 39, 40, 35, 15, 37, 30, 32, 36, 18, 17, 20, 10, 25, 7]. The main effort has been directed to rate-distortion performance improvement, using derivative-free optimization techniques. More recent work also considered tuning these tables for better recognition results. More on this is described in Section 2. While addressing the same general goals, the approach we take in this paper is markedly different. We offer a continuous optimization strategy for tuning these two tables, while considering the above-mentioned three performance measures: rate, distortion and accuracy.
Our work considers two different design goals and two modes of optimization. As for the design goals, we consider both rate-distortion and rate-accuracy objectives: the first aims to set the quantization tables for getting the smallest error after compression-decompression for any given bit rate, while the second sets those tables so as to provide the most accurate recognition rates. We address these goals by considering two optimization setups: universal and per-image modes of work. In the universal case we optimize the choice of the quantization tables for a large corpus of images, essentially proposing a replacement to the commonly-used default values. The second setup aims to fit the best tables for each image so as to extract better JPEG performance.
Broadly speaking, we formulate each of the above design problems as a non-convex yet smooth optimization task, where the loss to be minimized varies from one case to another. In all cases, the JPEG encoding, decoding and its bitrate evaluation are all replaced with differentiable proxies. In the classification case, the loss includes a penalty for the accuracy of ResNet [14] (or MobileNet [16]) over the ImageNet dataset. The optimization itself is performed using a mini-batch gradient descent algorithm and back-propagation.
Extensive experiments presented in this paper expose the surprising ability to substantially improve JPEG performance in the two considered scenarios. See Figure 1 for two illustrative examples. In terms of rate-distortion, we show a gain of up to 25% in file-size while maintaining the same image quality (measured in PSNR). Similarly, we show a marked increase in classification accuracy. Our experiments show that the optimized tables for MobileNet are just as effective for ResNet, implying that one optimized set may serve various recognition/classification architectures. We note that our overall methodology could easily be fitted to other compression standards by constructing their differentiable implementation and defining their parameters to be tuned.
2 Related Work
The important role that the quantization tables play in JPEG has been recognized and exploited in past work for forensics, steganography and more (e.g. [11, 6]). In this paper we focus on improving JPEG performance by retuning these tables, a topic that has been investigated in past work as well. In the following we briefly account for the relevant literature on this subject, emphasizing the objectives targeted and the means (i.e. algorithms) for getting their results.
An obvious and expected line of work has dealt with a direct attempt to improve JPEG rate-distortion performance [27, 38, 28, 13, 12, 8, 39, 40, 35, 15]. Papers offering such a treatment differ mainly in the optimization strategy adopted, as the techniques used include simulated annealing [27, 15], coordinate-descent [38, 13, 12, 39], dynamic programming [28], genetic and evolutionary algorithms [8], exhaustive separable search [40] and a swarm intelligence method [35]. Note that all these methods employ derivative-free optimization strategies due to the complex end-to-end function being treated. The work reported in [13] stands out in this group, as it uses the coordinate-descent approach for targeting an image-adaptive adjustment of the quantization tables.

A related line of activity tunes the quantization tables for better visual quality or improved matching to the human visual system [37, 30, 32, 36, 18]. The core idea behind these papers is to optimize the tables while observing the output quality, assessed either via a simplified model of the human visual system, or by relying on subjective tests.
A recent group of papers has been looking at ways to adjust JPEG such that recognition tasks are better served [17, 20, 10, 25, 7, 21]. These papers span a range of decision tasks and optimization techniques. [17] uses a direct rate-distortion optimization on a dedicated face image dataset in order to better handle face recognition. Both [20] and [10] use an evolutionary algorithm, the first for better recognition of eye iris images, and the second optimized for visual search results via pairwise image matching. [25, 7] consider general scale-space feature detection accuracy, and optimize the quantization tables using simple frequency domain considerations. In this context, we also mention a parallel body of work that touches on the same goal of improving classification results, referring to alternative compression methods [24, 4, 5].

Our work differs from the above in two distinct ways. First, as we replace JPEG with a differentiable proxy, we can use continuous optimization methods. Adopting a deep-learning point of view, we use mini-batch gradient descent and back-propagation, which provide a better potential to reach deeper minima values. Second, our treatment is general, fusing the above and more modes of design into one holistic scheme. Indeed, our work could be considered as an extension of the broad view in [23, 33, 34] that proposed an optimization of a general image pre-processing stage while using recognition-related or other losses.

3 The Proposed Methodology
We now describe our methodology for the optimal design of the quantization tables. We start by introducing our notations.
3.1 Differentiable JPEG
We denote by JPEG(x; QF, T) the JPEG compression-decompression of the image x using quality factor QF and quantization tables T. The compression-decompression process is illustrated in Figure 2. JPEG(x; QF, T) is used within our loss function and thus it should be differentiable. Our implementation (which refers to the YUV420 and YUV444 Luma-Chroma subsampling options, but can easily be adapted to alternatives) follows the one reported in [31]. Next we explain each step of the differentiable JPEG encoder/decoder shown in Figure 2.
Color conversion: An RGB image is converted to the YUV color space. Since this color conversion is a matrix multiplication, its derivatives are well defined.
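A minimal sketch of this step (our own illustration, not the paper's code), using the standard JFIF/BT.601 conversion coefficients; because the conversion is affine, gradients pass through it trivially:

```python
import numpy as np

# JFIF (BT.601) RGB -> YCbCr conversion as a single matrix multiply plus offset.
M = np.array([[ 0.299,     0.587,     0.114   ],
              [-0.168736, -0.331264,  0.5     ],
              [ 0.5,      -0.418688, -0.081312]])
offset = np.array([0.0, 128.0, 128.0])

def rgb_to_yuv(img):
    """img: (..., 3) RGB array in [0, 255]; returns YCbCr."""
    return img @ M.T + offset

def yuv_to_rgb(img):
    """Inverse conversion via the matrix inverse of M."""
    return (img - offset) @ np.linalg.inv(M).T

x = np.random.uniform(0, 255, size=(8, 8, 3))
assert np.allclose(yuv_to_rgb(rgb_to_yuv(x)), x)  # the round-trip is exact
```

Since both directions are linear maps (plus a constant offset), any autodiff framework differentiates them exactly.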

Chroma downsampling/upsampling: The YUV image can represent full chroma (YUV444), or subsampled chroma values (YUV420). The downsampling operation is a 2x2 average pooling (YUV444 to YUV420), and the upsampling is implemented with bilinear interpolation (YUV420 to YUV444).
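Both operations are linear and therefore differentiable; a sketch of the chroma plane resampling (our own illustration, using nearest-neighbour repetition in place of the bilinear upsampling kernel for brevity):

```python
import numpy as np

def chroma_downsample(c):
    """YUV444 -> YUV420: 2x2 average pooling on one chroma plane (H, W even)."""
    h, w = c.shape
    return c.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def chroma_upsample(c):
    """YUV420 -> YUV444 sketch. Nearest-neighbour repetition is used here for
    brevity; the actual differentiable pipeline would use a bilinear kernel."""
    return np.repeat(np.repeat(c, 2, axis=0), 2, axis=1)

plane = np.arange(16, dtype=float).reshape(4, 4)
small = chroma_downsample(plane)
assert small.shape == (2, 2)
assert chroma_upsample(small).shape == (4, 4)
```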

DCT and inverse DCT: The DCT coefficients are computed for 8x8 image blocks of each YUV color channel separately. Note that the DCT operation and its inverse are matrix multiplications, and hence differentiable.
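The 2-D DCT of an 8x8 block can be written as D X D^T for an orthonormal DCT-II basis matrix D, which makes its differentiability explicit. A sketch (our own illustration):

```python
import numpy as np

N = 8  # JPEG operates on 8x8 blocks

# Orthonormal DCT-II basis matrix; the 2-D DCT of a block X is D @ X @ D.T.
k = np.arange(N)
D = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * N))
D[0, :] = np.sqrt(1.0 / N)  # DC row gets the smaller normalization

def dct2(block):
    return D @ block @ D.T

def idct2(coeffs):
    return D.T @ coeffs @ D  # D is orthogonal, so its transpose inverts it

X = np.random.randn(N, N)
assert np.allclose(D @ D.T, np.eye(N))   # orthogonality
assert np.allclose(idct2(dct2(X)), X)    # perfect reconstruction
```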

Quantization/Dequantization: The DCT coefficients are quantized using the tables T. Note that the rounding operation has a zero derivative almost everywhere, and consequently cannot be used in our gradient-based learning framework. To alleviate this problem, as Shin et al. [31] suggested, a third-order polynomial approximation of the rounding operation, round(x) + (x - round(x))^3, can be used.
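The cubic correction term is what restores a usable gradient: its derivative, 3(x - round(x))^2, is nonzero almost everywhere, while the surrogate never strays more than 0.125 from true rounding. A sketch (our own illustration of the approximation from [31]):

```python
import numpy as np

def diff_round(x):
    """Differentiable surrogate for rounding: round(x) + (x - round(x))^3.
    Its derivative, 3(x - round(x))^2, is nonzero almost everywhere,
    unlike the true rounding's zero derivative."""
    r = np.round(x)
    return r + (x - r) ** 3

x = np.linspace(-2, 2, 41)
# The surrogate agrees with true rounding at integers...
assert np.allclose(diff_round(np.array([-1.0, 0.0, 3.0])), [-1.0, 0.0, 3.0])
# ...and stays within |x - round(x)|^3 <= 0.125 of it everywhere.
assert np.max(np.abs(diff_round(x) - np.round(x))) <= 0.125 + 1e-12
```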
3.2 Entropy Prediction
We define the function R(x, QF, T) that returns an estimate of the bitrate consumed for JPEG compression of x using quality factor QF and quantization tables T. Recall that when using JPEG with a fixed quality factor, the bitrate is unknown, as it depends on the input image in a non-trivial way. For this function we use the entropy estimator proposed in [2], which operates on the quantized DCT coefficients.
The approximated entropy can be expressed as

    H = E[ -log2 p(y~) ],    (1)

where y~ represents the (noise-relaxed) quantized DCT coefficients and p denotes their density. As shown in [1], the density of y~ = y + u is a continuous relaxation of the probability mass function of the quantized coefficients y, where u is additive i.i.d. uniform noise with the same minimum and maximum as the quantization bins. This means that the differential entropy of y~ can be used as an approximation of the discrete entropy of y. As suggested in [2], the density function can be closely approximated by a non-parametric model consisting of a 3-layer neural network with 3 channels for each hidden layer, followed by a Sigmoid non-linearity. Details of this approach are discussed in [2], and an online implementation is available in [3].

To imitate the actual JPEG encoder, we employ separate entropy estimators for the Luma and Chroma channels and for the DC/AC coefficients (the zero frequency and the non-zero frequencies). This means that a total of four entropy estimators are trained in our framework, and the overall entropy is the sum of these estimated entropies. We also use the DPCM (Differential Pulse Code Modulation) approach, encoding the difference of adjacent DC components in JPEG blocks. Let y(c, i, k) be the k-th DCT component (k = 0, ..., 63) of the i-th block (i = 1, ..., B, where B is the number of blocks) for the Luma (c = Y) and Chroma (c = C) channels respectively, and let H_phi denote the approximate entropy parameterized by phi. Then

    R_Y = H_phi1({y(Y, i, 0) - y(Y, i-1, 0)}) + H_phi2({y(Y, i, k) : k > 0}),
    R_C = H_phi3({y(C, i, 0) - y(C, i-1, 0)}) + H_phi4({y(C, i, k) : k > 0}),    (2)

where R_Y and R_C are the estimated bitrates for the Luma and Chroma channels respectively. The overall estimated bitrate is given by R = R_Y + R_C.
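The uniform-noise relaxation underlying this estimator can be verified numerically: the differential entropy of y + u closely tracks the discrete entropy of the quantized y. A sketch (our own numerical illustration, not the paper's estimator, using a histogram in place of the learned density model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Quantized symbols y = round(z) for Gaussian z. The discrete entropy H(y)
# is not differentiable through the quantizer, but the differential entropy
# of y~ = z + u, with u uniform on (-0.5, 0.5), is a smooth proxy for it.
z = rng.normal(0.0, 2.0, size=200_000)
y = np.round(z)

# Discrete entropy of y (in bits), from empirical frequencies.
vals, counts = np.unique(y, return_counts=True)
p = counts / counts.sum()
H_discrete = -(p * np.log2(p)).sum()

# Histogram estimate of the differential entropy of the noisy relaxation.
y_tilde = z + rng.uniform(-0.5, 0.5, size=z.shape)
hist, edges = np.histogram(y_tilde, bins=200, density=True)
widths = np.diff(edges)
mask = hist > 0
H_relaxed = -(hist[mask] * np.log2(hist[mask]) * widths[mask]).sum()

assert abs(H_discrete - H_relaxed) < 0.2  # the two entropies nearly coincide
```

In the actual framework the histogram is replaced by the small learned density network of [2], so the rate estimate stays differentiable end-to-end.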
Results from our entropy predictor and the actual JPEG bitrates are presented in Figure 3. To generate these results, we randomly sampled 100 images from the ImageNet test set and varied the JPEG quality factor over its range. These results show a strong linear correlation between the estimated and actual BPP values. We also observed that in part of the BPP range, our method slightly overestimates the actual bitrate.
3.3 Classification Loss
The function L_cls(x, l) denotes the standard softmax cross-entropy for predictions from the image x with respect to the reference label l. This function's evaluation includes within it the activation of the classification network in order to produce the predicted label, and the chosen loss between this label and the reference one.
3.4 Overall Training Loss
Using all the above differentiable ingredients, our loss function per image x and quality factor QF is given by the following continuous function:

    L(x, QF; T) = lambda_r * R(x, QF, T) + lambda_d * ||x - JPEG(x; QF, T)||^2 + lambda_c * L_cls(JPEG(x; QF, T), l),    (3)

where lambda_r, lambda_d and lambda_c are three weight coefficients governing the importance of the rate, distortion and classification losses, respectively. By modifying these weights we change the design goal of our optimization. Minimizing this function with respect to T via mini-batch gradient descent and back-propagation leads to the designed optimal quantization tables.
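The weighted combination can be sketched as follows (our own illustration; `jpeg`, `rate_estimate`, and `cls_loss` are hypothetical stand-ins for the differentiable JPEG proxy, the entropy estimator, and the softmax cross-entropy, not the paper's API):

```python
import numpy as np

def total_loss(x, label, tables, qf,
               jpeg, rate_estimate, cls_loss,
               lam_r=1.0, lam_d=1.0, lam_c=0.0):
    """Weighted rate + distortion + classification loss, as a sketch.
    Setting lam_c = 0 recovers the pure rate-distortion objective."""
    x_hat = jpeg(x, qf, tables)                    # compress-decompress proxy
    rate = rate_estimate(x, qf, tables)            # estimated BPP
    dist = np.mean((x - x_hat) ** 2)               # MSE distortion
    acc = cls_loss(x_hat, label) if lam_c else 0.0 # classification penalty
    return lam_r * rate + lam_d * dist + lam_c * acc
```

In a real pipeline the three callables would be differentiable modules, so the gradient of this scalar with respect to the tables drives the mini-batch updates.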
The above loss refers to the per-image case, where the quantization tables are best fitted for a single image and a specific quality factor. We should note that while this mode of operation makes perfect sense for the rate-distortion optimization, it is impractical when classification is involved (i.e., when the classification loss is active). This is due to the need to have the ground-truth label for the image in the optimization loss, information that is unavailable when a new image is given. Still, such a design goal for the quantization tables is of interest, as it sets an upper bound on the attainable accuracy when these tables are somehow image-adapted.
When handling a corpus of images and working over a range of quality factors, the loss function is simply a summation of the per-image losses over these domains.
4 Results
In this section we discuss our experimental results. We optimize and evaluate our framework on the ImageNet benchmark [29]. In the following we first give a detailed overview of the experimental setup, and then go over the results for rate-distortion and rate-accuracy. We report our results from optimizing a single table-pair for all images (universal), and also for the per-image case. Ablation studies are also included to further elaborate on the parameter choices made. The optimized quantization tables for each task are presented in the Appendix.
4.1 Experimental Setup
For all experiments we resize the images (using bilinear interpolation) so as to be consistent with the input dimensions used for training the classification networks used hereafter. The quantization tables are optimized by minimizing the objective in Eq. (3). To evaluate the performance with the obtained tables, we generate the rate-distortion (or accuracy) curves by scaling the tables by a range of quality factors. The scaled tables are rounded and clipped to the valid range before being used for compression. All images are compressed with libjpeg-turbo [22]. For each fixed quality factor, we average the bits-per-pixel (BPP) and the PSNR (or classification accuracy) to produce one point on the rate-distortion (or accuracy) curves. For the universal case, the tables are optimized on the ImageNet training set and evaluated on the eval set. For the per-image case, we randomly choose 1024 images from the eval set and directly optimize on these. We note that the per-image case computes a single table-pair for each image, and does not need to generalize to other images.
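The scale-round-clip step can be sketched as follows (our own illustration, following the classic libjpeg quality-scaling convention; the paper's exact scaling may differ):

```python
import numpy as np

def scale_table(base_table, quality):
    """Scale a base quantization table by a JPEG quality factor, following
    the classic libjpeg convention (jpeg_quality_scaling), then round and
    clip to the valid [1, 255] range."""
    quality = max(1, min(100, quality))
    scale = 5000 // quality if quality < 50 else 200 - 2 * quality
    scaled = np.floor((base_table * scale + 50) / 100)
    return np.clip(scaled, 1, 255).astype(int)

base = np.full((8, 8), 16.0)
assert np.all(scale_table(base, 50) == 16)   # quality 50 keeps the base table
assert np.all(scale_table(base, 100) == 1)   # quality 100 -> minimal quantization
```

Sweeping `quality` over its range and compressing with each resulting table traces out one rate-distortion (or rate-accuracy) curve.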
We tune the rate, distortion and classification weights in Eq. (3) so as to achieve the best overall performance across the full range of bit rates. The effect of adjusting these weights is shown in the ablation study in Section 4.4. For all experiments, the quality factors are randomly sampled from a fixed range at optimization time. All models are optimized for 1M steps using ADAM and initialized with the default JPEG quantization tables; the learning rate and the optimization batch size are fixed across all experiments.
4.2 RateDistortion Optimized Performance
We start by presenting the results for the rate-distortion optimization task. For this experiment, only the rate and distortion weights in Eq. (3) are active. The optimized rate-distortion curves are shown in Figure 4, considering both YUV420 and no chroma subsampling (YUV444). As can be seen, compression with the proposed quantization tables leads to a consistent improvement in PSNR for a fixed bitrate, and this improvement is more significant over part of the bitrate range. For a fixed bits-per-pixel budget, our customized tables outperform the default ones in PSNR; conversely, our tables achieve a reduction in file size while maintaining the same image quality. Samples of compressed images using the optimized quantization tables and the default ones are shown in Figure 5. As can be seen, the found quantization tables reduce the file-size of both images while preserving the image quality, assessed via PSNR. All images are compressed with the libjpeg-turbo library [22].
4.2.1 Comparison With SJPEG
We compare our method with SJPEG [26], an open-source JPEG compression library that supports image-adaptive optimizations of JPEG quantization tables. On a high level, SJPEG performs coordinate descent to minimize the rate-distortion objective D + lambda * R, similar to [13], where lambda is dynamically chosen to approximate the slope of the current point on the RD curve. Figure 5 shows that our universal method outperforms SJPEG, especially over part of the BPP range. We emphasize that SJPEG [26] adapts the quantization table per image and per quality value, whereas our method uses a single table for all quality values and images. For a fair comparison, we only use SJPEG to optimize the quantization table, and apply libjpeg-turbo with the optimized tables from SJPEG for the final statistics.
4.2.2 PerImage Optimized Tables
Next, we compare the performance of the universally optimized tables with the per-image obtained ones. The hyper-parameters are set exactly as in the universal case and the optimization uses the same parameters. The results are evaluated on the same set of randomly sampled images from the ImageNet validation set which were used in the universal and the default tests, and the shown rate-distortion curve is an average over this image set.
As shown in Figure 6, optimizing the quantization tables on a per-image basis leads to a further improvement in PSNR over the universal case, at both lower and higher bit rates.
4.3 Classification Optimized Performance
In the context of rate-accuracy performance, we report the results of our method on three networks: ResNet-V2-101 [14], ResNet-V2-50 [14], and MobileNetV3 [16]. To optimize the JPEG quantization tables for a classification task, we activate the classification weight in Eq. (3) alongside the rate weight. All tables are optimized on the ImageNet training set, and tested on its evaluation set.
Figure 7 presents the rate-accuracy curves for the default and universal tables. The attained gains in accuracy are also plotted for easier visualization. We note that the gain is larger for MobileNetV3, likely due to the fact that it has a smaller network capacity compared to ResNet, and hence is more sensitive to compression artifacts.
Figure 7: (a) BPP vs. accuracy for the default table; (b) accuracy gain.
Figure 8 brings three examples where our optimized tables fix an incorrect classification, while using a lower bitrate. The probabilities reported are obtained from the classification network, representing the confidence in the decisions made. Though visually similar, we can see (best viewed zoomed in) that images compressed with the obtained tables enhance certain texture regions of the image.
Figure 8 (two of the examples): Default "swab" vs. Ours "broom"; Default "Dinmont" (BPP = 1.649) vs. Ours "Norfolk terrier" (BPP = 1.683).
4.3.1 Comparison with Sorted Random Search (SRS)
Sorted Random Search [21] is a simple yet effective method for optimizing the quantization tables for classification. For a fair comparison, we randomly search for 1000 tables as in [21], and randomly sample 5000 images from the training set for each candidate's rate-accuracy evaluation. This ensures that SRS has seen roughly the same number of examples as our mini-batch optimization. We choose a Pareto-optimal table closest to 0.8 BPP for MobileNetV3. Figure 9 (b) shows that our table outperforms SRS on a range of BPP values. A possible explanation is that the SRS method needs a sufficiently large and well-sampled set per epoch (such as the MatchedFrequency strategy used in [21]), whereas our method fully takes advantage of the benefits of mini-batch back-propagation, and results in better performance.
4.3.2 PerImage Results
We turn to present the per-image results. The same subset of 1024 images as in Section 4.2 is used for evaluation. Figure 9 shows a large increase in terms of accuracy. As already mentioned in Section 3, the recognition loss for the per-image tables contains the ground-truth label of the image, and hence the per-image method is not a practical approach (unless the ground-truth label is available, such as in identification tests). Nevertheless, the result still has value, as it shows the performance limit of adapting the JPEG quantization table for the purpose of boosting the classification accuracy.
4.3.3 Generalization to Other Networks
Next, we show that the accuracy performance generalizes across different networks. Figure 10 illustrates the classification accuracy against BPP evaluated using ResNet-101, for quantization tables optimized on MobileNetV3. Observe that the classification accuracy improves across a wide range of rates, even though the tables were computed with a different network's loss.
4.4 Ablation Study
Figure 11 (a) shows the effect of adjusting the relative weighting of the distortion and rate losses in the rate-distortion optimization. We vary one of the weights over a range of values while fixing the other. We see that for larger values of the varied weight, the quantization table in general performs better at lower bitrates, and worse at higher ones. We also observe that the performance is not particularly sensitive to these hyper-parameters, as indicated by the proximity of the RD curves for neighboring weight values. All experiments are optimized and evaluated under the same setup as in Section 4.2 (apart from the varied weight), with chroma subsampling.
In Figure 11 (b)-(d), a similar analysis of the loss weights is provided by varying the classification weight while fixing the rate weight, considering three classification networks: MobileNetV3, ResNet-50 and ResNet-101. Referring to MobileNetV3, we see that larger values of this weight result in a higher gain at lower BPP, and less so when the BPP is high.
Figure 11: (a) rate-distortion; (b) rate-accuracy (MobileNetV3); (c) rate-accuracy (ResNet-50); (d) rate-accuracy (ResNet-101).
5 Conclusion
Lossy image compression methods trade file-size (rate) for faithfulness to the original image content (distortion). A third performance measure influenced by such coding is classification accuracy, as affected by the induced error in the image. In this paper we offer an investigation of the tradeoff between these three performance measures, analyzed in the context of the JPEG compression algorithm. We show that JPEG's quantization tables can be optimized for better rate-distortion or rate-accuracy performance. Our work introduces two modes of optimization: universal and per-image. The universal mode targets a single set of tables for all training images, so as to replace the default ones. The per-image mode assigns the tables for each image, further boosting the rate-distortion behavior. In our future work we aim to train a deep neural network that produces the best quantization tables for any incoming image, this way producing a highly effective per-image treatment. In the context of rate-accuracy behavior, the per-image mode we present here will stand as an upper bound on the achievable results. Our future plans also include a similar treatment of other compression standards.
6 Appendix
Table 1 provides the universally optimized quantization tables for rate-distortion. The optimized tables are computed with chroma subsampling, using the hyper-parameters reported in Section 4. The values of the universally optimized tables are reported as floating-point numbers. To produce the rate-distortion (accuracy) curves of Section 4, the tables are first scaled, then rounded and clipped to the valid range before being passed to the JPEG encoder.
In Table 1, the learned tables for rate-distortion quantize the chroma channels much less than the default tables do. We attribute this to the default tables being designed for the Human Visual System (HVS) rather than for image distortion, since the HVS is less sensitive to the chroma channels. In fact, our framework can easily be adapted to optimize for visual perception by adding perceptual losses [19], or by manually weighting the luma and chroma bitrate losses in Equation 2. Experimentation with perceptual metrics is outside the scope of this paper, and could be an interesting direction for future work.
References
[1] (2017) End-to-end optimized image compression. In Int'l Conf. on Learning Representations (ICLR 2017), Toulon, France.
[2] (2018) Variational image compression with a scale hyperprior. arXiv preprint arXiv:1802.01436.
[3] (2018) TensorFlow compression. https://github.com/tensorflow/compression
[4] (2019) QuanNet: joint image compression and classification over channels with limited bandwidth. In 2019 IEEE International Conference on Multimedia and Expo (ICME), pp. 338-343.
[5] (2019) Faster and accurate classification for JPEG2000 compressed images in networked applications. arXiv preprint arXiv:1909.05638.
[6] (2002) A steganographic method based upon JPEG and quantization table modification. Information Sciences 141(1-2), pp. 123-138.
[7] (2013) On the design of a novel JPEG quantization table for improved feature detection performance. In 2013 IEEE International Conference on Image Processing, pp. 1675-1679.
[8] (2005) Identification of the best quantization table using genetic algorithms. In PACRIM 2005, IEEE Pacific Rim Conference on Communications, Computers and Signal Processing, pp. 570-573.
[9] (2012) Elements of information theory. John Wiley & Sons.
[10] (2012) Optimizing JPEG quantization table for low bit rate mobile visual search. In 2012 Visual Communications and Image Processing, pp. 1-6.
[11] (2006) Digital image ballistics from JPEG quantization. Dept. Comput. Sci., Dartmouth College, Tech. Rep. TR2006-583.
[12] (1997) Designing JPEG quantization matrix using rate-distortion approach and human visual system model. In Proceedings of ICC'97 International Conference on Communications, Vol. 3, pp. 1659-1663.
[13] (1995) Design of image-adaptive quantization tables for JPEG. Journal of Electronic Imaging 4(2), pp. 144-151.
[14] (2016) Identity mappings in deep residual networks. In European Conference on Computer Vision, pp. 630-645.
[15] (2017) Simulated annealing for JPEG quantization. arXiv preprint arXiv:1709.00649.
[16] (2019) Searching for MobileNetV3. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1314-1324.
[17] (2006) JPEG quantization table design for face images and its application to face recognition. IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences 89(11), pp. 2990-2993.
[18] (2011) JPEG image compression using quantization table optimization based on perceptual image quality assessment. In 2011 Conference Record of the Forty Fifth Asilomar Conference on Signals, Systems and Computers (ASILOMAR), pp. 225-229.
[19] (2016) Perceptual losses for real-time style transfer and super-resolution. In European Conference on Computer Vision, pp. 694-711.
[20] (2009) Evolutionary optimization of JPEG quantization tables for compressing iris polar images in iris recognition systems. In 2009 Proceedings of the 6th International Symposium on Image and Signal Processing and Analysis, pp. 534-539.
[21] (2020) Optimizing JPEG quantization for classification networks. arXiv preprint arXiv:2003.02874.
[22] (2018) libjpeg-turbo library. https://libjpeg-turbo.org/
[23] (2019) Transferable recognition-aware image processing. arXiv preprint arXiv:1910.09185.
[24] (2018) DeepN-JPEG: a deep neural network favorable JPEG-based image compression framework. In Proceedings of the 55th Annual Design Automation Conference, pp. 1-6.
[25] (2012) Gradient preserving quantization. In 2012 19th IEEE International Conference on Image Processing, pp. 2505-2508.
[26] (2018) SJPEG library. https://github.com/webmproject/sjpeg
[27] (1993) Optimum DCT quantization. In Proceedings DCC'93: Data Compression Conference, pp. 188-194.
[28] (1995) RD-OPT: an efficient algorithm for optimizing DCT quantization tables. In Proceedings DCC'95: Data Compression Conference, pp. 332-341.
[29] (2015) ImageNet large scale visual recognition challenge. International Journal of Computer Vision 115(3), pp. 211-252.
[30] (1994) JPEG compliant encoder utilizing perceptually based quantization. In Human Vision, Visual Processing, and Digital Display V, Vol. 2179, pp. 117-126.
[31] (2017) JPEG-resistant adversarial images. In NIPS 2017 Workshop on Machine Learning and Computer Security.
[32] (1999) Optimization of quantization table based on visual characteristics in DCT image coding. Computers & Mathematics with Applications 37(11-12), pp. 225-232.
[33] (2019) Image pre-transformation for recognition-aware image compression. In 2019 IEEE International Conference on Image Processing (ICIP), pp. 2686-2690.
[34] (2020) Better compression with deep pre-editing. arXiv preprint arXiv:2002.00113.
[35] (2017) JPEG quantization table optimization by guided fireworks algorithm. In International Workshop on Combinatorial Image Analysis, pp. 294-307.
[36] (2001) Designing JPEG quantization tables based on human visual system. Signal Processing: Image Communication 16(5), pp. 501-506.
[37] (1993) DCT quantization matrices visually optimized for individual images. In Human Vision, Visual Processing, and Digital Display IV, Vol. 1913, pp. 202-216.
[38] (1993) Rate-constrained picture-adaptive quantization for JPEG baseline coders. In 1993 IEEE International Conference on Acoustics, Speech, and Signal Processing, Vol. 5, pp. 389-392.
[39] (2008) Joint optimization of run-length coding, Huffman coding, and quantization table with complete baseline JPEG decoder compatibility. IEEE Transactions on Image Processing 18(1), pp. 63-74.
[40] (2016) Just-noticeable difference-based perceptual optimization for JPEG compression. IEEE Signal Processing Letters 24(1), pp. 96-100.