The Rate-Distortion-Accuracy Tradeoff: JPEG Case Study

08/03/2020 · by Xiyang Luo, et al.

Handling digital images is almost always accompanied by lossy compression in order to facilitate efficient transmission and storage. This introduces an unavoidable tension between the allocated bit-budget (rate) and the faithfulness of the resulting image to the original one (distortion). An additional complicating consideration is the effect of the compression on recognition performance by given classifiers (accuracy). This work aims to explore this rate-distortion-accuracy tradeoff. As a case study, we focus on the design of the quantization tables in the JPEG compression standard. We offer a novel optimal tuning of these tables via continuous optimization, leveraging a differentiable implementation of both the JPEG encoder-decoder and an entropy estimator. This enables us to offer a unified framework that considers the interplay between rate, distortion and classification accuracy. On all these fronts, we report a substantial boost in performance by a simple and easily implemented modification of these tables.


1 Introduction

Digital images are almost always compressed, exploiting their massive spatial and statistical redundancies in order to save storage space and/or transmission rates. The common practice is to use standard lossy coding formats, such as JPEG, JPEG-2000, HEIF, or others. Lossy compression implies a permitted deviation between the resulting compressed-decompressed image and its original version. This error can be controlled by the bit-budget given to the image, creating the well-known rate-distortion tradeoff, which is at the very foundation of information theory [9].

If these images are to be fed to a classification machine for recognition purposes, the compression distortion may induce errors in the decisions made. In such scenarios we are to consider three performance measures that are at odds with each other: rate, distortion, and classification accuracy. This work focuses on this rate-distortion-accuracy tradeoff, aiming to show that improved compression performance is within reach while preserving the standard coding paradigm.

As a case study, our paper focuses on JPEG compression. Among the various available image coding methods, JPEG holds a unique status, being the most commonly used and widely spread. This image format (strictly speaking, JPEG is a decompression standard, leaving some freedom in the design of the encoder) is the de-facto default in digital cameras and cell-phones, in all browsers, and in every image editing software package. JPEG's popularity could be attributed to its relative simplicity, hardware friendliness, reasonable rate-distortion performance, and, beyond all these, the perfect timing it had in getting to the market. And so, while much better-performing compression algorithms are already available, JPEG's dominance of the market does not seem likely to be challenged in the near future.

This popularity has motivated past and present attempts to extract the best performance from JPEG while preserving its essence. In this work we target the choice of the two quantization tables used within the JPEG coding process (see Figure 2). The Luma and the Chroma channels are quantized in the DCT domain while operating on 8×8 blocks. The relative quantization step-sizes for each coefficient are stored in these two tables. Most JPEG packages offer default values, and many vendors adopt these as-is. Are these default tables the best possible ones? As we show in this paper, the answer is negative, and room exists for an improvement of JPEG by redesigning these tables in various ways.
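To make the role of these tables concrete, the following sketch (ours, for illustration; not code from this work) shows the standard IJG/libjpeg convention by which an encoder scales a base table, here the default Luma table from Annex K of the JPEG standard, by the user-chosen quality factor:

```python
import numpy as np

# Default Luma base table from Annex K of the JPEG standard (the
# libjpeg default). Each entry is the quantization step size of one
# of the 64 DCT coefficients in an 8x8 block.
LUMA_BASE = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99],
], dtype=np.float64)

def scale_table(base, quality):
    """Scale a base quantization table by a quality factor in [1, 100],
    following the IJG/libjpeg convention."""
    quality = min(max(quality, 1), 100)
    scale = 5000 / quality if quality < 50 else 200 - 2 * quality
    table = np.floor((base * scale + 50) / 100)
    return np.clip(table, 1, 255)  # valid 8-bit step sizes

# Higher quality -> smaller step sizes -> finer quantization:
print(scale_table(LUMA_BASE, 75))
```

Redesigning the tables amounts to replacing the base matrices fed to such a scaling rule, leaving the rest of the codec untouched.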

Figure 2: JPEG for an image x: In the compression pass, after color conversion, an optional chroma downsampling is applied. Then, the DCT is computed on 8×8 blocks for each color channel. The DCT coefficients are quantized using the tables p and a scalar quality factor q. Going back through each of these steps results in the decompressed image x̃. Our framework relies on the differentiability of all the JPEG steps in order to enable continuous optimization of the quantization tables.

Previous work, from the early 90's and since, has already identified the potential in better designing the JPEG quantization tables, considering various approaches [11, 6, 27, 38, 28, 13, 12, 8, 39, 40, 35, 15, 37, 30, 32, 36, 18, 17, 20, 10, 25, 7]. The main effort has been directed at rate-distortion performance improvement, using derivative-free optimization techniques. More recent work has also considered tuning these tables for better recognition results. More on these is described in Section 2. While addressing the same general goals, the approach we take in this paper is markedly different. We offer a continuous optimization strategy for tuning these two tables, while considering the above-mentioned three performance measures: rate, distortion and accuracy.

Our work considers two different design goals and two modes of optimization. As for the design goals, we consider both rate-distortion and rate-accuracy objectives – the first aims to set the quantization tables so as to get the smallest $\ell_2$-error after compression-decompression for any given bit rate, while the second sets those tables so as to provide the most accurate recognition rates. We address these goals by considering two optimization setups: universal and per-image modes of work. In the universal case we optimize the choice of the quantization tables for a large corpus of images, essentially proposing a replacement to the commonly-used default values. The second setup aims to fit the best tables for each image so as to extract better JPEG performance.

Broadly speaking, we formulate each of the above design problems as a non-convex yet smooth optimization task, where the loss to be minimized varies from one case to another. In all cases, the JPEG encoding, decoding and its bit-rate evaluation are all replaced with differentiable proxies. In the classification case, the loss includes a penalty for the accuracy of ResNet [14] (or MobileNet [16]) over the ImageNet dataset. The optimization itself is performed using a mini-batch gradient descent algorithm and back-propagation.

Extensive experiments presented in this paper expose the surprising ability to substantially improve JPEG performance in the two considered scenarios. See Figure 1 for two illustrative examples. In terms of rate-distortion, we show a gain of up to 25% in file-size while maintaining the same image quality (measured in PSNR). Similarly, we show an ability to increase classification accuracy at matched bit-rates. Our experiments show that the optimized tables for MobileNet are just as effective for ResNet, implying that one optimized set may serve various recognition/classification architectures. We note that our overall methodology could easily be fitted to other compression standards by constructing their differentiable implementation and defining their parameters to be tuned.

2 Related Work

The important role that the quantization tables play in JPEG has been recognized and exploited in past work for forensics, steganography and more (e.g. [11, 6]). In this paper we focus on improving JPEG performance by re-tuning these tables, a topic that has been investigated in past work as well. In the following we briefly account for the relevant literature on this subject, emphasizing the objectives targeted and the means (i.e. algorithms) for getting their results.

An obvious and expected line of work has dealt with a direct attempt to improve JPEG rate-distortion performance [27, 38, 28, 13, 12, 8, 39, 40, 35, 15]. Papers offering such a treatment differ mainly in the optimization strategy adopted, as the techniques used include simulated annealing [27, 15], coordinate-descent [38, 13, 12, 39], dynamic programming [28], genetic and evolutionary algorithms [8], exhaustive separable search [40] and a swarm intelligence method [35]. Note that all these methods employ derivative-free optimization strategies due to the complex end-to-end function being treated. The work reported in [13] stands out in this group, as it uses the coordinate-descent approach for targeting an image-adaptive adjustment of the quantization tables.

A related line of activity tunes the quantization tables for better visual quality or improved matching to the human visual system  [37, 30, 32, 36, 18]. The core idea behind these papers is to optimize the tables while observing the output quality, assessed either via a simplified model of the human visual system, or by relying on subjective tests.

A recent group of papers has been looking at ways to adjust JPEG such that recognition tasks are better served [17, 20, 10, 25, 7, 21]. These papers span a range of decision tasks and optimization techniques. [17] uses a direct rate-distortion optimization on a dedicated face image dataset in order to better handle face recognition. Both [20] and [10] use an evolutionary algorithm, the first for better recognition of eye iris images, and the second optimized for visual search results via pairwise image matching. [25, 7] consider general scale-space feature detection accuracy, and optimize the quantization tables using simple frequency domain considerations. In this context, we also mention a parallel body of work that touches on the same goal of improving classification results, referring to alternative compression methods [24, 4, 5].

Our work differs from the above in two distinct ways. First, as we replace JPEG with a differentiable proxy, we can use continuous optimization methods. Adopting a deep-learning point of view, we use mini-batch gradient descent and back-propagation, which provide a better potential to reach deeper minima values. Second, our treatment is general, fusing the above and more modes of design into one holistic scheme. Indeed, our work could be considered an extension of the broad view in [23, 33, 34], which proposed an optimization of a general image pre-processing stage while using recognition-related or other losses.

3 The Proposed Methodology

We now describe our methodology for the optimal design of the quantization tables. We start by introducing our notations.

3.1 Differentiable JPEG

We denote by JPEG(x, q, p) the JPEG compression-decompression of the image x using quality factor q and quantization tables p. The compression-decompression process is illustrated in Figure 2. JPEG(x, q, p) is used within our loss function and thus it should be differentiable. Our implementation follows the one reported in [31]; it refers to the YUV420 and YUV444 Luma-Chroma sub-sampling options, but can easily be adapted to alternatives. Next we explain each step of the differentiable JPEG encoder/decoder shown in Figure 2.

  1. Color conversion: An RGB image is converted to the YUV color space. Since this color conversion is a matrix multiplication, its derivatives are well defined.

  2. Chroma downsampling/upsampling: The YUV image can represent full chroma (YUV444) or subsampled chroma values (YUV420). The downsampling operation (YUV444 to YUV420) is a 2×2 average pooling, and the upsampling (YUV420 to YUV444) is implemented with bilinear interpolation.

  3. DCT and inverse DCT: The DCT coefficients are computed for 8×8 image blocks of each YUV color channel separately. Note that the DCT operation and its inverse are matrix multiplications, and hence differentiable.

  4. Quantization/Dequantization: The DCT coefficients are quantized using the (quality-scaled) tables p. Note that the rounding operation has zero derivative almost everywhere, and consequently cannot be used in our gradient-based learning framework. To alleviate this problem, as Shin et al. [31] suggested, a third-order polynomial approximation of the rounding operation, $\lfloor x \rceil + (x - \lfloor x \rceil)^3$, can be used; see the sketch below.
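As an illustration, here is a minimal NumPy sketch of this quantization step with the rounding surrogate (function names are ours; in the actual framework the same operation is expressed in an autodiff library so that gradients flow):

```python
import numpy as np

def soft_round(x):
    """Third-order surrogate for round(x), following Shin et al. [31]:
    round(x) + (x - round(x))**3 agrees with rounding at the integers,
    while its gradient, 3 * (x - round(x))**2, is nonzero almost
    everywhere (autodiff treats round() itself as locally constant)."""
    r = np.round(x)
    return r + (x - r) ** 3

def quantize_dequantize(dct_block, table):
    """Differentiable proxy for JPEG quantization followed by
    dequantization of an 8x8 DCT block with a quality-scaled table."""
    return soft_round(dct_block / table) * table
```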

3.2 Entropy Prediction

We define the function R(x, q, p) that returns an estimate of the bit-rate consumed by JPEG compression of x using quality factor q and quantization tables p. Recall that when using JPEG with a fixed quality factor, the bit-rate is not known in advance, as it depends on the input image in a non-trivial way. For R we use the entropy estimator proposed in [2], which operates on the quantized DCT coefficients.

The approximated entropy can be expressed as

$H(\hat{c}) = -\,\mathbb{E}\left[\log_2 P_{\hat{c}}(\hat{c})\right]$,    (1)

where $\hat{c}$ represents the quantized DCT coefficients and $P_{\hat{c}}$ denotes their probability mass function. As shown in [1], the density $p_{\tilde{c}}$ of $\tilde{c} = c + \Delta c$ is a continuous relaxation of the probability mass function $P_{\hat{c}}$, where $\Delta c$ is additive i.i.d. uniform noise with the same minimum and maximum as the quantization bins. This means that the differential entropy of $\tilde{c}$ can be used as an approximation of $H(\hat{c})$. As suggested in [2], the density function $p_{\tilde{c}}$ can be closely approximated by a non-parametric model consisting of a 3-layer neural network with 3 channels for each hidden layer, followed by a sigmoid non-linearity. Details of this approach are discussed in [2] and an online implementation is available in [3].
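A schematic sketch of this rate proxy is given below; the closed-form logistic density is a hypothetical stand-in for the learned non-parametric model of [2]:

```python
import numpy as np

rng = np.random.default_rng(0)

def log2_logistic_density(x, scale=2.0):
    # Hypothetical stand-in for the learned 3-layer density model of [2]:
    # log2 of a logistic density, f(x) = sech(x / 2s)**2 / (4s).
    return np.log2(1.0 / (4.0 * scale) / np.cosh(x / (2.0 * scale)) ** 2 + 1e-20)

def rate_estimate(coeffs, log2_density=log2_logistic_density):
    """Estimate bits per coefficient for unit-width quantization bins.

    Following [1]: adding i.i.d. uniform noise over the quantization bin
    relaxes the discrete entropy of the rounded coefficients into a
    differential entropy, which is differentiable with respect to the
    (table-scaled) coefficients."""
    noisy = coeffs + rng.uniform(-0.5, 0.5, size=np.shape(coeffs))
    return -np.mean(log2_density(noisy))

# Larger table entries shrink the scaled coefficients and, with them,
# the estimated rate:
coeffs = rng.normal(0.0, 8.0, size=1000)
print(rate_estimate(coeffs / 4.0), rate_estimate(coeffs / 16.0))
```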

To imitate the actual JPEG encoder, we employ separate entropy estimators for each DCT channel (Luma and Chroma) and for the DC/AC coefficients (zero frequency and non-zero frequencies). This means that a total of four entropy estimators are trained in our framework, and the overall entropy is the sum of these estimated entropies. Following JPEG, we use the DPCM (Differential Pulse Code Modulation) approach for the DC terms, encoding the difference of adjacent DC components across blocks. Let $y^k_i$ ($u^k_i$, $v^k_i$) be the $i$-th DCT component ($1 \le i \le 64$) of the $k$-th block ($1 \le k \le B$, where $B$ is the number of blocks) for the Luma ($Y$) and Chroma ($U$, $V$) channels respectively, and let $H_\theta$ denote the approximate entropy parameterized by $\theta$. We have

$R_Y = H_{\theta_1}\left(\{y^k_1 - y^{k-1}_1\}_k\right) + H_{\theta_2}\left(\{y^k_i\}_{k,\,i>1}\right)$,
$R_U = H_{\theta_3}\left(\{u^k_1 - u^{k-1}_1\}_k\right) + H_{\theta_4}\left(\{u^k_i\}_{k,\,i>1}\right)$,    (2)
$R_V = H_{\theta_3}\left(\{v^k_1 - v^{k-1}_1\}_k\right) + H_{\theta_4}\left(\{v^k_i\}_{k,\,i>1}\right)$,

where $R_Y$, $R_U$ and $R_V$ are the estimated bit-rates for the Luma and Chroma channels respectively. The overall estimated bit-rate is given by $R = R_Y + R_U + R_V$.
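Schematically, the four estimators combine as follows (the H_* arguments are placeholder callables standing in for the learned models $H_{\theta_1}, \dots, H_{\theta_4}$):

```python
import numpy as np

def total_rate(y, u, v, H_dc_y, H_ac_y, H_dc_c, H_ac_c):
    """Assemble R = R_Y + R_U + R_V from four entropy estimators.

    y, u, v: quantized DCT coefficients, shape (num_blocks, 64).
    DC terms (index 0) are DPCM-coded, so their estimator sees
    block-to-block differences; AC terms are fed in directly."""
    dc_diff = lambda c: np.diff(c[:, 0], prepend=0.0)  # y_1^k - y_1^{k-1}
    r_y = H_dc_y(dc_diff(y)) + H_ac_y(y[:, 1:])
    r_u = H_dc_c(dc_diff(u)) + H_ac_c(u[:, 1:])
    r_v = H_dc_c(dc_diff(v)) + H_ac_c(v[:, 1:])
    return r_y + r_u + r_v
```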

Results from our entropy predictor and the actual JPEG bit-rates are presented in Figure 3. To generate these results, we randomly sampled 100 images from the ImageNet test set and varied the JPEG quality factor over a wide range. These results show a strong linear correlation between the estimated and the actual BPPs. We also observed that our method slightly over-estimates the actual bit-rate in part of the range.

Figure 3: Estimated BPP vs. actual BPP generated from 100 sample images with varying quality factors. The two exhibit a strong (Pearson) linear correlation.

3.3 Classification Loss

The function $F(\tilde{x}, y)$ denotes the standard softmax cross entropy for predictions from the image $\tilde{x}$ with respect to the reference label $y$. Evaluating this function includes activating the classification network on $\tilde{x}$ in order to produce the predictions, and computing the chosen loss between these predictions and the reference label.

3.4 Overall Training Loss

Using all the above differentiable ingredients, our loss function per image $x$ and quality factor $q$ is given by the following continuous function:

$L(p;\, x, q) = \lambda_r\, R(x, q, p) + \lambda_d\, \|x - JPEG(x, q, p)\|_2^2 + \lambda_c\, F(JPEG(x, q, p),\, y)$,    (3)

where $\lambda_r$, $\lambda_d$ and $\lambda_c$ are three weight coefficients governing the importance of the rate, distortion and classification losses, respectively. By modifying these weights we change the design goal of our optimization. Minimizing this function with respect to the tables $p$ via mini-batch gradient descent and back-propagation leads to the designed optimal quantization tables.

The above loss refers to the per-image case, where the quantization tables are best fitted for a single image and a specific quality factor $q$. We should note that while this mode of operation makes perfect sense for the rate-distortion optimization, it is impractical when classification is involved (i.e. if $\lambda_c > 0$). This is due to the need to have the ground-truth label $y$ for the image in the optimization loss, information that is unavailable when a new image is given. Still, such a design goal for the quantization tables is of interest, as it sets an upper-bound on the attainable accuracy when these tables are somehow image-adapted.

When handling a corpus of images $X$ and working over a range of quality factors $Q$, the loss function is simply a summation over these domains, $L(p) = \sum_{x \in X} \sum_{q \in Q} L(p;\, x, q)$.
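Putting the pieces together, the overall objective can be sketched as follows (all callables are the differentiable proxies discussed above, passed in as placeholders; in practice the loss is written in an autodiff framework so that its gradient with respect to p is obtained by back-propagation):

```python
def total_loss(p, x, q, label, lam_r, lam_d, lam_c,
               jpeg_proxy, rate_fn, xent_fn):
    """Eq. (3): weighted sum of rate, distortion and classification terms."""
    x_hat = jpeg_proxy(x, q, p)                  # differentiable JPEG
    loss = lam_r * rate_fn(x, q, p)              # estimated bit-rate
    loss += lam_d * ((x - x_hat) ** 2).mean()    # l2 distortion
    if lam_c > 0:                                # classification term;
        loss += lam_c * xent_fn(x_hat, label)    # needs the true label
    return loss
```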

4 Results

In this section we discuss our experimental results. We optimize and evaluate our framework on the ImageNet benchmark [29]. In the following we first give a detailed overview of the experimental setup, and then go over the results for rate-distortion and rate-accuracy. We report our results from optimizing a single table-pair for all images (universal), as well as the per-image case. Ablation studies are also included to further elaborate on the parameter choices made. The optimized quantization tables for each task are presented in the Appendix.

4.1 Experimental Setup

For all experiments we use images of size 224×224 pixels (resized using bilinear interpolation) so as to be consistent with the input dimensions used for training the classification networks employed hereafter. The quantization tables are optimized by minimizing the objective in Eq. (3). To evaluate the performance with the obtained tables, we generate the rate-distortion (or -accuracy) curves by scaling the tables by a range of quality factors. The scaled tables are rounded and clipped to [1, 255] before being used for compression. All images are compressed with libjpeg-turbo [22]. For each fixed quality factor, we average the bits-per-pixel (BPP) and the PSNR (or classification accuracy) to produce one point on the rate-distortion (or -accuracy) curve. For the universal case, the tables are optimized on the ImageNet training set and evaluated on the eval set. For the per-image case, we randomly choose 1024 images from the eval set and directly optimize on these. We note that the per-image case computes a single table-pair for each image, and does not need to generalize to other images.
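For clarity, the table preparation at evaluation time can be sketched as follows (we assume the IJG quality-scaling convention here; the learned tables themselves are floating-point):

```python
import numpy as np

def table_for_encoder(learned_table, quality):
    """Scale a learned (float) quantization table by a quality factor,
    then round and clip to the [1, 255] range a JPEG encoder expects."""
    scale = 5000.0 / quality if quality < 50 else 200.0 - 2.0 * quality
    table = np.round(learned_table * scale / 100.0)
    return np.clip(table, 1, 255).astype(np.uint8)
```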

We tune the weights $\lambda_r$ and $\lambda_d$ (or $\lambda_c$) in Eq. (3) so as to achieve the best overall performance across the full range of bit rates. The effect of adjusting these weights is shown in the ablation study in Section 4.4. For all experiments, the quality factor is randomly sampled from a fixed range at optimization time. All models are optimized for 1M steps using ADAM with a fixed learning rate and batch size across all experiments, and are initialized with the default JPEG quantization tables.

4.2 Rate-Distortion Optimized Performance

We start by presenting the results for the rate-distortion optimization task. For this experiment the classification term in Eq. (3) is disabled, and the rate and distortion weights are tuned as described above. The optimized rate-distortion curves are shown in Figure 4, considering both YUV420 and no chroma sub-sampling (YUV444). As can be seen, compression with the proposed quantization tables leads to a consistent improvement in PSNR for a fixed bit-rate, with the improvement being more significant over part of the bit-rate range: at a fixed bits-per-pixel budget our customized tables outperform the default ones by a marked PSNR gap, and conversely, our tables achieve a reduction in file size while maintaining the same image quality. Samples of compressed images using the optimized quantization tables and the default ones are shown in Figure 5. As can be seen, the found quantization tables reduce the file-size while preserving the image quality, assessed via PSNR.

Figure 4: PSNR vs. Bits-Per-Pixel (BPP) for the universally optimized quantization tables compared to the default tables, evaluated on ImageNet. Left: YUV420. Right: YUV444.

4.2.1 Comparison With SJPEG

We compare our method with SJPEG [26], an open-source JPEG compression library that supports image-adaptive optimization of the JPEG quantization tables. At a high level, SJPEG performs coordinate descent to minimize the rate-distortion objective $D + \lambda R$, similar to [13], where $\lambda$ is dynamically chosen to approximate the slope of the current point on the R-D curve. Figure 6(b) shows that our universal method outperforms SJPEG over a range of BPP values. We emphasize that SJPEG [26] adapts the quantization table per image and per quality value, whereas our method uses a single table-pair for all quality values and images. For a fair comparison, we only use SJPEG to optimize the quantization table, and apply libjpeg-turbo with the optimized tables from SJPEG for the final measurements.
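For contrast with our gradient-based approach, the coordinate-descent idea behind SJPEG and [13] can be sketched as follows (a simplified illustration, not SJPEG's actual implementation; rd_cost is a placeholder that compresses with a candidate table and returns the scalar cost D + λR):

```python
def coordinate_descent(table, rd_cost, candidates=range(1, 256), sweeps=2):
    """Greedy per-entry search: sweep the 64 table entries repeatedly,
    keeping any single-entry change that lowers the R-D cost."""
    table = list(table)
    best = rd_cost(table)
    for _ in range(sweeps):
        for i in range(len(table)):
            for v in candidates:
                trial = table[:i] + [v] + table[i + 1:]
                cost = rd_cost(trial)
                if cost < best:
                    best, table = cost, trial
    return table
```

Each candidate evaluation here requires a full compress-and-measure pass, which is exactly the derivative-free cost that our differentiable formulation avoids.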

Figure 5: Examples showing the effect obtained by our optimized quantization tables versus the default ones for the rate-distortion optimization. In all four examples the PSNR remains almost unchanged, while the rates show significant savings. All cases refer to the quality factor used in the default case.

4.2.2 Per-Image Optimized Tables

Next, we compare the performance of the universally optimized tables with the per-image obtained ones. The hyperparameters $\lambda_r$ and $\lambda_d$ are set exactly as in the universal case, and the optimization uses the same parameters. The results are evaluated on the same set of 1024 randomly sampled images from the ImageNet validation set which were also used in the universal and the default tests, and the shown rate-distortion curve is an average over this image set.

Figure 6: (a) Rate-distortion curves for the default, the universally optimized and the per-image optimized tables, all evaluated on a randomly sampled subset of 1024 images from the ImageNet eval set. (b) A comparison of our universal tables with SJPEG, both evaluated on the ImageNet eval set. These graphs correspond to YUV420 chroma sub-sampling.

As shown in Figure 6, optimizing the quantization tables on a per-image basis leads to a further PSNR improvement over the universal case, both at lower and at higher bit rates.

4.3 Classification Optimized Performance

In the context of rate-accuracy (R-C) performance, we report the results of our method on three networks: ResNet-V2-101 [14], ResNet-V2-50 [14], and MobileNetV3 [16]. To optimize the JPEG quantization tables for a classification task, we assign positive weights to the rate and classification terms ($\lambda_r$, $\lambda_c$) in Eq. (3). All tables are optimized on the ImageNet training set, and tested on its evaluation set.

Figure 7 presents the rate-accuracy curves for the default and universal tables. The attained gains in accuracy are also plotted for easier visualization. We note that the gain is larger for MobileNetV3, likely due to the fact that it has a smaller network capacity compared to ResNet, and hence is more sensitive to compression artifacts.

Figure 7: Classification accuracy versus bits-per-pixel (BPP) for YUV420 chroma sub-sampling. Left: accuracy vs. BPP for the default quantization table. Right: accuracy gain of the universal table across a range of BPPs.

Figure 8 brings examples where our optimized tables fix an incorrect classification while using a lower bit-rate. The probabilities reported are obtained from the classification network, representing the confidence in the decisions made. Though visually similar, one can see (best viewed zoomed in) that the images compressed with the obtained tables better preserve certain texture regions of the image.

[Figure 8 image panels, default vs. ours: e.g. default "swab" corrected to "broom"; default "Dinmont" (BPP = 1.649) corrected to "Norfolk terrier" (BPP = 1.683).]
Figure 8: Examples of wrong classifications corrected by our optimized tables. The quality factor is fixed for the default table, and tuned for the custom table such that the file size is slightly smaller. The predicted label, BPP, and probability scores are shown below each image.

4.3.1 Comparison with Sorted Random Search (SRS)

Sorted Random Search [21] is a simple yet effective method for optimizing the quantization tables for classification. For a fair comparison, we randomly search for 1000 tables as in [21], and randomly sample 5000 images from the training set for each candidate's rate-accuracy evaluation. This ensures that SRS has seen roughly the same number of examples as our mini-batch optimization. We choose a Pareto-optimal table closest to 0.8 BPP for MobileNetV3. Figure 9(b) shows that our table outperforms SRS on a range of BPP values. A possible explanation is that the SRS method needs a sufficiently large and well-sampled set per epoch (such as the MatchedFrequency strategy used in [21]), whereas our method takes full advantage of mini-batch back-propagation, resulting in better performance.

4.3.2 Per-Image Results

We turn to present the per-image results. The same subset of 1024 images as in Section 4.2 is used for evaluation. Figure 9(a) shows a large increase in terms of accuracy. As already mentioned in Section 3, the recognition loss for the per-image tables contains the ground-truth label of the image, and hence the per-image method is not a practical approach (unless the ground-truth label is available, such as in identification tests). Nevertheless, the result still has value, as it shows the performance limit of adapting the JPEG quantization table for the purpose of boosting classification accuracy.

Figure 9: Left: bits-per-pixel vs. classification accuracy for per-image and universal tables. Reported results are on 1024 randomly sampled images from the ImageNet eval set, with YUV420 chroma sub-sampling. Right: accuracy of the default table, SRS, and our method on 10k randomly sampled images from the ImageNet eval set.
Figure 10: Generalization of optimized tables to other networks: These graphs show the accuracy gain (over the default tables) with ResNet101 and ResNet50 while using tables optimized for MobileNetV3. All results are evaluated on the full ImageNet eval set.

4.3.3 Generalization to Other Networks

Next, we show that the accuracy gains generalize across different networks. Figure 10 illustrates the classification accuracy against BPP evaluated using ResNet101 and ResNet50, for quantization tables optimized on MobileNetV3. Observe that the classification accuracy improves across a wide range of rates, even though the tables were computed with a different network's loss.

4.4 Ablation Study

Figure 11 (a) shows the effect of adjusting the rate weight $\lambda_r$ relative to $\lambda_d$ for the rate-distortion optimization. We vary $\lambda_r$ over a range of values while fixing $\lambda_d$. We see that for larger values of $\lambda_r$, the quantization table generally performs better at lower bit-rates and worse at higher ones. We also observe that the performance is not particularly sensitive to these hyperparameters, as indicated by the proximity of the R-D curves for adjacent values of $\lambda_r$. All experiments are optimized and evaluated under the same setup (except $\lambda_r$) as in Section 4.2, with YUV420 chroma sub-sampling.

In Figure 11 (b)-(d), a similar analysis of the loss weights is provided by varying $\lambda_r$ and fixing $\lambda_c$, while considering three classification networks: MobileNetV3, ResNet50 and ResNet101. Referring to MobileNetV3, we see that larger values of $\lambda_r$ result in a higher gain at lower BPPs, and less so when the BPP is high.

Figure 11: Varying the rate weight $\lambda_r$ in the loss function. (a) Rate-distortion. (b) Rate-accuracy (MobileNetV3). (c) Rate-accuracy (ResNet50). (d) Rate-accuracy (ResNet101).

5 Conclusion

Lossy image compression methods trade file-size (rate) for faithfulness to the original image content (distortion). A third performance measure influenced by such coding is classification accuracy, as affected by the induced error in the image. In this paper we offer an investigation of the tradeoff between these three performance measures, analyzed in the context of the JPEG compression algorithm. We show that JPEG's quantization tables can be optimized for better rate-distortion or rate-accuracy performance. Our work introduces two modes of optimization: universal and per-image. The universal mode targets a single set of tables for all training images, so as to replace the default ones. The per-image mode adapts the tables to each image, further boosting the rate-distortion behavior. In our future work we aim to train a deep neural network that produces the best quantization tables for any incoming image, thereby producing a highly effective per-image treatment. In the context of rate-accuracy behavior, the per-image mode we present here will stand as an upper-bound on the achievable results. Our future plans also include a similar treatment of other compression standards.

6 Appendix

Table 1 provides the universally optimized quantization tables for rate-distortion. The optimized tables are computed with YUV420 chroma sub-sampling, using the hyperparameters reported in Section 4. The values of the universally optimized tables are reported as floats. To produce the rate-distortion (or -accuracy) curves of Section 4, the tables are first scaled, and then rounded and clipped to [1, 255] before being passed to the JPEG encoder.

As seen in Table 1, the learned tables for rate-distortion quantize the chroma channels much less aggressively than the default tables do. We note that this is due to the default tables being designed for the Human Visual System (HVS) rather than for image distortion, since the HVS is less sensitive to the chroma channels. In fact, our framework can easily be adapted to optimize for visual perception by adding perceptual losses [19], or by manually weighting the Luma and Chroma bit-rate losses in Eq. (2). Experimentation with perceptual metrics is outside the scope of this paper, and could be an interesting direction for future work.

Luma:
16.0 14.9 14.2 14.8 15.6 17.7 18.9 20.0
15.1 14.5 14.5 14.9 15.6 19.8 19.7 18.7
14.8 14.3 14.5 15.4 17.3 19.3 20.7 18.6
14.4 14.6 15.1 15.8 18.5 23.2 21.9 19.2
14.6 14.9 16.8 19.1 20.4 26.1 24.9 21.0
15.1 16.3 18.8 19.8 21.9 25.0 26.1 22.9
18.3 19.9 21.6 22.6 24.7 27.2 26.8 23.9
21.3 23.6 23.8 23.9 25.8 23.7 24.0 23.3

Chroma:
14.3 14.9 14.7 16.9 23.5 22.8 22.3 21.9
14.9 14.1 13.9 18.6 22.7 22.0 21.6 21.3
14.6 13.9 17.2 22.8 22.2 21.6 21.2 21.0
16.8 18.5 22.8 22.3 21.7 21.2 20.8 20.6
23.4 22.6 22.1 21.7 21.2 20.7 20.4 20.3
22.6 21.9 21.5 21.1 20.7 20.3 20.0 19.9
22.1 21.4 21.1 20.7 20.3 20.0 19.8 19.7
21.8 21.1 20.8 20.5 20.2 19.9 19.7 19.6
Table 1: Universally optimized tables for rate-distortion performance. Top: Luma. Bottom: Chroma.

References

  • [1] J. Ballé, V. Laparra, and E. P. Simoncelli (2017-04) End-to-end optimized image compression. In Int’l. Conf. on Learning Representations (ICLR2017), Toulon, France. Cited by: §3.2.
  • [2] J. Ballé, D. Minnen, S. Singh, S. J. Hwang, and N. Johnston (2018) Variational image compression with a scale hyperprior. arXiv preprint arXiv:1802.01436. Cited by: §3.2.
  • [3] J. Ballé (2018) TensorFlow compression. Note: https://github.com/tensorflow/compression Cited by: §3.2.
  • [4] L. D. Chamain, S. S. Cheung, and Z. Ding (2019) Quannet: joint image compression and classification over channels with limited bandwidth. In 2019 IEEE International Conference on Multimedia and Expo (ICME), pp. 338–343. Cited by: §2.
  • [5] L. D. Chamain and Z. Ding (2019) Faster and accurate classification for jpeg2000 compressed images in networked applications. arXiv preprint arXiv:1909.05638. Cited by: §2.
  • [6] C. Chang, T. Chen, and L. Chung (2002) A steganographic method based upon jpeg and quantization table modification. Information Sciences 141 (1-2), pp. 123–138. Cited by: §1, §2.
  • [7] J. Chao, H. Chen, and E. Steinbach (2013) On the design of a novel jpeg quantization table for improved feature detection performance. In 2013 IEEE International Conference on Image Processing, pp. 1675–1679. Cited by: §1, §2.
  • [8] L. F. Costa and A. C. P. Veiga (2005) Identification of the best quantization table using genetic algorithms. In PACRIM. 2005 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing, 2005, pp. 570–573. Cited by: §1, §2.
  • [9] T. M. Cover and J. A. Thomas (2012) Elements of information theory. John Wiley & Sons. Cited by: §1.
  • [10] L. Duan, X. Liu, J. Chen, T. Huang, and W. Gao (2012) Optimizing jpeg quantization table for low bit rate mobile visual search. In 2012 Visual Communications and Image Processing, pp. 1–6. Cited by: §1, §2.
  • [11] H. Farid (2006) Digital image ballistics from jpeg quantization. Dept. Comput. Sci., Dartmouth College, Tech. Rep. TR2006-583. Cited by: §1, §2.
  • [12] W. Fong, S. Chan, and K. Ho (1997) Designing jpeg quantization matrix using rate-distortion approach and human visual system model. In Proceedings of ICC’97-International Conference on Communications, Vol. 3, pp. 1659–1663. Cited by: §1, §2.
  • [13] H. T. Fung and K. J. Parker (1995) Design of image-adaptive quantization tables for jpeg. Journal of Electronic Imaging 4 (2), pp. 144–151. Cited by: §1, §2, §4.2.1.
  • [14] K. He, X. Zhang, S. Ren, and J. Sun (2016) Identity mappings in deep residual networks. In European Conference on Computer Vision, pp. 630–645. Cited by: §1, §4.3.
  • [15] M. Hopkins, M. Mitzenmacher, and S. Wagner-Carena (2017) Simulated annealing for jpeg quantization. arXiv preprint arXiv:1709.00649. Cited by: §1, §2.
  • [16] A. Howard, M. Sandler, G. Chu, L. Chen, B. Chen, M. Tan, W. Wang, Y. Zhu, R. Pang, V. Vasudevan, et al. (2019) Searching for mobilenetv3. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1314–1324. Cited by: §1, §4.3.
  • [17] G. Jeong, C. Kim, H. Ahn, and B. Ahn (2006) JPEG quantization table design for face images and its application to face recognition. IEICE transactions on fundamentals of electronics, communications and computer sciences 89 (11), pp. 2990–2993. Cited by: §1, §2.
  • [18] Y. Jiang and M. S. Pattichis (2011) JPEG image compression using quantization table optimization based on perceptual image quality assessment. In 2011 Conference Record of the Forty Fifth Asilomar Conference on Signals, Systems and Computers (ASILOMAR), pp. 225–229. Cited by: §1, §2.
  • [19] J. Johnson, A. Alahi, and L. Fei-Fei (2016) Perceptual losses for real-time style transfer and super-resolution. In European Conference on Computer Vision, pp. 694–711. Cited by: §6.
  • [20] M. Konrad, H. Stogner, and A. Uhl (2009) Evolutionary optimization of jpeg quantization tables for compressing iris polar images in iris recognition systems. In 2009 Proceedings of 6th International Symposium on Image and Signal Processing and Analysis, pp. 534–539. Cited by: §1, §2.
  • [21] Z. Li, C. De Sa, and A. Sampson (2020) Optimizing jpeg quantization for classification networks. arXiv preprint arXiv:2003.02874. Cited by: §2, §4.3.1.
  • [22] (2018) Libjpeg-turbo library. Note: https://libjpeg-turbo.org/ Cited by: §4.1, §4.2.
  • [23] Z. Liu, T. Zhou, Z. Shen, B. Kang, and T. Darrell (2019) Transferable recognition-aware image processing. arXiv preprint arXiv:1910.09185. Cited by: §2.
  • [24] Z. Liu, T. Liu, W. Wen, L. Jiang, J. Xu, Y. Wang, and G. Quan (2018) DeepN-jpeg: a deep neural network favorable jpeg-based image compression framework. In Proceedings of the 55th Annual Design Automation Conference, pp. 1–6. Cited by: §2.
  • [25] M. Makar, H. Lakshman, V. Chandrasekhar, and B. Girod (2012) Gradient preserving quantization. In 2012 19th IEEE International Conference on Image Processing, pp. 2505–2508. Cited by: §1, §2.
  • [26] P. Massimino (2018) SJPEG library. Note: https://github.com/webmproject/sjpeg Cited by: §4.2.1.
  • [27] D. M. Monro and B. G. Sherlock (1993) Optimum dct quantization. In [Proceedings] DCC93: Data Compression Conference, pp. 188–194. Cited by: §1, §2.
  • [28] V. Ratnakar and M. Livny (1995) RD-opt: an efficient algorithm for optimizing dct quantization tables. In Proceedings DCC'95 Data Compression Conference, pp. 332–341. Cited by: §1, §2.
  • [29] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. (2015) Imagenet large scale visual recognition challenge. International journal of computer vision 115 (3), pp. 211–252. Cited by: §4.
  • [30] R. J. Safranek (1994) JPEG compliant encoder utilizing perceptually based quantization. In Human Vision, Visual Processing, and Digital Display V, Vol. 2179, pp. 117–126. Cited by: §1, §2.
  • [31] R. Shin and D. Song (2017) JPEG-resistant adversarial images. In NIPS 2017 Workshop on Machine Learning and Computer Security. Cited by: item 4, §3.1.
  • [32] T. Shohdohji, Y. Hoshino, and N. Kutsuwada (1999) Optimization of quantization table based on visual characteristics in dct image coding. Computers & Mathematics with Applications 37 (11-12), pp. 225–232. Cited by: §1, §2.
  • [33] S. Suzuki, M. Takagi, K. Hayase, T. Onishi, and A. Shimizu (2019) Image pre-transformation for recognition-aware image compression. In 2019 IEEE International Conference on Image Processing (ICIP), pp. 2686–2690. Cited by: §2.
  • [34] H. Talebi, D. Kelly, X. Luo, I. G. Dorado, F. Yang, P. Milanfar, and M. Elad (2020) Better compression with deep pre-editing. arXiv preprint arXiv:2002.00113. Cited by: §2.
  • [35] E. Tuba, M. Tuba, D. Simian, and R. Jovanovic (2017) JPEG quantization table optimization by guided fireworks algorithm. In International Workshop on Combinatorial Image Analysis, pp. 294–307. Cited by: §1, §2.
  • [36] C. Wang, S. Lee, and L. Chang (2001) Designing jpeg quantization tables based on human visual system. Signal Processing: Image Communication 16 (5), pp. 501–506. Cited by: §1, §2.
  • [37] A. B. Watson (1993) DCT quantization matrices visually optimized for individual images. In Human vision, visual processing, and digital display IV, Vol. 1913, pp. 202–216. Cited by: §1, §2.
  • [38] S. Wu and A. Gersho (1993) Rate-constrained picture-adaptive quantization for jpeg baseline coders. In 1993 IEEE International Conference on Acoustics, Speech, and Signal Processing, Vol. 5, pp. 389–392. Cited by: §1, §2.
  • [39] E. Yang and L. Wang (2008) Joint optimization of run-length coding, huffman coding, and quantization table with complete baseline jpeg decoder compatibility. IEEE Transactions on Image Processing 18 (1), pp. 63–74. Cited by: §1, §2.
  • [40] X. Zhang, S. Wang, K. Gu, W. Lin, S. Ma, and W. Gao (2016) Just-noticeable difference-based perceptual optimization for jpeg compression. IEEE Signal Processing Letters 24 (1), pp. 96–100. Cited by: §1, §2.