Learning Convolutional Networks for Content-weighted Image Compression

03/30/2017 ∙ by Mu Li, et al. ∙ Harbin Institute of Technology

Lossy image compression is generally formulated as a joint rate-distortion optimization to learn the encoder, quantizer, and decoder. However, the quantizer is non-differentiable, and discrete entropy estimation is usually required for rate control. These make it very challenging to develop a convolutional network (CNN)-based image compression system. In this paper, motivated by the fact that the local information content of an image is spatially variant, we suggest that the bit rate of different parts of the image should be adapted to the local content, and the content-aware bit rate is allocated under the guidance of a content-weighted importance map. Thus, the sum of the importance map can serve as a continuous alternative to discrete entropy estimation for controlling the compression rate. A binarizer is adopted to quantize the output of the encoder, since the binarization scheme is also directly defined by the importance map. Furthermore, a proxy function is introduced for the binary operation in backward propagation to make it differentiable. Therefore, the encoder, decoder, binarizer, and importance map can be jointly optimized in an end-to-end manner using a subset of the ImageNet database. In low bit rate image compression, experiments show that our system significantly outperforms JPEG and JPEG 2000 in terms of the structural similarity (SSIM) index, and produces much better visual results with sharp edges, rich textures, and fewer artifacts.


1 Introduction

Image compression is a fundamental problem in computer vision and image processing. With the development and popularity of high-quality multimedia content, lossy image compression has become more and more essential in saving transmission bandwidth and hardware storage. An image compression system usually includes three components, an encoder, a quantizer, and a decoder, which together form the codec. Typical image encoding standards, e.g., JPEG and JPEG 2000, generally rely on handcrafted image transformations and separate optimization of codec components, and thus are suboptimal for image compression. Moreover, JPEG and JPEG 2000 perform poorly for low rate image compression, and inevitably produce visual artifacts, e.g., blurring, ringing, and blocking.

Recently, deep convolutional networks (CNNs) have achieved great success in versatile vision tasks [8, 11, 5, 21, 4]. As to image compression, CNNs are also expected to be more powerful than JPEG and JPEG 2000 for the following reasons. First, for image encoding and decoding, flexible nonlinear analysis and synthesis transformations can be easily deployed by stacking several convolutional layers. Second, they allow the nonlinear encoder and decoder to be jointly optimized in an end-to-end manner. Furthermore, several recent advances also validate the effectiveness of deep learning in image compression [16, 17, 1, 15].

However, there are still several issues to be addressed in CNN-based image compression. In general, lossy image compression can be formulated as a joint rate-distortion optimization to learn the encoder, quantizer, and decoder. Even though the encoder and decoder can be represented as CNNs and optimized via back-propagation, learning a non-differentiable quantizer remains a challenging problem. Moreover, the system aims to jointly minimize both the compression rate and the distortion, so the entropy rate should also be estimated and minimized during learning. As a result of quantization, the entropy rate defined on the discrete codes is also a discrete function, and a continuous approximation is required.

Figure 1: Illustration of the CNN architecture for content-weighted image compression.

In this paper, we present a novel CNN-based image compression framework to address the issues raised by quantization and entropy rate estimation. In the existing deep learning based compression models [16, 17, 1], the discrete code after quantization first has the same length as the encoder output, and is then compressed by entropy coding. That is, the discrete code before entropy coding is spatially invariant. However, it is generally known that the local information content of an image is spatially variant. Thus, the bit rate should also be spatially variant to adapt to the local information content. To this end, we introduce a content-weighted importance map to guide the allocation of the local bit rate. Given an input image x, let e = E(x) be the output of the encoder network, which includes n feature maps of size h × w. Denote by p = P(x) the non-negative importance map. Specifically, when l/L ≤ p_ij < (l+1)/L, we only encode the first nl/L feature maps at spatial location (i, j). Here, L is the number of importance levels and n/L is the number of bits for each importance level. The other feature maps are automatically set to 0 and need not be saved into the codes. In this way, we can allocate more bits to regions with rich content, which is very helpful in preserving texture details with less sacrifice of bit rate. Moreover, the sum of the importance map serves as a continuous estimate of the compression rate, and can be directly adopted as a compression rate controller.

Benefiting from the importance map, we do not need any other entropy rate estimate in our objective, and can adopt a simple binarizer for quantization. The binarizer sets those features with values over 0.5 to 1 and the others to 0. Inspired by binarized CNNs [23, 12, 2], we introduce a proxy function for the binary operation in backward propagation to make it trainable. As illustrated in Figure 1, our compression framework consists of four major components: the convolutional encoder, importance map network, binarizer, and convolutional decoder. With the introduction of the continuous importance map and the proxy function, all the components can be jointly optimized in an end-to-end manner.

Note that we do not include any entropy rate estimate term in the training of the compression system, and the local spatial context of the codes is also not utilized. Therefore, we design a convolutional entropy coder to predict the current code from its context, and apply it within the context-adaptive binary arithmetic coding (CABAC) framework [9] to further compress the binary codes and the importance map.

Our whole framework is trained on a subset of the ImageNet database and tested on the Kodak dataset. In low bit rate image compression, our system achieves much better rate-distortion performance than JPEG and JPEG 2000 in terms of both quantitative metrics (e.g., SSIM and MSE) and visual quality. More remarkably, the compressed images by our system are visually more pleasant, with sharp edges, rich textures, and fewer artifacts. Compared with other CNN-based systems [16, 17, 1], ours performs better in retaining texture details while suppressing visual artifacts.

To sum up, the main contribution of this paper is to introduce the content-weighted importance map and binary quantization into the image compression system. The importance map not only can substitute the entropy rate estimate in joint rate-distortion optimization, but also can guide the local bit rate allocation. Equipped with binary quantization and the proxy function, our compression system can be trained end-to-end, and obtains significantly better results than JPEG and JPEG 2000.

2 Related Work

For the existing image standards, e.g., JPEG and JPEG 2000, the codec components are actually optimized separately. In the encoding stage, they first apply a linear transform to an image; quantization and lossless entropy coding are then utilized to minimize the compression rate. For example, JPEG [18] applies the discrete cosine transform (DCT) on image patches, quantizes the frequency components, and compresses the quantized codes with a variant of Huffman encoding. JPEG 2000 [13] uses a multi-scale orthogonal wavelet decomposition to transform an image, and encodes the quantized codes with Embedded Block Coding with Optimal Truncation. In the decoding stage, the decoding algorithm and inverse transform are designed to minimize distortion. In contrast, we model image compression as a joint rate-distortion optimization, where both the nonlinear encoder and decoder are jointly trained in an end-to-end manner.

Recently, several deep learning based image compression models have been developed. For lossless image compression, deep learning models have achieved state-of-the-art performance [14, 10]. For lossy image compression, Toderici et al. [16] present a recurrent neural network (RNN) based model to compress images. Toderici et al. [17] further introduce a set of full-resolution compression methods for progressive encoding and decoding of images. These methods learn the compression models by minimizing the distortion for a given compression rate, while our model is end-to-end trained via joint rate-distortion optimization.

The most related works are those of [1, 15], which are based on convolutional autoencoders. Ballé et al. [1] use generalized divisive normalization (GDN) for joint nonlinearity, and replace rounding quantization with additive uniform noise for continuous relaxation of the distortion and entropy rate losses. Theis et al. [15] adopt a smooth approximation of the derivative of the rounding function, and upper-bound the discrete entropy rate loss for continuous relaxation. Our content-weighted image compression system differs from [1, 15] in the rate loss, quantization, and continuous relaxation. Instead of rounding and entropy, we define our rate loss on the importance map and adopt a simple binarizer for quantization. Moreover, the code length after quantization is spatially invariant in [1, 15]. In contrast, the local code length in our model is content-aware, which is useful in improving visual quality.

Our work is also related to binarized neural networks (BNN) [2], where both weights and activations are binarized to +1 or −1 to save memory storage and run time. Courbariaux et al. [2] adopt a straight-through estimator to compute the gradient of the binarizer. In our compression system, only the encoder output is binarized, to 1 or 0, and a similar proxy function is used in back-propagation.

3 Content-weighted Image Compression

Our content-weighted image compression framework is composed of four components: the convolutional encoder, binarizer, importance map network, and convolutional decoder. Figure 1 shows the whole network architecture. Given an input image x, the convolutional encoder defines a nonlinear analysis transform by stacking convolutional layers, and outputs E(x). The binarizer B(E(x)) assigns 1 to encoder outputs higher than 0.5, and 0 to the others. The importance map network takes intermediate feature maps of the encoder as input, and yields the content-weighted importance map P(x). A rounding function is adopted to quantize P(x) and generate a mask M(x) that has the same size as B(E(x)). The binary code is then trimmed based on M(x). Finally, the convolutional decoder defines a nonlinear synthesis transform to produce the decoding result x̂.

In the following, we first introduce the components of the framework and then present the formulation and learning method of our model.

3.1 Components and Gradient Computation

3.1.1 Convolutional encoder and decoder

Both the encoder and decoder in our framework are fully convolutional networks and can be trained by back-propagation. The encoder network consists of three convolutional layers and three residual blocks. Following [6], each residual block has two convolutional layers. We further remove the batch normalization operations from the residual blocks. The input image x is first convolved with 128 filters with stride 4 and followed by one residual block. The feature maps are then convolved with 256 filters with stride 2 and followed by two residual blocks to output the intermediate feature maps F(x). Finally, F(x) is convolved with n filters to yield the encoder output E(x). It should be noted that we set n = 64 for low compression rate models, and n = 128 otherwise.

The network architecture of the decoder is symmetric to that of the encoder, and takes the code c of an image x as input. To upsample the feature maps, we adopt the depth-to-space operation mentioned in [17]. Please refer to the supplementary material for more details on the network architecture of the encoder and decoder.

3.1.2 Binarizer

Since a sigmoid nonlinearity is adopted in the last convolutional layer, the encoder output E(x) falls in the range [0, 1]. Denote by e an element of E(x). The binarizer is defined as

B(e) = 1 if e > 0.5;  0 otherwise.   (1)

However, the gradient of the binarizer function B(e) is zero almost everywhere, except at e = 0.5 where it is infinite. In the back-propagation algorithm, the gradient is computed layer by layer using the chain rule in a backward manner. Thus, any layer before the binarizer (i.e., the whole encoder) would never be updated during training.

Fortunately, some recent works on binarized neural networks (BNN) [23, 12, 2] have studied the issue of propagating gradients through binarization. Based on the straight-through estimator of the gradient [2], we introduce a proxy function B̃(e) to approximate B(e). Here, B(e) is still used in the forward propagation, while B̃(e) is used in back-propagation. Inspired by BNN, we adopt a piecewise linear function as the proxy of B(e),

B̃(e) = 1 if e > 1;  e if 0 ≤ e ≤ 1;  0 if e < 0.   (2)

Then, the gradient of B̃(e) can be easily obtained by,

∂B̃(e)/∂e = 1 if 0 ≤ e ≤ 1;  0 otherwise.   (3)
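The forward/backward behavior of the binarizer and its straight-through proxy can be sketched in a few lines of NumPy. This is a toy illustration rather than the authors' implementation; since the encoder output passes through a sigmoid, inputs lie in [0, 1] and the proxy gradient there is simply 1.

```python
import numpy as np

def binarize(e):
    """Forward pass of the binarizer B(e): hard threshold at 0.5."""
    return (e > 0.5).astype(np.float32)

def binarize_grad(e, grad_output):
    """Backward pass through the piecewise-linear proxy: the incoming
    gradient passes through unchanged where 0 <= e <= 1 and is zeroed
    elsewhere (straight-through estimator)."""
    pass_through = ((e >= 0.0) & (e <= 1.0)).astype(np.float32)
    return grad_output * pass_through

e = np.array([0.1, 0.49, 0.51, 0.9])
print(binarize(e))                         # [0. 0. 1. 1.]
print(binarize_grad(e, np.ones_like(e)))   # [1. 1. 1. 1.]
```

In a deep learning framework the same effect is obtained by overriding the backward pass of the thresholding op.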

3.1.3 Importance map

In [1, 15], the code length after quantization is spatially invariant, and entropy coding is then used to further compress the code. Actually, the difficulty in compressing different parts of an image varies: smooth regions are easier to compress than those with salient objects or rich textures. Thus, fewer bits should be allocated to smooth regions and more bits to regions with more information content. For example, given an image with an eagle flying in the blue sky in Figure 2, it is reasonable to allocate more bits to the eagle and fewer bits to the sky. Moreover, when the whole code length for an image is limited, such an allocation scheme can also be used for rate control.

Figure 2: Illustration of importance map. The regions with sharp edge or rich texture generally have higher values and should be allocated more bits to encode.

We introduce a content-weighted importance map for bit allocation and compression rate control. It is a feature map with only one channel, and its size is the same as that of the encoder output. The values of the importance map are in (0, 1). An importance map network is deployed to learn the importance map from an input image x. It takes the intermediate feature maps from the last residual block of the encoder as input, and uses a network of three convolutional layers to produce the importance map p = P(x).

Denote by h × w the size of the importance map p, and by n the number of feature maps of the encoder output. In order to guide the bit allocation, we should first quantize each element in p to an integer no more than L, and then generate an importance mask m with size n × h × w. Given an element p_ij in p, the quantizer for the importance map is defined as,

Q(p_ij) = l − 1, if (l − 1)/L ≤ p_ij < l/L, for l = 1, …, L,   (4)

where L is the number of importance levels and n is divisible by L. Each importance level corresponds to n/L bits. Thus (n/L)·Q(p_ij) takes only L different values, i.e., 0, n/L, …, n − n/L.

It should be noted that Q(p_ij) = 0 indicates that zero bits will be allocated to this location, and all its information can be reconstructed based on its context in the decoding stage. In this way, the importance map can not only be treated as an alternative to entropy rate estimation but also naturally takes the context into account.

With Q(p_ij), the importance mask m = M(x) can then be obtained by,

m_kij = 1 if k ≤ (n/L)·Q(p_ij);  0 otherwise.   (5)

The final coding result of the image x can then be represented as c = M(x) ∘ B(E(x)), where ∘ denotes the element-wise multiplication operation. Note that the quantized importance map should also be considered in the code. Thus all the bits with mask value 0 can be safely excluded from B(E(x)). Therefore, instead of n, only (n/L)·Q(p_ij) bits are needed for each location (i, j). Besides, in video coding, just noticeable distortion (JND) models [22] have also been suggested for spatially variant bit allocation and rate control. Different from [22], our importance map is learned from training data by optimizing joint rate-distortion performance.
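The quantization and masking scheme above can be sketched in NumPy. This is a toy sketch, not the authors' code; the values n = 64, L = 16 (so n/L = 4 bits per level) and the random importance map are example assumptions.

```python
import numpy as np

n, L = 64, 16                   # feature maps and importance levels (example values)
h, w = 2, 2                     # spatial size of the importance map (example)
rng = np.random.default_rng(0)
p = rng.random((h, w))          # importance map p with entries in (0, 1)

# Q(p) = ceil(L * p) - 1, an integer in {0, ..., L-1}
q = np.ceil(L * p).astype(int) - 1

# mask m_kij = 1 if k <= (n/L) * Q(p_ij), else 0
k = np.arange(1, n + 1).reshape(n, 1, 1)
m = (k <= (n // L) * q).astype(np.float32)

# number of bits actually kept at each spatial location
bits_per_location = (n // L) * q
print(bits_per_location.sum(), "bits kept instead of", n * h * w)
```

Only the feature maps selected by the mask are written to the code; the rest are zeroed and skipped.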

Finally, in back-propagation, the gradient with respect to p_ij should be computed. Unfortunately, due to the quantization operation and the mask function, the gradient is zero almost everywhere. Actually, the quantized importance map can be equivalently rewritten as a function of p_ij,

Q(p_ij) = ⌈L·p_ij⌉ − 1,   (6)

where ⌈·⌉ is the ceiling function. Analogous to the binarizer, we also adopt a straight-through estimator of the gradient,

∂Q(p_ij)/∂p_ij = L.   (7)

3.2 Model formulation and learning

3.2.1 Model formulation

In general, the proposed content-weighted image compression system can be formulated as a rate-distortion optimization problem. Our objective is to minimize a combination of the distortion loss and the rate loss, with a tradeoff parameter λ introduced to balance compression rate and distortion. Let X be a set of training data, and x ∈ X be an image from the set. The objective function of our model is then defined as

min ∑_{x ∈ X} { L_D(c, x) + λ·L_R(x) },   (8)

where c is the code of the input image x, L_D(c, x) denotes the distortion loss, and L_R(x) denotes the rate loss, which will be further explained in the following.

Distortion loss. The distortion loss evaluates the distortion between the original image and the decoding result. Even better results may be obtained by assessing the distortion in a perceptual space. With the input image x and decoding result D(c), we simply use the squared error to define the distortion loss,

L_D(c, x) = ‖D(c) − x‖₂².   (9)

Rate loss. Instead of the entropy rate, we define the rate loss directly on a continuous approximation of the code length. Suppose the size of the encoder output is n × h × w. The code produced by our model includes two parts: (i) the quantized importance map Q(p) with fixed size h × w; (ii) the trimmed binary code with size (n/L)·∑_{i,j} Q(p_ij). Note that the size of Q(p) is constant with respect to the encoder and importance map network. Thus ∑_{i,j} Q(p_ij) can be used as the rate loss.

Due to the effect of quantization, ∑_{i,j} Q(p_ij) cannot be optimized by back-propagation. Thus, we relax Q(p_ij) to its continuous form, and use the sum of the importance map as the rate loss,

L_R(x) = ∑_{i,j} p_ij.   (10)

For better rate control, we can select a threshold r, and penalize the rate loss in Eqn. (10) only when it is higher than r. We then define the rate loss in our model as,

L_R(x) = ∑_{i,j} p_ij − r, if ∑_{i,j} p_ij > r;  0 otherwise.   (11)

The threshold r can be set based on the code length for a given compression rate. In this way, our rate loss penalizes code lengths higher than r, and makes the learned compression system achieve a compression rate comparable to the given one.
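The threshold-based rate loss is simple enough to sketch directly (a toy illustration with made-up numbers, not the authors' code):

```python
import numpy as np

def rate_loss(p, r):
    """Threshold-based rate loss: penalize only when the continuous
    code-length estimate sum(p) exceeds the budget r."""
    total = p.sum()
    return max(total - r, 0.0)

p = np.array([[0.2, 0.8], [0.5, 0.9]])   # example importance map, sum = 2.4
print(rate_loss(p, 2.0))   # over budget by ~0.4, so a positive penalty
print(rate_loss(p, 3.0))   # under budget, no penalty
```

Because the loss is zero below the budget, the optimizer is free to spend bits up to r and is only pushed down when the estimate overshoots.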

3.2.2 Learning

Benefiting from the relaxed rate loss and the straight-through estimators of the gradient, the whole compression system can be trained in an end-to-end manner with the ADAM solver [7]. We initialize the model with parameters pre-trained on the training set without the importance map. The model is further trained with a sequence of decreasing learning rates; for each learning rate, the model is trained until the objective function no longer decreases, and a smaller learning rate is then adopted to fine-tune the model.

4 Convolutional entropy encoder

Since no entropy constraint is included, the code generated by the compression system in Sec. 3 is non-optimal in terms of entropy rate. This provides some leeway to further compress the code with lossless entropy coding. Generally, there are two kinds of entropy coding methods, Huffman coding and arithmetic coding [20]. Among them, arithmetic coding can achieve a better compression rate with a well-defined context, and is adopted in this work.

4.1 Encoding binary code

Binary arithmetic coding is applied according to the CABAC [9] framework. Note that CABAC was originally proposed for video compression. Let B be the code of the binary bitmaps, and m be the corresponding importance mask. To encode B, we modify the coding schedule, redefine the context, and use a CNN for probability prediction. As to the coding schedule, we simply code each binary bitmap from left to right and row by row, and skip those bits whose corresponding importance mask value is 0.

Figure 3: The CNN for convolutional entropy encoder. The red block represents the bit to predict; dark blocks mean unavailable bits; blue blocks represent available bits.

Context modeling. Denote by b a binary bit of the code B. We define the context of b as CTX(b) by considering the binary bits both from its neighbourhood and from the neighboring maps. Specifically, CTX(b) is a cuboid centered at b. We further divide the bits in CTX(b) into two groups: the available and unavailable ones. The available ones are those that can be used to predict b, while the unavailable ones include: (i) the bit b to be predicted, (ii) the bits with importance mask value 0, (iii) the bits out of boundary, and (iv) the bits not yet coded due to the coding order. Here we redefine CTX(b) by: (1) assigning 0 to the unavailable bits, (2) assigning 1 to the available bits with value 0, and (3) assigning 2 to the available bits with value 1.

Probability prediction. One common approach to probability prediction is to build and maintain a frequency table. For our task, the context cuboid is too large to build a frequency table. Instead, we introduce a CNN model for probability prediction. As shown in Figure 3, the convolutional entropy encoder takes the cuboid CTX(b) as input, and outputs the probability P(b) that the bit b is 1. Thus, the loss for learning the entropy encoder can be written as,

L_E = −∑_b m(b) { b log₂ P(b) + (1 − b) log₂ (1 − P(b)) },   (12)

where m(b) is the corresponding importance mask value. The convolutional entropy encoder is trained using the ADAM solver on the contexts of binary codes extracted from the binary feature maps generated by the trained encoder. The learning rate is decreased gradually as in Sec. 3.
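The masked cross-entropy objective for the entropy coder can be sketched as follows (a toy NumPy illustration with made-up bits and probabilities; the real model predicts each probability with a CNN over the context cuboid):

```python
import numpy as np

def masked_entropy_loss(bits, probs, mask):
    """Masked binary cross-entropy in bits: each coded position costs
    -log2 of the probability assigned to its actual value; positions
    trimmed away by the importance mask (mask = 0) contribute nothing."""
    eps = 1e-12  # numerical guard against log2(0)
    ce = -(bits * np.log2(probs + eps) + (1 - bits) * np.log2(1 - probs + eps))
    return (mask * ce).sum()

bits  = np.array([1.0, 0.0, 1.0, 1.0])
probs = np.array([0.9, 0.2, 0.5, 0.8])   # predicted P(bit = 1)
mask  = np.array([1.0, 1.0, 1.0, 0.0])   # last bit is trimmed away
print(masked_entropy_loss(bits, probs, mask))
```

The loss is exactly the ideal code length of the kept bits under the predicted probabilities, so minimizing it minimizes the arithmetic-coded file size.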

4.2 Encoding quantized importance map

We also extend the convolutional entropy encoder to the quantized importance map. To utilize binary arithmetic coding, a number of binary code maps are adopted to represent the quantized importance map. The convolutional entropy encoder is then trained to compress the binary code maps.

5 Experiments

Figure 4: Comparison of the rate-distortion curves by different methods: (a) PSNR, (b) SSIM, and (c) MSE. "Without IM" represents the proposed method without the importance map.
Figure 5: Images produced by different compression systems at different compression rates. From left to right: ground truth, JPEG, JPEG 2000, Ballé et al. [1], and ours. Our model achieves the best visual quality at each rate, demonstrating the superiority of our model in preserving both sharp edges and detailed textures. (Best viewed on screen in color)

Our content-weighted image compression model is trained on a subset of high quality images from ImageNet [3]. We crop these images into patches and use these patches to train the network. After training, we test our model on the Kodak PhotoCD image dataset with the metrics for lossy image compression. The compression rate of our model is evaluated by bits per pixel (bpp), calculated as the total number of bits used to code the image divided by the number of pixels. The image distortion is evaluated with Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR), and the structural similarity (SSIM) index.
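The bpp metric is a one-line computation; the bit count below is a made-up number for illustration only:

```python
def bits_per_pixel(total_bits, height, width):
    """Compression rate in bits per pixel (bpp): total coded bits
    divided by the number of pixels in the image."""
    return total_bits / (height * width)

# e.g., a 768x512 Kodak-sized image coded with 58,982 bits (hypothetical)
print(round(bits_per_pixel(58982, 512, 768), 3))  # 0.15
```

Note that for our codec, total_bits covers both the trimmed binary code and the compressed importance map.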

In the following, we first introduce the parameter setting of our compression system. Then both quantitative metrics and visual quality evaluation are provided. Finally, we further analyze the effect of importance map and convolutional entropy encoder on the compression system.

5.1 Parameter setting

In our experiments, we set the number of binary feature maps n according to the compression rate: n = 64 for low compression rates and n = 128 otherwise. The number of importance levels L is then chosen based on n, so that each importance level corresponds to n/L bits. Moreover, different values of the tradeoff parameter λ are chosen to obtain different compression rates. The threshold r is set according to the desired compression rate expressed in bits per pixel (bpp).

5.2 Quantitative evaluation

For quantitative evaluation, we compare our model with JPEG [18], JPEG 2000 [13], and the CNN-based method of Ballé et al. [1]. Among the different variants of JPEG, the optimized JPEG with 4:2:0 chroma sub-sampling is adopted. For the sake of fairness, all the results of Ballé et al. [1], JPEG, and JPEG 2000 on the Kodak dataset are downloaded from http://www.cns.nyu.edu/~lcv/iclr2017/.

Using MSE, SSIM [19], and PSNR as performance metrics, Figure 4 gives the rate-distortion curves of these four methods. In terms of MSE, JPEG has the worst performance, and both our system and Ballé et al. [1] are slightly better than JPEG 2000. In terms of PSNR, the results of JPEG 2000, Ballé et al. [1], and ours are very similar, but are much higher than that of JPEG. In terms of SSIM, our system outperforms all three competing methods. Since SSIM is more consistent with human visual perception than PSNR and MSE, these results indicate that our system may perform better in terms of visual quality.

5.3 Visual quality evaluation

We further compare the visual quality of the results by JPEG, JPEG 2000, Ballé et al. [1], and our system in the low compression rate setting. Figure 5 shows the original images and the results produced by the four compression systems. Visual artifacts, e.g., blurring, ringing, and blocking, are usually inevitable in images compressed by traditional image compression standards such as JPEG and JPEG 2000, and these artifacts can also be perceived in the second and third columns of Figure 5. Although Ballé et al. [1] is effective in suppressing these visual artifacts, in its results in Figure 5 we can still observe blurring artifacts in rows 1, 2, 3, and 5, color distortion in rows 4 and 5, and ringing artifacts in rows 4 and 5. In contrast, the results produced by our system exhibit much less noticeable artifacts and are visually much more pleasing.

From Figure 5, Ballé et al. [1] usually blurs strong edges or over-smooths small-scale textures. Specifically, in row 5 most details of the necklace have been removed by Ballé et al. [1]. One possible explanation is that it adopts a spatially invariant bit allocation scheme before entropy encoding. Actually, it is natural that more bits should be allocated to regions with strong edges or detailed textures and fewer to smooth regions. Instead, in our system, an importance map is introduced to guide spatially variant bit allocation. Moreover, instead of handcrafted engineering, the importance map is learned end-to-end to minimize the rate-distortion loss. As a result, our model is very promising in keeping perceptual structures, such as sharp edges and detailed textures.

5.4 Experimental analyses on importance map

Figure 6: The importance maps obtained at different compression rates. The right color bar shows the palette on the number of bits.

To assess the role of the importance map, we train a baseline model by removing the importance map network from our framework. Neither an entropy-based nor an importance map based rate loss is included in the baseline model; its compression rate is instead controlled by modifying the number of binary feature maps. Figure 4 also provides the rate-distortion curves of the baseline model. One can see that the baseline model performs worse than JPEG 2000 and Ballé et al. [1] in terms of MSE, PSNR, and SSIM, validating the necessity of the importance map for our model. Using the image in row 5 of Figure 5, the compressed images by our model with and without the importance map are also shown in the supplementary material, where more detailed textures and better visual quality are obtained by using the importance map.

Figure 6 shows the importance maps obtained at different compression rates. One can see that, when the compression rate is low, since the overall bit budget is very limited, the importance map only allocates more bits to salient edges. As the compression rate increases, more bits are allocated to weak edges and mid-scale textures. Finally, when the compression rate is high, small-scale textures are also allocated more bits. Thus, the importance map learned by our system is consistent with human visual perception, which may also explain the advantages of our model in preserving structure, edges, and textures.

5.5 Entropy encoder evaluation

Figure 7: Performance of convolutional entropy encoder: (a) for encoding binary codes and importance map, and (b) by comparing with CABAC.

The model in Sec. 3 does not consider the entropy rate, allowing us to further compress the code with the convolutional entropy encoder. Here, two groups of experiments are conducted. First, we compare four variants of our model: (i) the full model, (ii) the model without entropy coding, (iii) the model encoding only the binary codes, and (iv) the model encoding only the importance map. From Figure 7(a), both the binary codes and the importance map can be further compressed by using our convolutional entropy encoder, and our full model achieves the best performance among the four variants. Second, we compare our convolutional entropy coder with the standard CABAC with a small context (the 5 bits near the bit to encode). As shown in Figure 7(b), our convolutional entropy encoder can take a larger context into account and performs better than CABAC. Besides, we also note that our method with either CABAC or the convolutional encoder outperforms JPEG 2000 in terms of SSIM.

6 Conclusion

A CNN-based system is developed for content-weighted image compression. With the importance map, we suggest a non-entropy-based loss for rate control. Spatially variant bit allocation is also allowed to emphasize salient regions. Using the straight-through estimator, our model can be learned end-to-end on a training set. A convolutional entropy encoder is introduced to further compress the binary codes and the importance map. Experiments clearly show the superiority of our model in retaining structures and removing artifacts, leading to remarkable visual quality.

References

Appendix A Network Architecture

Layer | Activation size
Input
conv, pad, stride
Residual block, 128 filters
conv, pad, stride
Residual block, 256 filters
Residual block, 256 filters
conv, pad, stride
Table 1: Network architecture of the convolutional encoder.
Layer Activation size
Input
conv, pad , stride
Residual block, 512 filters
Residual block, 512 filters
Depth to Space, stride 2
conv, pad , stride
Residual block, 256 filters
Depth to Space, stride 4
conv, pad , stride
conv, pad , stride
Table 2: Network architecture of the convolutional decoder.

Table 1 and Table 2

give the network architectures of the convolutional encoder and decoder, respectively. Except for the last layer, each convolutional layer is followed by a ReLU nonlinearity. For the encoder, the last convolutional layer is followed by a sigmoid nonlinearity to ensure that the output of the encoder lies in the interval [0, 1]. As to the decoder, there is no nonlinear layer after the last convolutional layer. For each residual block, we stack two convolutional layers and remove the batch normalization layers. The architecture of the residual blocks is shown in Figure 8.

Figure 8: Structure of the residual blocks.

Appendix B Binarizing scheme for importance map

The importance map is a part of our code. In order to compress the importance map with the binary arithmetic coding method, we should first binarize it. In this work, we simply adopt the binary representation of the quantized importance map to generate a binary importance map with t feature maps, where t satisfies 2^t ≥ L and L is the number of importance levels. Given the quantized importance map q, the k-th binary map b_k is obtained by the equation below.

b_kij = ⌊ q_ij / 2^(k−1) ⌋ mod 2.   (13)

With Eqn. (13), the binary importance map can be easily calculated from the quantized importance map.
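This bit-plane decomposition is easy to sketch and to verify for losslessness (a toy NumPy illustration with example values, not the authors' code):

```python
import numpy as np

def to_bitplanes(q, L):
    """Decompose the quantized importance map q (integer values in
    0..L-1) into t binary maps, t = ceil(log2(L)), where the k-th map
    holds the k-th bit: b_k = floor(q / 2**(k-1)) mod 2."""
    t = int(np.ceil(np.log2(L)))
    return np.stack([(q >> (k - 1)) & 1 for k in range(1, t + 1)])

q = np.array([[0, 5], [3, 7]])        # example quantized map, L = 8
planes = to_bitplanes(q, L=8)         # shape (3, 2, 2)

# reconstruct to verify the representation is lossless
recon = sum(planes[k] << k for k in range(planes.shape[0]))
print(np.array_equal(recon, q))       # True
```

Each binary plane is then fed to the same convolutional entropy encoder used for the binary codes.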

Appendix C Experiments supplementary

Figure 9: Comparison between our model with and without importance map.

The compressed images by our model with and without the importance map are shown in Figure 9. More detailed textures and better visual quality can be obtained by using the importance map. This indicates that the introduced importance map provides our model with more capacity to model the textures and edges in low bit rate image compression.

More high-resolution results can be found at http://www2.comp.polyu.edu.hk/~15903062r/content-weighted-image-compression.html. A longer version of our paper with more experimental results in the appendix is also available at this site.