Learning based Facial Image Compression with Semantic Fidelity Metric

12/25/2018 · Zhibo Chen et al. · USTC

Surveillance and security scenarios usually require highly efficient facial image compression schemes for face recognition and identification. However, both traditional general image codecs and special facial image compression schemes only heuristically refine the codec according to a face verification accuracy metric. We propose a Learning based Facial Image Compression (LFIC) framework with a novel Regionally Adaptive Pooling (RAP) module whose parameters can be automatically optimized according to gradient feedback from an integrated hybrid semantic fidelity metric, including a successful exploration of applying a Generative Adversarial Network (GAN) as a metric directly in an image compression scheme. The experimental results verify the framework's efficiency by demonstrating bit rate savings of 71.41% and 52.67% over JPEG2000 and neural network-based codecs respectively, under the same face verification accuracy distortion metric. We also evaluate LFIC's superior performance gain compared with the latest specific facial image codecs. Visual experiments also show some interesting insight into how LFIC can automatically capture the information in critical areas based on semantic distortion metrics for optimized compression, which is quite different from the heuristic way of optimization in traditional image compression algorithms.







1 Introduction

Face verification/recognition has been developing rapidly in recent years, facilitating a wide range of intelligent applications such as surveillance video analysis, mobile authentication, etc. Since these frequently used applications generate a huge amount of data that must be transmitted or stored, a highly efficient facial image compression scheme is broadly required, as illustrated in Fig. 1.

Basically, facial image compression can be regarded as a special application of general image compression technology. The evolution of general image/video compression techniques has focused on continuously improving rate distortion performance, viz. reducing the compressed bit rate under the same distortion between the reconstructed pixels and the original pixels, or reducing the distortion under the same bit rate. The apparent question is how to define the distortion metric, especially for a specific application scenario such as face recognition in surveillance. We can classify distortion into three levels of distortion metrics: Pixel Fidelity, Perceptual Fidelity, and Semantic Fidelity, according to different levels of human cognition of image/video signals.

Figure 1: A highly efficient facial image compression scheme is broadly required (indicated by the blue arrow) in a wide range of intelligent applications.

The most common metric is Pixel Fidelity, which measures the pixel-level difference between the original image and the compressed image; e.g., MSE (Mean Square Error) has been widely adopted in many existing image and video coding techniques and standards (e.g., MPEG-2, H.264, HEVC, etc.). It can be easily integrated into an image/video hybrid compression framework as an in-loop metric for rate-distortion-optimized compression. However, it is obvious that a pixel fidelity metric cannot fully reflect human perceptual viewing experience Wan and Bovik (2009). Therefore, many researchers have developed Perceptual Fidelity metrics to investigate objective metrics measuring human subjective viewing experience Chen et al. (2016). With the development of the aforementioned intelligent applications, image/video signals will be captured and processed for semantic analysis. Consequently, there will be increasingly more requirements for research on Semantic Fidelity metrics to study the semantic difference (e.g., the difference in verification accuracy) between the original image and the compressed image. There are few research works in this area Chopra et al. (2005); Zhang et al. (2015), and they are usually task-specific.
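To make the pixel fidelity level concrete, the following is a minimal sketch (plain Python over flattened pixel lists) of MSE and the PSNR value derived from it, the two quantities most commonly reported by the codecs mentioned above:

```python
import math

def mse(orig, recon):
    """Mean Square Error between two equally sized pixel sequences."""
    assert len(orig) == len(recon)
    return sum((a - b) ** 2 for a, b in zip(orig, recon)) / len(orig)

def psnr(orig, recon, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB; higher means closer to the original."""
    err = mse(orig, recon)
    if err == 0:
        return float("inf")
    return 10.0 * math.log10(peak ** 2 / err)

orig  = [52, 55, 61, 66]
recon = [50, 54, 63, 66]
print(mse(orig, recon))              # 2.25
print(round(psnr(orig, recon), 2))   # 44.61
```

As the text notes, a high PSNR does not guarantee that semantically critical regions (eyes, nose) survive compression, which motivates the semantic fidelity metric used later in the paper.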

The aforementioned distortion metrics provide criteria to measure the quality of reconstructed content. However, the ultimate target of image quality assessment is not only to measure the quality of images with different levels of distortion, but also to apply these metrics to optimize image compression schemes. There is a contradiction here: most sophisticated, high-performance quality metrics cannot be easily integrated into an image compression loop. Some research works have tried to do this by adjusting image compression parameters (e.g., quantization parameters) heuristically according to embedded quality metrics Alakuijala et al. (2017); Liu et al. (2017), but they are still not fully automatically optimized end-to-end image encoders with integrated complicated distortion metrics.

Figure 4: The visualization of gradient feedback from (a) MSE; (b) the integrated face verification metric, which shows that more focus is on the distinguishable regions (e.g., eye, nose) according to such semantic distortion metric.

In this paper, we try to solve this problem by developing a Learning based Facial Image Compression (LFIC) framework, making it feasible to automatically optimize coding parameters according to gradient feedback from the integrated hybrid facial image distortion metric calculation module. Different from the traditional hybrid coding framework with prediction, transform, quantization and entropy coding modules, we separate these modules inside or outside the end-to-end loop according to their differentiability. We demonstrate the efficiency of this framework with the simplest prediction, quantization and entropy coding modules. We propose a new module called Regionally Adaptive Pooling (RAP) inside the end-to-end loop to improve the ability to configure compression performance. RAP has the advantage of being able to control bit allocation according to the distortion metrics' feedback under a given bit budget. Face verification accuracy is adopted as one semantic distortion metric to be integrated into the LFIC framework.

Although we adopt the simplest prediction, quantization and entropy coding modules, the LFIC framework shows great improvement over traditional codecs such as JPEG2000 and WebP and over neural network-based codecs, and also demonstrates much better performance compared with existing specific facial image compression schemes. The visualization in Fig. 4 shows that more focus is placed on the distinguishable regions (e.g., eyes, nose) according to the face verification metric. It also demonstrates that our LFIC framework can automatically capture the information in critical areas based on a semantic distortion metric.

In general, our contributions are fourfold: 1) a Learning based Facial Image Compression framework; 2) a novel pooling strategy called RAP; 3) a successful exploration of applying a Generative Adversarial Network (GAN) as a metric directly in compression; 4) an initial exploration of semantic-based image compression.

2 Related Work

2.1 Image Compression

For image compression, standard image codecs such as JPEG, JPEG2000 and WebP have been widely used and have made remarkable achievements in general applications over the past few decades. However, it is becoming increasingly difficult for such compression schemes to meet the needs of advanced semantic analysis applications. There are some preliminary heuristic explorations: Alakuijala et al. adopted a perceptual-level distortion metric to optimize the JPEG encoder Alakuijala et al. (2017), and Prakash et al. enhanced the JPEG encoder by highlighting semantically salient regions Prakash et al. (2017).

There are also some face-specific image compression schemes proposed in the literature; several attempts Elad et al. (2007); Bryt and Elad (2008); Ram et al. (2014); Ferdowsi et al. (2015) have been made to design dictionary-based coding schemes for this specific image type. Moreover, face verification in the compressed domain is another solution due to its lower computational complexity Delac et al. (2008).

Recently, image compression with neural networks has attracted increasing interest. Ballé et al. optimized a model consisting of nonlinear transformations for a perceptual metric Ballé et al. (2016) and for MSE Ballé et al. (2017), and relaxed the discontinuous quantization with additive uniform noise to enable end-to-end training. Theis et al. Theis et al. (2017) used a similar architecture but dealt with quantization and entropy rate estimation in a different way. Consistent with the architecture of Theis et al. (2017), Agustsson et al. Agustsson et al. (2017) trained with a soft-to-hard entropy minimization scheme in the context of model and feature compression. Dumas et al. Dumas et al. (2017) introduced a competition mechanism between image patches bound to sparse representation. Li et al. Li et al. (2018) achieved spatial bit allocation for image compression by introducing an importance map. Jiang et al. Jiang et al. (2017) realized a super-resolution-based image compression algorithm. As variable rate encoding is a fundamental requirement for compression, some efforts Toderici et al. (2016, 2017); Johnston et al. (2018) have been devoted to using autoencoders in a progressive manner, growing with the number of recurrent iterations. On the basis of these progressive autoencoders, Baig et al. introduced an inpainting scheme that exploits spatial coherence to reduce redundancy in images Baig et al. (2017). Chen et al. proposed an end-to-end framework for video compression Chen et al. (2018). With the rapid development of GANs, it has been proved possible to adversarially generate images from a compact representation Santurkar et al. (2017); Rippel and Bourdev (2017). In the last few months, the modeling of latent representations has become an emerging direction Ballé et al. (2018); Mentzer et al. (2018). Typically, these works learn a probability model of the latent distribution to improve the efficiency of entropy coding.

However, most of the aforementioned works either employed pixel fidelity and perceptual fidelity metrics, or optimized by heuristically adjusting the codec's parameters. Instead, our framework is a neural network-based scheme able to automatically optimize coding parameters with integrated hybrid distortion metrics, which demonstrates much higher performance improvement compared with these state-of-the-art solutions.

2.2 Adaptive Pooling

The traditional block-based pooling strategy applied in neural network-based schemes is not suitable for integrated semantic metrics, since most semantic metrics are not block-wise; e.g., the face verification accuracy metric measures the verification accuracy of the whole facial image rather than the accuracy of each block within it. Therefore, we need a new pooling operation able to deal with this issue.

The idea of spatial pooling is to produce informative statistics over a specific spatial area. Given its relatively fixed pattern, several works have aimed at enhancing its flexibility. Some approaches adaptively learned regions that are distinguishable for classification Jia et al. (2012); He et al. (2014). Similar to Jia et al., some works tried to design better spatial regions for pooling to reduce the effect of background noise, with the goal of image classification Liu et al. (2016); Wang et al. (2016) and object detection Tsai et al. (2015). As the traditional pooling operation adopts a fixed block size for each image, we propose a variable block size pooling scheme named RAP, which is configurably optimized on the basis of integrated distortion metrics and provides the ability to preserve higher quality in crucial local areas.

Figure 5: The proposed Learning based Facial Image Compression (LFIC) Framework.

3 Learning based Facial Image Compression Framework

This section introduces the general framework of facial image compression with an integrated general distortion metric, as illustrated in Fig. 5.

Compression Flow. Consistent with conventional codecs, our LFIC framework contains a compression flow and a decompression flow. In the compression flow, an image x is fed into a differentiable encoder E and a quantizer Q and translated into a compact representation c: c = Q(E(x)), where all modules carry learnable parameters (the same holds in the remainder of this section). The quantizer attains a significant amount of data reduction, but c is still statistically redundant. Therefore, we further perform several generic or specific lossless compression schemes (i.e., transformation, prediction, entropy coding), formulated as b = F(c), to achieve higher coding efficiency. After the lossless compression, c is encoded into a bitstream b that can be directly delivered to a storage device, a dedicated link, etc.

Decompression Flow. In the decompression flow, due to the reversibility of the lossless compression, the compact representation can be recovered from the channel by c = F⁻¹(b). The reconstructed image x̂ is ultimately obtained by a differentiable decoder D: x̂ = D(c), where x̂ approximates x.

Metric Transformation Flow. As mentioned before, a general distortion metric calculation module is integrated into our LFIC framework. This motivates the use of a transformation T that bridges the gap between the pixel domain and the metric domain. We expect that the difference between s = T(x) and ŝ = T(x̂), generated from x and x̂ respectively, represents the distortion measured in our desired metric domain (i.e., pixel fidelity domain, perceptual fidelity domain, or semantic fidelity domain). After that, the loss can be propagated back to each component of the compression-decompression flow (i.e., E, Q and D), each of which needs to be differentiable.
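As a toy illustration of how these stages compose, the sketch below uses hypothetical stand-ins for each module: in the paper, the encoder, quantizer and decoder are neural networks and the lossless stage is transform/prediction/entropy coding, whereas here they are simple list operations chosen only to make the flow executable.

```python
# Hypothetical stand-ins for the LFIC flow; the paper's E, Q, D are neural
# networks and the lossless stage F is transform/prediction/entropy coding.
def encode(x):                 # E: average adjacent pixel pairs (toy encoder)
    return [(a + b) / 2 for a, b in zip(x[::2], x[1::2])]

def quantize(c, step=4):       # Q: uniform scalar quantization
    return [round(v / step) for v in c]

def lossless(c):               # F: placeholder for the reversible stage
    return list(c)

def unlossless(b):             # F^-1: exact inverse of F
    return list(b)

def decode(c, step=4):         # D: dequantize, then upsample by repetition
    return [v * step for v in c for _ in (0, 1)]

x = [10, 12, 40, 44, 90, 86]
bitstream = lossless(quantize(encode(x)))   # compression flow
x_hat = decode(unlossless(bitstream))       # decompression flow
print(x_hat)                                # [12, 12, 40, 40, 88, 88]
```

Note that all the loss in this round trip comes from E and Q; the lossless stage is perfectly invertible, which is exactly why it can be placed outside the end-to-end loop.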

Gradients Flow. Since E and D are both differentiable, the only inherently non-differentiable step is quantization, which poses an undesirable obstacle for end-to-end optimization with gradient-based techniques. Some effective algorithms have been developed to tackle this challenging problem Ballé et al. (2016); Toderici et al. (2016). We follow Theis et al. (2017), which regards quantization as rounding and replaces its derivative in backpropagation:

d[x]/dx := dr(x)/dx,

where r is a smooth approximation of rounding and the square brackets [·] denote rounding a real number to the nearest integer. We set r(x) = x here, which means performing backpropagation through rounding without modification. In general, the gradient of the loss ℓ with respect to the input image can be formulated as:

∂ℓ/∂x = (∂ℓ/∂x̂) · (∂D(c)/∂c) · (∂E(x)/∂x),

where the derivative of the rounding step has been replaced by r'(x) = 1 as above.
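The straight-through trick above can be sketched numerically in a framework-agnostic way: the forward pass rounds, while the backward pass hands the upstream gradient through unchanged (real implementations override the gradient of the rounding op, e.g. via a custom-gradient mechanism).

```python
def round_forward(x):
    """Forward pass: ordinary rounding to the nearest integer."""
    return round(x)

def round_backward(grad_out):
    """Backward pass: straight-through estimator. We treat d[x]/dx as
    r'(x) with r(x) = x, i.e. pass the gradient through unmodified
    (the true derivative of rounding is zero almost everywhere)."""
    return grad_out

# Toy check on loss(x) = (round(x) - t)^2 at x = 2.3, t = 5:
x, t = 2.3, 5.0
y = round_forward(x)          # forward: y = 2
dloss_dy = 2.0 * (y - t)      # upstream gradient: 2 * (2 - 5) = -6.0
dloss_dx = round_backward(dloss_dy)
print(y, dloss_dx)            # 2 -6.0
```

Without the straight-through replacement, dloss_dx would be zero everywhere and no gradient could reach the encoder.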
In a word, throughout the entire pipeline of our framework, we separate the distinct modules inside or outside the end-to-end loop according to their differentiability. Modules like E, Q and D are placed inside the loop; the parameters in these modules can be updated according to the gradients back-propagated from the loss measured by the distortion metric. Modules like F and F⁻¹ are placed outside the loop, since they are non-differentiable but reversible.

4 Semantic-oriented Facial Image Compression

As described in Sec. 3, compared with an ordinary compression pipeline, the main advantage of semantic-oriented compression is that we can automatically optimize the parameters of the encoder E, quantizer Q and decoder D according to the semantic distortion metric supported by the metric transformation T. Significantly, we can preserve semantic features while reducing redundancy in a fully automatic way, rather than by heuristically tuning coding parameters according to distortion metrics, which is very important for future intelligent media applications that need to transmit images compressed while preserving semantic fidelity.

Figure 6: Illustration of the facial image compression scheme adopted in this paper. The notations are consistent with Fig. 5.

As mentioned in the introduction section, the proposed semantic-oriented facial image compression scheme incorporates the proposed pooling strategy, RAP (Regionally Adaptive Pooling), into the network, which is a differentiable and lossy operation that can use variable block size pooling for each image.

After RAP, we then implement a simple prediction, e.g., predicting each quantized value from its previously coded left neighbor and transmitting only the residual:

r(i, j) = c(i, j) − c(i, j−1),

where (i, j) denote the coordinates of pixels. The output of the prediction is followed by arithmetic coding. As shown previously, we use a transformation T to send back the error measured in the semantic domain, which will be illustrated in more detail in the next section.
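A minimal sketch of such a left-neighbor (DPCM-style) prediction and its exact inverse is shown below; the residuals it produces are what the arithmetic coder would consume (the function names are illustrative, not from the paper):

```python
def dpcm_encode(row):
    """Predict each sample from its left neighbor; emit residuals.
    The first sample is transmitted as-is."""
    res = [row[0]]
    for j in range(1, len(row)):
        res.append(row[j] - row[j - 1])
    return res

def dpcm_decode(res):
    """Invert the prediction by accumulating residuals."""
    row = [res[0]]
    for r in res[1:]:
        row.append(row[-1] + r)
    return row

q = [7, 7, 8, 8, 8, 6]           # a row of quantized values
print(dpcm_encode(q))            # [7, 0, 1, 0, 0, -2]
assert dpcm_decode(dpcm_encode(q)) == q
```

The residual sequence is strongly peaked around zero for smooth signals, which is what makes the subsequent arithmetic coding effective.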

4.1 Encoding with RAP

A pooling layer is commonly employed to down-sample an input representation immediately after a convolutional layer in neural networks, on the assumption that features are contained in the sub-regions. In most widely used neural network structures, the pooling blocks do not overlap and a fixed block size is used for each image. However, in the context of image compression, such a fixed block size pooling scheme is improper in the case of heterogeneous texture distribution. To address this issue and increase flexibility, we propose RAP, a variable block size pooling scheme. The choice of block size used in each sub-region is represented as a mask.

Suppose an input image X ∈ R^(H×W×C), where H, W, C denote the height, width and channels of X respectively. The output of a non-overlapping pooling operation with fixed block size s can be denoted as P_s(X) ∈ R^((H/s)×(W/s)×C). We then interpolate each P_s(X) back to H×W×C, denoted U_s(P_s(X)), and concatenate the results along the last dimension:

F = concat( U_1(P_1(X)), U_2(P_2(X)), …, U_S(P_S(X)) ),

where s ∈ {1, 2, …, S} and S indicates the maximum block size. We define a mask M over spatial locations; the output of RAP can be formulated as:

Y(i, j, k) = U_{M(i,j)}( P_{M(i,j)}(X) )(i, j, k),

where i, j, k denote the indexes of height, width and channel respectively.
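The idea can be sketched in a few lines of plain Python for a single-channel image and two candidate block sizes; the mask picks, per location, which pooled-then-upsampled variant supplies the output value (a minimal sketch, not the paper's network implementation):

```python
def avg_pool_upsample(img, s):
    """Non-overlapping s x s average pooling, then nearest-neighbor
    upsampling back to the input size (one RAP variant P_s followed by U_s)."""
    H, W = len(img), len(img[0])
    out = [[0.0] * W for _ in range(H)]
    for bi in range(0, H, s):
        for bj in range(0, W, s):
            block = [img[i][j] for i in range(bi, bi + s)
                               for j in range(bj, bj + s)]
            m = sum(block) / len(block)
            for i in range(bi, bi + s):
                for j in range(bj, bj + s):
                    out[i][j] = m
    return out

def rap(img, mask, sizes=(1, 2)):
    """Regionally Adaptive Pooling sketch: per location (i, j), the mask
    selects which block-size variant supplies the output value."""
    variants = [avg_pool_upsample(img, s) for s in sizes]
    H, W = len(img), len(img[0])
    return [[variants[mask[i][j]][i][j] for j in range(W)] for i in range(H)]

img  = [[1, 3], [5, 7]]
mask = [[0, 1], [1, 1]]    # 0 -> keep pixel (size 1), 1 -> 2x2 average (4.0)
print(rap(img, mask))      # [[1.0, 4.0], [4.0, 4.0]]
```

Locations masked with the smaller block size survive at full fidelity, which is how RAP spends more bits on semantically critical regions.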

Algorithm 1 Updating Scheme of the Mask M

 1: procedure UpdateMask                ▷ at encoding time
    …
 6:     top:
 7:     if … or … then return false
 8:     loop:
    …
11:     for (i, j) in {(i, j)} do
    …
14:         count ← count + 1
15:     goto top

In the training stage, the mask M is randomly initialized to facilitate a robust learning process of the neural networks. In the testing stage, at encoding time, M is adaptively determined by: 1) a given bit rate budget; 2) the gradient feedback from the integrated semantic distortion metrics. We first initialize M, then automatically update it according to Alg. 1. In practice, we first set a constraint on the mask according to the bit rate budget, then adjust the mask based on gradient feedback. For example, a smaller block size will be used at the locations determined by gradient feedback if the bit rate budget is adequate. We encode M with arithmetic coding as overhead (around 5%-10% of the total bit rate).
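One plausible greedy reading of this update (a hypothetical sketch, not the paper's exact Alg. 1; the per-block bit costs here are invented for illustration): start with the coarsest block size everywhere, then refine the locations with the largest gradient magnitude while the bit budget allows.

```python
def update_mask(grad_mag, budget, cost_fine=4, cost_coarse=1):
    """Greedy mask update sketch. grad_mag[i][j] is the semantic-metric
    gradient magnitude at location (i, j); costs are hypothetical per-block
    bit prices. Returns a mask: 1 = coarse block, 0 = fine block."""
    H, W = len(grad_mag), len(grad_mag[0])
    mask = [[1] * W for _ in range(H)]       # start coarse everywhere
    spent = H * W * cost_coarse
    # visit locations in order of descending gradient feedback
    locs = sorted(((grad_mag[i][j], i, j)
                   for i in range(H) for j in range(W)), reverse=True)
    for g, i, j in locs:
        if spent + (cost_fine - cost_coarse) > budget:
            break                            # bit budget exhausted
        mask[i][j] = 0                       # refine this location
        spent += cost_fine - cost_coarse
    return mask

grads = [[0.9, 0.1], [0.2, 0.8]]
print(update_mask(grads, budget=10))   # [[0, 1], [1, 0]]
```

With this budget, only the two highest-gradient locations (top-left and bottom-right) are refined, mirroring the paper's observation that eyes and noses attract the finer block sizes.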

At decoding time, since the mask M (transmitted as overhead) is available, we can completely restore the compact representation after arithmetic decoding. Finally, the reconstructed image is obtained by the decoder D.

Different from autoencoder-based compression, RAP provides the ability to preserve crucial local features that have a great impact on face verification (e.g., the regions around the eyes have larger gradients, so these regions should be pooled with smaller block sizes), which adds support for spatially adaptive bit allocation to our LFIC framework. We also believe that RAP has the potential to be embedded in widely used convolutional neural network structures to provide strong flexibility. The experimental results demonstrate that RAP serves as a promising encoder component, and the restored faces retain their semantic properties very well.

4.2 Decoding with Adversarial Networks

As a fast-growing architecture in the field of neural networks, GANs have achieved impressive success in many tasks. We apply such a generative model directly to compression to reduce reconstruction error. We employ a discriminator trained simultaneously with the decoder, to force the decoded images to be indistinguishable from real images and to constrain the reconstruction process by incorporating prior knowledge of the face distribution. Since standard GAN training procedures usually result in mode collapse and unstable training Mao et al. (2017), we adopt the least squares loss function of LSGAN Mao et al. (2017), which minimizes the Pearson χ² divergence instead of the Jensen-Shannon divergence Goodfellow et al. (2014). Our adversarial loss can be defined as follows:

L_adv(Disc) = ½ E_x[(Disc(x) − 1)²] + ½ E_x̂[Disc(x̂)²],
L_adv(D)    = ½ E_x̂[(Disc(x̂) − 1)²],

where Disc is the discriminator and D is the decoder acting as generator. Adversarial losses can, in theory, urge the reconstructed data to follow the original distribution. However, a network with large enough capacity can learn arbitrary mapping functions between these two distributions, which cannot guarantee that the learned mapping produces the desired reconstructed images. Therefore, a constraint on the mapping function is needed to reduce the space of mapping functions. This calls for the employment of a pixel-wise L1 loss for content consistency:

L_1 = ‖x − x̂‖₁.
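The least-squares adversarial losses and the L1 content loss described above can be computed in a few lines; here they operate on plain lists of discriminator scores and pixels (a minimal sketch of the standard LSGAN objectives, averaged per sample):

```python
def lsgan_d_loss(d_real, d_fake):
    """LSGAN discriminator loss: push scores on real images toward 1
    and scores on reconstructions toward 0."""
    n = len(d_real)
    return (sum((r - 1.0) ** 2 for r in d_real)
            + sum(f ** 2 for f in d_fake)) / (2 * n)

def lsgan_g_loss(d_fake):
    """LSGAN generator (decoder) loss: push scores on reconstructions
    toward 1, i.e. toward being judged real."""
    return sum((f - 1.0) ** 2 for f in d_fake) / (2 * len(d_fake))

def l1_loss(x, x_hat):
    """Pixel-wise L1 content consistency between original and reconstruction."""
    return sum(abs(a - b) for a, b in zip(x, x_hat)) / len(x)

print(lsgan_g_loss([0.5, 0.0]))          # (0.25 + 1.0) / 4 = 0.3125
print(l1_loss([1.0, 2.0], [1.5, 2.5]))   # 0.5
```

Note the squared (least-squares) penalties in place of the log terms of the original GAN, which is exactly the substitution that stabilizes training here.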

Layer  RBs  Input            Filter Size / Stride  BN  Activation  Output
1      1    input            … / 2                 Y   ReLU        conv1
2      2    conv1            … / 2                 Y   ReLU        conv2
3      2    conv2            … / 2                 Y   ReLU        conv3
4      3    conv3            … / 2                 Y   ReLU        conv4
5      2    conv4            … / 2                 Y   ReLU        deconv5
6      2    deconv5, conv3   … / 2                 Y   ReLU        deconv6
7      1    deconv6, conv2   … / 2                 Y   ReLU        deconv7
8      -    deconv7, conv1   … / 2                 Y   ReLU        deconv8
9      1    deconv8, input   … / 1                 Y   ReLU        conv9
10     1    conv9            … / 1                 Y   ReLU        conv10
11     -    conv10           … / 1                 N   Tanh        conv11
Table 1: Details of our decoder architecture. Each convolutional or deconvolutional layer is optionally followed by several RBs (Residual Blocks).

Layer  Type  Input          Filter Size / Stride  BN  Activation  Output
1      Conv  input          3×3 / 1               Y   ReLU        conv1
2      Conv  conv1          3×3 / 1               N   ReLU        conv2
3      Add   input, conv2   -                     -   -           output
Table 2: Detailed architecture of an RB (Residual Block)

Decoder Architecture. Previous work He et al. (2016) has shown that residual learning has the potential to train very deep convolutional neural networks. We employ several Convolution-BatchNorm-ReLU modules Ioffe and Szegedy (2015) and residual modules based on a symmetric skip-connection architecture for the decoder, which allows connections between a convolutional layer and its mirrored deconvolutional layer (Tab. 1). Any extra inputs are specified in the Input column. Such a design mixes the information of features extracted at various layers, and prevents training from suffering from gradient vanishing. For the discriminator network, we follow DCGAN Radford et al. (2016), except for the least squares loss function.

4.3 Training with Semantic Distortion Metric

Our main goal is to obtain a compact representation, and ideally, such a representation is expressive enough to rebuild the data under the semantic distortion metric. As shown previously, each in-loop operation in our framework is differentiable, guaranteeing that the error can be propagated back.

For facial compression, we select FaceNet Schroff et al. (2015) as the metric transformation T, a neural-network-based tool that maps face images to a compact Euclidean space. Such a space amplifies distances between faces of distinct people, while reducing distances between faces of the same person. This model is pre-trained with a triplet loss and a center loss Wen et al. (2016). Specifically, we adopt an L2 loss to facilitate semantic preservation in the encoder and decoder:

L_sem = ‖T(x) − T(x̂)‖₂².
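Given embedding vectors for the original and the reconstruction, this semantic loss is just a squared Euclidean distance in the metric space; the embedding values below are hypothetical placeholders for FaceNet outputs:

```python
def semantic_loss(emb_orig, emb_recon):
    """Squared L2 distance between metric-domain embeddings
    (in the paper, FaceNet embeddings T(x) and T(x_hat))."""
    return sum((a - b) ** 2 for a, b in zip(emb_orig, emb_recon))

s     = [0.1, -0.3, 0.7]   # hypothetical embedding of the original
s_hat = [0.1, -0.1, 0.4]   # hypothetical embedding of the reconstruction
print(round(semantic_loss(s, s_hat), 4))   # 0.0 + 0.04 + 0.09 = 0.13
```

Because the embedding network's parameters are frozen during compression training, the gradient of this loss flows back through T into the decoder, quantizer and encoder only.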
4.4 Full Objective

The overall objective is a weighted sum of the three individual objectives:

L = λ₁ L_1 + λ_adv L_adv + λ_sem L_sem,

where the λ's balance content consistency, adversarial realism and semantic preservation. In practice, we also attempted a regularization term and replacing the L1 norm with MSE, but did not observe obvious performance improvement.

5 Experiments

In this section, we introduce the dataset and the specific experimental details for facial compression. We compare our proposed method with traditional codecs and specific facial image compression methods. The results demonstrate that our method not only produces more visually pleasing images at very low bit rates, but is also good at preserving semantic information in the face verification scenario.

Dataset. We used the publicly available aligned CelebA dataset Liu et al. (2015) to train our model. CelebA contains 10,177 identities and 202,599 facial images. We eliminated faces that cannot be detected by dlib (http://dlib.net/) and faces judged to be profiles based on landmark annotations. The remaining images were cropped and randomly divided into a training set (100,000 images, 9,014 identities) and a testing set (14,871 images, 1,870 identities).

Evaluation. We adopt the accuracy (10-fold cross-validation) of face verification to represent the ability of semantic preservation; lower verification accuracy represents higher semantic distortion introduced during compression. Face verification is a binary classification task: given a pair of images, determine whether the two pictures show the same individual. We randomly generate 6,000 pairs from the testing set for face verification, with positive and negative samples split half and half. The bitstream cost is reported in Bits Per Pixel (BPP). The Peak Signal-to-Noise Ratio (PSNR) is calculated over the RGB channels. Furthermore, to calculate the equivalent rate distortion difference between two compression schemes, we refer to Bjontegaard (2001), which is widely used in international image/video compression standards. The only difference in our implementation is that we replace BPS (Bits Per Second) with BPP as the rate index, and replace PSNR with face verification accuracy as the distortion index.
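A simplified version of this Bjontegaard-style rate saving can be sketched as follows; it uses piecewise-linear interpolation of log(BPP) over the overlapping accuracy range, whereas the standard method fits cubic polynomials, and the operating points and function name below are hypothetical.

```python
import math

def bd_rate_simplified(anchor, test, steps=100):
    """Average bit rate change (%) of `test` vs `anchor`, where each curve
    is a list of (accuracy, bpp) points. Piecewise-linear stand-in for the
    cubic-fit Bjontegaard delta, with accuracy as the distortion index."""
    def interp(points, a):
        pts = sorted(points)                      # accuracy ascending
        for (a0, r0), (a1, r1) in zip(pts, pts[1:]):
            if a0 - 1e-9 <= a <= a1 + 1e-9:
                t = (a - a0) / (a1 - a0)
                return math.log(r0) + t * (math.log(r1) - math.log(r0))
        raise ValueError("accuracy outside curve range")
    lo = max(min(a for a, _ in anchor), min(a for a, _ in test))
    hi = min(max(a for a, _ in anchor), max(a for a, _ in test))
    diffs = [interp(test, lo + (hi - lo) * k / steps)
             - interp(anchor, lo + (hi - lo) * k / steps)
             for k in range(steps + 1)]
    avg = sum(diffs) / len(diffs)                 # mean log-rate difference
    return (math.exp(avg) - 1.0) * 100.0          # percent rate change

# hypothetical (accuracy, BPP) operating points
anchor = [(0.90, 0.20), (0.95, 0.40)]
test   = [(0.90, 0.10), (0.95, 0.20)]
print(round(bd_rate_simplified(anchor, test), 1))   # -50.0
```

Here the test codec needs exactly half the rate at every accuracy, so the metric reports a 50% bit rate saving; negative values in Tables 3 and 4 are read the same way.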

Pre-train for metric transformation. In our task, the metric transformation T plays an important role in bridging the pixel domain and the semantic domain, where the distortion is measured to provide gradient feedback. We employed FaceNet Schroff et al. (2015), a learned mapping that translates facial images to a compact Euclidean space in which distance represents facial similarity. We use parameters pre-trained on the MS-Celeb-1M dataset (https://github.com/davidsandberg/facenet), with a competitive classification accuracy of 99.3% on LFW Huang et al. (2007) and 97.5% on our generated pairs. During the training stage of the compression network, the parameters of the metric transformation are fixed to ensure the reliability of the semantic distortion measurement.

Implementation details.

We implement all modules in TensorFlow, with training executed on NVIDIA Tesla K80 GPUs. We employ Adam optimization Kingma and Ba (2015) to train our network with a learning rate of 0.0001. All parameters are trained for 20,000 iterations (64 images per iteration), which costs about 24 GPU-hours. We heuristically set the three loss weights in the experiments; the principle is to increase the weights of the adversarial and semantic parts as much as possible while avoiding disturbance to the subjective quality of the reconstructions. We adopt block sizes of 4 and 8 as an instance to demonstrate the effectiveness of our scheme. We have also conducted experiments with other block sizes (e.g., 16 or 32), but they do not bring much rate distortion performance gain compared with the current settings.

Figure 7: Qualitative results of our model (RAP) compared with JPEG2000 and WebP. Each value is averaged over the testing set. ACC denotes the accuracy of face verification. Our results are obtained by training with 24 and 8 quantization levels respectively. The WebP codec cannot compress images to a bit rate lower than 0.193 BPP. It is worth noting that, as demonstrated in the first two columns, our scheme tends to learn the key facial structure rather than the color of the girl's hair or the elderly man's clothes.

Figure 8: Detailed comparison between RAP and RFP. The first and fourth columns are images decoded by RFP, while the third and sixth columns are images decoded by RAP at a comparable bit rate. RAP clearly demonstrates much better performance in preserving distinguishable details than RFP.

5.1 Comparison against Typical Image Compression Schemes

We compare our model against typical widely used image compression codecs (JPEG2000, http://www.openjpeg.org/, v2.1.2; WebP, https://developers.google.com/speed/webp/, v0.6.0-rc3), the neural network-based image compression method of Toderici et al. (2017) (https://github.com/tensorflow/models/tree/master/research/compression), and specific facial image compression methods Elad et al. (2007); Bryt and Elad (2008); Ram et al. (2014); Ferdowsi et al. (2015).

We refer to Bjontegaard (2001) to calculate the equivalent rate distortion difference between two compression schemes, replacing BPS with BPP (averaged over the testing set) and PSNR with face verification accuracy, as described in the preceding section. Our results significantly outperform the JPEG2000 and WebP codecs, as well as Toderici's solution, as shown in Table 3.

Considerable effort has been devoted to specific facial image compression Elad et al. (2007); Bryt and Elad (2008); Ram et al. (2014); Ferdowsi et al. (2015). However, instead of automatically optimizing with integrated hybrid metrics (e.g., semantic fidelity), these works adjusted compression parameters heuristically (e.g., bit allocation) and evaluated their methods on gray-scale images with PSNR/SSIM only, achieving 34.20%-47.18% bit rate reduction over JPEG2000 as Tab. 4 demonstrates (a fixed header size of 100 bytes in JPEG2000 is added for all results; these numbers are extracted from the respective papers since the authors do not release their source code for comparison).

Anchor \ Test                            Ours
JPEG2000                                 -71.41%
WebP                                     -48.28%
Toderici et al. Toderici et al. (2017)   -52.67%
Table 3: Ratio of bit rate saving of our scheme compared with benchmarks

Test \ Anchor                            JPEG2000
Elad et al. Elad et al. (2007)           -47.18%
Bryt et al. Bryt and Elad (2008)         -45.55%
Ram et al. Ram et al. (2014)             -34.20%
Ferdowsi et al. Ferdowsi et al. (2015)   -35.22%
Ours                                     -71.41%
Table 4: Comparison with specific facial image compression methods on ratio of bit rate saving relative to JPEG2000

As mentioned in the introduction, pixel fidelity cannot fully reflect semantic difference. For instance, in Fig. 7 we can observe that RAP at 0.110 BPP has a much higher face verification rate and better visual quality than JPEG2000 and WebP at 0.193 BPP, even though RAP has lower PSNR/SSIM in this case. On the other hand, Delac et al. Delac et al. (2008) found that most traditional compression algorithms suffer a significant drop in face verification rate in the bit rate range of 0.2-0.6 BPP. In contrast, our scheme can maintain face verification accuracy without significant deterioration even at the very low bit rate of 0.05 BPP.

We measure the running time of our scheme and the traditional codecs on the same machine (CPU: i7-4790K, GPU: NVIDIA GTX 1080). The overall computational complexity of our implementation is higher than that of WebP. It should be noted that our scheme is only a preliminary exploration of a learning-based framework for image compression, and each part is implemented without any optimization.

Figure 9: Rate Distortion performance analysis. Cubic spline interpolation is used for fitting curves from discrete points.

5.2 Rate Distortion Performance Analysis

To evaluate the effectiveness of spatially adaptive bit allocation, we compared RAP with its non-adaptive counterpart, Regionally Fixed Pooling (RFP), in which all block sizes are fixed. RFP cannot adjust block sizes to achieve variable rates at testing time as RAP does; therefore, we trained RFP models with different quantization steps to realize variable rates for comparison. As Fig. 9 illustrates, as the bit budget increases, the performance of RAP is much higher than that of RFP. We also provide a detailed comparison in Fig. 8, which demonstrates that RAP automatically preserves better quality than RFP in the distinguishable regions at the same bit rate.

We also analyze the influence of the adversarial loss and the semantic loss by comparing the performance of RAP/RFP, RAP/RFP without adversarial loss (w/o GAN) and RAP/RFP without semantic loss (w/o Sem). We observe that both losses contribute to our results, and the semantic loss shows a much higher influence than the adversarial loss. Note that RAP without the semantic loss is worse than RFP, due to its failure to allocate more bits to the distinguishable regions as illustrated in Fig. 4.

6 Conclusion

We introduce the LFIC framework, which integrates the proposed Regionally Adaptive Pooling module and a general semantic distortion metric calculation module for task-driven facial image compression. LFIC enables the image encoder to automatically optimize the codec configuration according to the integrated semantic distortion metric in an end-to-end manner. Comprehensive experiments demonstrate the superior performance of our proposed framework compared with typical general image codecs and specific facial image codecs. In future work, we expect to refine the prediction and entropy coding modules to further improve compression performance and to apply the framework to more general scenarios.

7 Acknowledgement

This work was supported in part by the National Key Research and Development Program of China under Grant 2016YFC0801001, the National Program on Key Basic Research Projects (973 Program) under Grant 2015CB351803, and NSFC under Grants 61571413, 61632001, and 61390514.

Appendix A More Experiments

Figure 10: The first row shows uncompressed images. Each subsequent row shows the decoded images from JPEG2000 ( BPP), WebP ( BPP), Toderici et al. ( BPP), and RAP ( BPP), respectively.

Figure 11: The first row shows uncompressed images. Each subsequent row shows the decoded images from JPEG2000 ( BPP), Toderici et al. ( BPP), and RAP ( BPP), respectively. The WebP codec cannot compress these images to a bit rate lower than  BPP.

We give more qualitative results in Figure 10 and Figure 11. Note that, as explained in the paper, several specific facial image compression algorithms evaluated their performance using PSNR/SSIM only; their reported bit rate reductions over JPEG2000 range from % to %, which we extracted from their published papers since the authors did not release their code for comparison. Moreover, for most specific facial compression algorithms, the verification rate drops significantly below  BPP, whereas our scheme does not deteriorate significantly even below  BPP. We also compare our results with Toderici et al. (https://github.com/tensorflow/models/tree/master/research/compression) without entropy coding, since they did not release their trained entropy coding model.

To achieve these results, we integrate different metrics: the adversarial loss, the L1 loss, and the semantic loss. We leverage the semantic metric to retain identity, while the L1 content loss constrains the mapping between pixel space and semantic space. Accordingly, the weight of each metric is a trade-off in this scheme, and we currently adjust these hyperparameters heuristically. The principle is to increase the weight of the semantic part as much as possible while avoiding disturbance to the subjective quality of the reconstructions.
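The weighted combination described above can be sketched as follows; this is a hedged NumPy illustration, not the paper's training code, and the weight values and function names are placeholders rather than the tuned hyperparameters:

```python
# Sketch of a hybrid objective combining L1 (content), semantic (identity
# embedding), and least-squares adversarial terms. Weights are illustrative
# placeholders; the paper tunes them heuristically.
import numpy as np

def hybrid_loss(recon, target, emb_recon, emb_target, disc_fake,
                w_l1=1.0, w_sem=10.0, w_adv=0.1):
    l1  = np.abs(recon - target).mean()            # pixel/content term
    sem = ((emb_recon - emb_target) ** 2).mean()   # semantic (identity) term
    adv = ((disc_fake - 1.0) ** 2).mean()          # LSGAN-style adversarial term
    return w_l1 * l1 + w_sem * sem + w_adv * adv
```

Raising `w_sem` pushes the codec to preserve identity-relevant regions, while the L1 and adversarial terms keep the reconstruction anchored to plausible pixel content.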


  • Wan and Bovik (2009) Z. Wan, A. Bovik, Mean squared error: Love it or leave it?, IEEE Signal Processing Magazine (2009) 98–117.
  • Chen et al. (2016) Z. Chen, N. Liao, X. Gu, F. Wu, G. Shi, Hybrid distortion ranking tuned bitstream-layer video quality assessment, IEEE Transactions on Circuits and Systems for Video Technology 26 (2016) 1029–1043.
  • Chopra et al. (2005) S. Chopra, R. Hadsell, Y. LeCun, Learning a similarity metric discriminatively, with application to face verification, in: Computer Vision and Pattern Recognition (CVPR), volume 1, IEEE, 2005, pp. 539–546.

  • Zhang et al. (2015) P. Zhang, W. Zhou, L. Wu, H. Li, Som: Semantic obviousness metric for image quality assessment, in: Computer Vision and Pattern Recognition (CVPR), 2015, pp. 2394–2402.
  • Alakuijala et al. (2017) J. Alakuijala, R. Obryk, O. Stoliarchuk, Z. Szabadka, L. Vandevenne, J. Wassenberg, Guetzli: Perceptually guided jpeg encoder, arXiv preprint arXiv:1703.04421 (2017).
  • Liu et al. (2017) D. Liu, D. Wang, H. Li, Recognizable or not: Towards image semantic quality assessment for compression, Sensing and Imaging 18 (2017) 1.
  • Prakash et al. (2017) A. Prakash, N. Moran, S. Garber, A. DiLillo, J. Storer, Semantic perceptual image compression using deep convolution networks, in: Data Compression Conference (DCC), 2017, IEEE, 2017, pp. 250–259.
  • Elad et al. (2007) M. Elad, R. Goldenberg, R. Kimmel, Low bit-rate compression of facial images, IEEE Transactions on Image Processing 16 (2007) 2379–2383.
  • Bryt and Elad (2008) O. Bryt, M. Elad, Compression of facial images using the k-svd algorithm, Journal of Visual Communication and Image Representation 19 (2008) 270–282.
  • Ram et al. (2014) I. Ram, I. Cohen, M. Elad, Facial image compression using patch-ordering-based adaptive wavelet transform, IEEE Signal Processing Letters 21 (2014) 1270–1274.
  • Ferdowsi et al. (2015) S. Ferdowsi, S. Voloshynovskiy, D. Kostadinov, Sparse multi-layer image approximation: Facial image compression, arXiv preprint arXiv:1506.03998 (2015).
  • Delac et al. (2008) K. Delac, S. Grgic, M. Grgic, Image compression in face recognition-a literature survey, in: Recent Advances in Face Recognition, InTech, 2008.
  • Ballé et al. (2016) J. Ballé, V. Laparra, E. P. Simoncelli, End-to-end optimization of nonlinear transform codes for perceptual quality, in: Picture Coding Symposium (PCS), 2016, IEEE, 2016, pp. 1–5.
  • Ballé et al. (2017) J. Ballé, V. Laparra, E. P. Simoncelli, End-to-end optimized image compression, in: International Conference on Learning Representations (ICLR), 2017.
  • Theis et al. (2017) L. Theis, W. Shi, A. Cunningham, F. Huszár, Lossy image compression with compressive autoencoders, in: International Conference on Learning Representations (ICLR), 2017.
  • Agustsson et al. (2017) E. Agustsson, F. Mentzer, M. Tschannen, L. Cavigelli, R. Timofte, L. Benini, L. Van Gool, Soft-to-hard vector quantization for end-to-end learned compression of images and neural networks, in: Advances In Neural Information Processing Systems (NIPS), 2017.
  • Dumas et al. (2017) T. Dumas, A. Roumy, C. Guillemot, Image compression with stochastic winner-take-all auto-encoder, in: International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2017.
  • Li et al. (2018) M. Li, W. Zuo, S. Gu, D. Zhao, D. Zhang, Learning convolutional networks for content-weighted image compression, in: Computer Vision and Pattern Recognition (CVPR), 2018.
  • Jiang et al. (2017) F. Jiang, W. Tao, S. Liu, J. Ren, X. Guo, D. Zhao, An end-to-end compression framework based on convolutional neural networks, IEEE Transactions on Circuits and Systems for Video Technology (2017).
  • Toderici et al. (2016) G. Toderici, S. M. O’Malley, S. J. Hwang, D. Vincent, D. Minnen, S. Baluja, M. Covell, R. Sukthankar, Variable rate image compression with recurrent neural networks, in: International Conference on Learning Representations (ICLR), 2016.
  • Toderici et al. (2017) G. Toderici, D. Vincent, N. Johnston, S. Jin Hwang, D. Minnen, J. Shor, M. Covell, Full resolution image compression with recurrent neural networks, in: Computer Vision and Pattern Recognition (CVPR), 2017.
  • Johnston et al. (2018) N. Johnston, D. Vincent, D. Minnen, M. Covell, S. Singh, T. Chinen, S. J. Hwang, J. Shor, G. Toderici, Improved lossy image compression with priming and spatially adaptive bit rates for recurrent networks, in: Computer Vision and Pattern Recognition (CVPR), 2018.
  • Baig et al. (2017) M. H. Baig, V. Koltun, L. Torresani, Learning to inpaint for image compression, in: Advances In Neural Information Processing Systems (NIPS), 2017.
  • Chen et al. (2018) Z. Chen, T. He, X. Jin, F. Wu, Learning for video compression, arXiv preprint arXiv:1804.09869 (2018).
  • Santurkar et al. (2017) S. Santurkar, D. Budden, N. Shavit, Generative compression, arXiv preprint arXiv:1703.01467 (2017).
  • Rippel and Bourdev (2017) O. Rippel, L. Bourdev, Real-time adaptive image compression, in: International Conference on Machine Learning (ICML), 2017.

  • Ballé et al. (2018) J. Ballé, D. Minnen, S. Singh, S. J. Hwang, N. Johnston, Variational image compression with a scale hyperprior, in: International Conference on Learning Representations (ICLR), 2018.
  • Mentzer et al. (2018) F. Mentzer, E. Agustsson, M. Tschannen, R. Timofte, L. Van Gool, Conditional probability models for deep image compression, in: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 1, 2018, p. 3.
  • Jia et al. (2012) Y. Jia, C. Huang, T. Darrell, Beyond spatial pyramids: Receptive field learning for pooled image features, in: Computer Vision and Pattern Recognition (CVPR), IEEE, 2012, pp. 3370–3377.
  • He et al. (2014) K. He, X. Zhang, S. Ren, J. Sun, Spatial pyramid pooling in deep convolutional networks for visual recognition, in: European Conference on Computer Vision (ECCV), Springer, 2014, pp. 346–361.
  • Liu et al. (2016) Y. Liu, Y.-M. Zhang, X.-Y. Zhang, C.-L. Liu, Adaptive spatial pooling for image classification, Pattern Recognition 55 (2016) 58–67.
  • Wang et al. (2016) J. Wang, W. Wang, R. Wang, W. Gao, Csps: An adaptive pooling method for image classification, IEEE Transactions on Multimedia 18 (2016) 1000–1010.
  • Tsai et al. (2015) Y.-H. Tsai, O. C. Hamsici, M.-H. Yang, Adaptive region pooling for object detection, in: Computer Vision and Pattern Recognition (CVPR), 2015, pp. 731–739.
  • Mao et al. (2017) X. Mao, Q. Li, H. Xie, R. Y. Lau, Z. Wang, S. P. Smolley, Least squares generative adversarial networks, in: International Conference on Computer Vision (ICCV), 2017.
  • Goodfellow et al. (2014) I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, Y. Bengio, Generative adversarial nets, in: Advances in Neural Information Processing Systems (NIPS), 2014, pp. 2672–2680.
  • He et al. (2016) K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, in: Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770–778.
  • Ioffe and Szegedy (2015) S. Ioffe, C. Szegedy, Batch normalization: Accelerating deep network training by reducing internal covariate shift, in: International Conference on Machine Learning (ICML), 2015, pp. 448–456.
  • Radford et al. (2016) A. Radford, L. Metz, S. Chintala, Unsupervised representation learning with deep convolutional generative adversarial networks (2016).
  • Schroff et al. (2015) F. Schroff, D. Kalenichenko, J. Philbin, Facenet: A unified embedding for face recognition and clustering, in: Computer Vision and Pattern Recognition (CVPR), 2015, pp. 815–823.
  • Wen et al. (2016) Y. Wen, K. Zhang, Z. Li, Y. Qiao, A discriminative feature learning approach for deep face recognition, in: European Conference on Computer Vision (ECCV), Springer, 2016, pp. 499–515.
  • Liu et al. (2015) Z. Liu, P. Luo, X. Wang, X. Tang, Deep learning face attributes in the wild, in: International Conference on Computer Vision (ICCV), 2015, pp. 3730–3738.
  • Bjontegaard (2001) G. Bjontegaard, Calculation of average PSNR differences between RD-curves, Doc. VCEG-M33, ITU-T Q6/16, Austin, TX, USA, 2–4 April 2001.
  • Huang et al. (2007) G. B. Huang, M. Ramesh, T. Berg, E. Learned-Miller, Labeled faces in the wild: A database for studying face recognition in unconstrained environments, Technical Report, Technical Report 07-49, University of Massachusetts, Amherst, 2007.
  • Kingma and Ba (2015) D. Kingma, J. Ba, Adam: A method for stochastic optimization, in: International Conference on Learning Representations (ICLR), 2015.