The goal of designing an optimal image codec is to minimize the distortion D between the original image and the reconstructed image subject to a constraint on the bitrate R. As the entropy is the lower bound of the bitrate, the optimization can be formulated as minimizing D + λR, where λ is the tradeoff factor. Recently, many works [4, 18, 14] attempt to develop image compression models based on deep learning architectures. In these approaches, a uniform scalar quantizer (SQ) is commonly applied to the feature maps between the encoder and decoder. As the codewords are distributed on a cubic lattice and the corresponding Voronoi regions induced by SQ are always cubic, SQ cannot achieve the R-D bound. Vector quantization (VQ) has the optimal performance, but its complexity is usually high. The trellis coded quantizer (TCQ) is a structured VQ that achieves better performance than SQ with modest computational complexity. It is shown in  that for memoryless uniform sources, a 4-state TCQ can achieve 0.87 dB higher SNR than SQ at 4 bits/sample.
In this paper, motivated by the superior performance of TCQ over SQ in traditional image coding, we propose to use TCQ to replace the commonly used SQ in a deep learning based image compression model. The soft-to-hard strategy  is applied to allow for backpropagation during training. To the best of our knowledge, we are the first to investigate the performance of TCQ in a deep learning based image compression framework. Our implementation allows for batch processing amenable to the mini-batch training of deep learning models, which greatly reduces the training time.
Entropy coding can further reduce the bitrate without impacting the reconstruction performance. One way to apply it in a deep learning model is to use an offline entropy coding method during testing. This method is not optimized for the bitrate, as the network is not explicitly designed to minimize the entropy. In this paper, we adopt PixelCNN++ to model the probability density function over the pixels of an image x from all channels as p(x) = ∏_i p(x_i | x_1, …, x_{i-1}), where the conditional probability of a pixel depends only on the pixels above it and to its left in the image. A cross entropy loss is then used to estimate the entropy of the quantized representation, so that the R-D function can be minimized jointly.
Our contributions are summarized as follows. We propose to incorporate TCQ into a deep learning based image compression framework. The framework consists of encoder, decoder and entropy estimation subnetworks, which are optimized in an end-to-end manner. We experiment on two commonly used datasets, and the results on both show that our model achieves superior performance at low bit rates. We also compare TCQ and SQ based on the same baseline model and demonstrate the advantage of TCQ.
2 Related Work
There has been a line of research on deep learning based image compression, especially autoencoders with a bottleneck to learn compact representations. The encoder maps the image data to the latent space with reduced dimensionality, and the decoder reconstructs the original image from the latent representation.
2.1 Quantization in DNN
In , a binarization layer is designed for the forward pass, and the gradients are defined based on a proxy of the binarizer. Ballé et al.  stochastically round the given values by adding noise and use the resulting continuous function to compute the gradients during the backward pass. Theis et al.  extend the binarizer of  to integers and use a straight-through estimator in the backward pass. In , a soft quantization is proposed for both the forward and backward passes; the model learns the centers and changes from soft quantization to hard assignments during training by an annealing strategy. In , the authors apply nearest-neighbor assignments to fixed centers, while the soft quantization of  is used during the backward pass.
2.2 Image Compression based on DNN
With the quantizer being differentiable, in order to jointly minimize the bitrate and distortion, we also need to make the entropy differentiable. For example, in [4, 18], uniform noise is added to the quantizer output. The density function of this relaxed formulation is continuous and can be used as an approximation of the entropy of the quantized values. In , similar to the soft quantization strategy, a soft entropy is designed by summing up the partial assignments to each center instead of counting. In [14, 11], an entropy coding scheme is trained to learn the dependencies among the symbols in the latent representation by using a context model. These methods allow jointly optimizing the R-D function.
3 Proposed Approach
Our model follows the encoder-decoder framework. Different from the previous works that apply a uniform scalar quantizer (SQ) after the encoder network, we propose to use trellis coded quantizer (TCQ) to enhance the reconstruction performance. The whole framework is trained jointly with our entropy model.
3.1 Encoder and Decoder
Since our goal is to study the gain of TCQ over SQ, we only use a simple encoding and decoding framework. Our encoder network consists of three convolutional layers with a stride of 2 to downsample the input. Each convolutional layer is followed by a ReLU layer. We remove BatchNorm layers, as we find that removing them gives better reconstruction performance. We add one more convolutional layer to reduce the channel dimension to a small value (e.g., 8) to obtain a condensed feature representation. A tanh layer follows to project the features to continuous values between -1 and 1. Then a quantizer is applied to quantize the feature maps to discrete values. For the decoder network, we use a PixelShuffle  layer for upsampling. Inspired by , we adopt two intermediate losses after each upsampling operation to force the network to generate images from low resolution to high resolution progressively, as shown in Fig. 1.
3.2 Trellis Coded Quantizer
Forward Pass: The trellis coded quantizer (TCQ) is applied in JPEG2000 part II. Different from JPEG2000, where the input to TCQ is fixed given an image block, when embedded in a deep neural network the input to TCQ is updated in each iteration during training. The forward pass for TCQ is similar to the original implementation in . In essence, TCQ aims to find the path with minimum distortion from the first symbol to the last symbol through a particular trellis structure. Figure 3 shows a trellis structure with 4 states. For R bits/symbol, a quantizer with 2^(R+1) quantization levels is created. These reconstruction points can be obtained by a uniform quantizer. As the last layer of our encoder is a tanh function, the maximum is 1 and the minimum is -1, so the quantization step is Δ = 2 / (2^(R+1) - 1), and a reconstruction point q_i (i = 0, …, 2^(R+1) - 1) is obtained by q_i = -1 + iΔ. Next, all the reconstruction levels are partitioned into four subsets from left to right to form four sub-quantizers. Different subsets are then assigned to different branches of the trellis, so that different paths through the trellis can try different combinations to encode an input sequence. Each node only needs to record the incoming branch that has the smallest cost. After obtaining the minimum distortion for the last symbol, we trace back to get the optimal path, shown in red in Fig. 3 for instance. With this optimal path, 1 bit is used to indicate which branch to take for the next symbol, and the remaining R - 1 bits are used to indicate the index of the codeword from the corresponding sub-quantizer. We call this indexing method I.
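The forward pass described above can be sketched as a small Viterbi search. This is a minimal NumPy sketch under stated assumptions: the shift-register trellis and the subset-to-branch assignment below are one common convention, not necessarily the exact JPEG2000 Part II labeling used in the paper.

```python
import numpy as np

def tcq_forward(x, R=2):
    """Viterbi search for trellis-coded quantization of a sequence x in [-1, 1].

    For R bits/symbol, a uniform codebook with 2**(R+1) levels is built and
    split into four interleaved sub-quantizers D0..D3.
    """
    n_levels = 2 ** (R + 1)
    delta = 2.0 / (n_levels - 1)
    levels = -1.0 + delta * np.arange(n_levels)        # uniform reconstruction points
    subsets = [levels[i::4] for i in range(4)]         # D0..D3 by interleaving

    branch_subsets = [(0, 2), (1, 3), (0, 2), (1, 3)]  # usable subsets per state
    T, n_states = len(x), 4
    cost = np.full(n_states, np.inf)
    cost[0] = 0.0                                      # start from state 0
    back = np.zeros((T, n_states, 2), dtype=int)       # (prev_state, level index)

    for t in range(T):
        new_cost = np.full(n_states, np.inf)
        for s in range(n_states):
            if not np.isfinite(cost[s]):
                continue
            for b in (0, 1):                           # two outgoing branches
                sub = branch_subsets[s][b]
                j = int(np.argmin((subsets[sub] - x[t]) ** 2))
                c = cost[s] + (subsets[sub][j] - x[t]) ** 2
                ns = ((s << 1) | b) & 3                # shift-register next state
                if c < new_cost[ns]:
                    new_cost[ns] = c
                    back[t, ns] = (s, j)
        cost = new_cost

    # trace back along the minimum-distortion path
    s = int(np.argmin(cost))
    out = np.empty(T)
    for t in range(T - 1, -1, -1):
        ps, j = back[t, s]
        out[t] = subsets[branch_subsets[ps][s & 1]][j]  # branch bit is the low bit of s
        s = ps
    return out
```

Each trellis node keeps only its best incoming branch, so the search is linear in the sequence length with a constant factor of 4 states × 2 branches.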
Backward Pass: To make a quantizer differentiable, the most common approach is the straight-through estimator , where the derivative of the quantizer is set to 1. However, we find that this backward method tends to converge slowly for TCQ. As TCQ changes the distribution of the input data, this inconsistency may make it hard for the network to update the weights in the right direction. Similar to , given the reconstruction points q_i (i = 0, …, 2^(R+1) - 1), we use a differentiable soft quantization during the backward pass: z̃ = Σ_i q_i · exp(-σ(z - q_i)²) / Σ_j exp(-σ(z - q_j)²), where σ is a hyperparameter that adjusts the "softness" of the quantization.
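The soft assignment used in the backward pass can be sketched as below (a minimal NumPy sketch of the relaxation, not the training-graph implementation; `sigma` is the softness hyperparameter and `centers` plays the role of the reconstruction points):

```python
import numpy as np

def soft_quantize(z, centers, sigma=1.0):
    """Differentiable soft assignment to reconstruction points.

    Each input is replaced by a softmax-weighted sum of the centers; larger
    sigma makes the weights sharper and approaches hard quantization.
    """
    z = np.asarray(z, dtype=float)[..., None]       # (..., 1) for broadcasting
    d2 = (z - centers) ** 2                         # squared distance to each center
    w = np.exp(-sigma * d2)
    w /= w.sum(axis=-1, keepdims=True)              # soft assignment weights
    return (w * centers).sum(axis=-1)
```

With sigma → ∞ the output snaps to the nearest center; with sigma = 0 every input collapses to the mean of the centers, which is why sigma must be tuned.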
Discussion: One issue with the TCQ implementation is that the time and memory complexity are both proportional to the number of symbols. Previous implementations usually flatten the input block into a single sequence. Because pixels within one feature map are more correlated than pixels across feature maps, we instead treat each feature map as one input sequence for TCQ. For feature maps of size (B, C, H, W), where B is the batch size for the network, C is the number of channels, and H and W are the height and width, we reshape them to (B×C, H×W), where B×C is the batch size for TCQ and H×W is the number of symbols in a feature map, which greatly reduces the processing time.
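The reshape that turns a network batch into a TCQ batch is a one-liner; the sizes below are illustrative:

```python
import numpy as np

# Encoder output of size (B, C, H, W): B network-batch images, C channels.
# Each H*W feature map becomes one TCQ input sequence, so TCQ processes a
# batch of B*C sequences of H*W symbols in parallel.
B, C, H, W = 2, 8, 16, 16                   # illustrative sizes
fmaps = np.random.randn(B, C, H, W)
tcq_batch = fmaps.reshape(B * C, H * W)
print(tcq_batch.shape)                      # (16, 256)
```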
The other issue is that the conventional indexing method I described above introduces randomness into the indices of a feature map, as shown in Fig. 3 (a). The reason is that the branch bit depends on the optimal path through the trellis and carries no relationship among the symbols. From JPEG2000 , we have two union-quantizers A0 = D0 ∪ D2 and A1 = D1 ∪ D3. As pointed out in , given a node in the trellis, the codeword that can be chosen is either from A0 or from A1. Therefore, because of the particular structure of the trellis, all R bits can be used to represent the indices for the union-quantizer A0, and the same applies to A1. For example, in Fig. 3, assume we receive the initial state during decoding. Only the A0 or the A1 union-quantizer can be chosen for this symbol. As the indices within A0 and within A1 are all different, we obtain the corresponding unique codeword from the received R bits. We then know which sub-quantizer was chosen and, accordingly, the branch number. We call this indexing method II. Fig. 3 (b) gives the indices of a feature map resulting from indexing method II.
3.3 Entropy Coding Model
The aforementioned autoencoder model is not optimized for entropy coding. We can model the conditional probability distribution of a symbol based on its context. The context must be related only to previously decoded symbols and must not use later, unseen symbols. We employ the PixelCNN++  model for entropy coding. We replace the last layer of the PixelCNN++ implementation (https://github.com/pclucas14/pixel-cnn-pp) with a softmax function so that a cross entropy loss can be used during training. This loss is viewed as an estimate of the entropy of the quantized latent representation. Assume we have R bits to encode each symbol and an H×W×C dimensional feature map; the PixelCNN++ model then outputs an H×W×C×2^R probability matrix. Encoding is done row by row, and each row is processed from left to right. With this probability matrix, we encode the indices of the feature maps by Adaptive Arithmetic Coding (AAC, https://github.com/nayuki/Reference-arithmetic-coding)
to get the compressed representation. During decoding, for the first forward pass, we feed the pre-trained PixelCNN++ model a tensor of all zeros. This first pass gives the distributions for the entries at the first position of the feature map. We then decode the indices along the channel dimension by AAC and, based on the received initial states, recover the symbols at that position. Each following decoding step conditions on a tensor that contains the decoded symbols at the already-visited locations and zeros otherwise, proceeding in raster order: within a row we move one position to the right, and at the end of a row we move to the beginning of the next row. As decoding proceeds, the remaining zeros are replaced by the decoded symbols progressively.
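This progressive decoding loop can be sketched as follows. Here `model` is a stand-in for the pre-trained PixelCNN++, and the argmax stands in for the arithmetic decoder (which would consume the bitstream instead); both substitutions are assumptions for illustration.

```python
import numpy as np

def autoregressive_decode(model, H, W, C):
    """Decode a feature map in raster order, starting from an all-zero tensor.

    `model(x)` returns per-position symbol distributions of shape
    (H, W, C, n_symbols); after each forward pass, all channels at the
    current position are decoded before moving right (or to the next row).
    """
    x = np.zeros((H, W, C))
    for i in range(H):                       # rows, top to bottom
        for j in range(W):                   # columns, left to right
            probs = model(x)                 # conditional distributions given x
            x[i, j] = probs[i, j].argmax(axis=-1)
    return x
```

Note that decoding requires one forward pass per spatial position, which is why autoregressive entropy models are slow at decode time.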
4 Experiments
4.1 Datasets
We use the ADE20K dataset  for training and validation. We test on the Kodak PhotoCD image dataset (http://r0k.us/graphics/kodak/) and the Tecnick SAMPLING dataset . The ADE20K dataset contains 20K training and 2K validation images. The Kodak PhotoCD and Tecnick SAMPLING datasets include 24 images of size 512×768 and 100 images of size 1200×1200, respectively.
4.2 Training Details
We crop each input image to 256×256 during training and test on the whole images. During training, we use a learning rate of 0.0001 at the beginning and decrease it by a factor of 0.4 at epochs 80, 100 and 120. Training is stopped at 140 epochs, and we use the model that gives the best validation result for testing. We set the batch size to 18 and run the training on one 12G GTX TITAN GPU with the Adam optimizer. We use 4 quantization levels and increase the channel size to control the bitrate. Compression performance is evaluated with Multi-Scale Structural Similarity (MS-SSIM) against bits per pixel (bpp), and we use the MS-SSIM loss in Eq. 3 during training.
The first term is the distortion error and the second term is the cross entropy loss for the PixelCNN++ model. The tradeoff hyperparameter between the two terms is set to 1.
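The objective in Eq. 3 can be sketched as below; MSE stands in for the MS-SSIM distortion term, and the symbol probabilities would come from the PixelCNN++ softmax (names and shapes are illustrative assumptions):

```python
import numpy as np

def rd_loss(x, x_hat, sym_probs, sym_idx, lam=1.0):
    """Joint objective: distortion + lam * estimated rate.

    sym_probs has shape (N, n_symbols); sym_idx holds the N quantized symbol
    indices. The cross entropy of the symbols under the model distribution
    estimates the bitrate of the quantized representation.
    """
    distortion = np.mean((x - x_hat) ** 2)            # stand-in for MS-SSIM loss
    picked = sym_probs[np.arange(len(sym_idx)), sym_idx]
    rate = -np.mean(np.log2(picked))                  # bits/symbol estimate
    return distortion + lam * rate
```

Minimizing this single scalar updates the encoder, decoder and entropy model together, which is what makes the training end-to-end.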
We compare our results with conventional codecs and recent deep learning based compression models. JPEG  results are obtained with ImageMagick (https://imagemagick.org). JPEG2000 results are from the MATLAB implementation, and BPG results are based on the 4:2:0 chroma format (http://bellard.org/bpg). For deep learning based image compression models, we either use the released test results or digitize the rate-distortion curves from the published papers.
[Table 1: MS-SSIM and PSNR of TCQ vs. SQ on the Kodak and Tecnick datasets, trained with the MS-SSIM loss.]
[Table 2: MS-SSIM and PSNR of TCQ vs. SQ on the Kodak and Tecnick datasets, trained with the MSE loss.]
[Figure 5: (a) Original, (b) JPEG, (c) JPEG2000, (d) Ballé et al., (e) BPG, (f) Ours (SQ), (g) Ours (TCQ).]
4.4 Comparisons with previous works
Fig. 4 compares our approach with other image compression algorithms (Theis et al. , Ballé et al. , Agustsson et al. , Johnston et al. , Li et al. , Mentzer et al. , Cheng et al. ) on the two datasets. Despite the simplicity of our network, our model with TCQ shows superior performance at low bit rates. At high bit rates, our results achieve performance comparable to previous work, except for the latest results of Mentzer et al.  and Cheng et al. . This is probably because at high bit rates we increase the number of channels of the model but do not finetune the training parameters.
4.5 Comparisons between TCQ and SQ
In Tab. 1, we compare the MS-SSIM and PSNR of TCQ and SQ when training with the MS-SSIM loss. At the low bit rate (around 0.07 bpp), TCQ achieves gains of 0.008 in MS-SSIM (0.41 dB in PSNR) and 0.005 in MS-SSIM (0.68 dB in PSNR) over SQ on the Kodak and Tecnick datasets, respectively. We notice that at higher bit rates the performance gap between TCQ and SQ is less pronounced. As the number of channels increases, the learning capacity of the model improves as well, so the type of quantizer may matter less for more complex models.
In Tab. 2, we compare TCQ and SQ using the MSE loss as the distortion during training, with the tradeoff hyperparameter set to 0.01. A similar trend is observed: TCQ outperforms SQ at the same bit rate.
The PixelCNN++ model used in this paper is not optimal for entropy coding. In , a context model along with a hyper-network is used to predict the mean and scale of a set of Gaussian models, which saves more bits than directly using the probability matrix. In our experiment, entropy coding brings the 8-channel model from 0.25 bpp before entropy coding down to 0.154 bpp on the Kodak dataset.
4.6 Qualitative Comparisons
In Fig. 5, we show results from different codecs. Fig. 5 (a) is the original image. In (b), compression artifacts are clearly visible in the JPEG reconstruction. In (c), (d) and (e), the shape of the cloud is very blurry. For BPG in (e), there are also some block artifacts in the green-box sample. We notice that in (b)-(e) the sky lacks the striped cloud patterns at the upper left corner, and there are fewer ripples in the areas below the trees. Our results in (f) and (g) show generally better perceptual quality.
5 Conclusion
In this paper, we incorporate TCQ into an end-to-end deep learning based image compression framework. Experiments show that our model achieves results comparable to previous work. The comparisons between TCQ and SQ show that TCQ boosts both PSNR and MS-SSIM over SQ at low bit rates, whether the MSE loss or the MS-SSIM loss is used for training.
-  (2017) Soft-to-hard vector quantization for end-to-end learning compressible representations. In Advances in Neural Information Processing Systems, pp. 1141–1151. Cited by: §1, §2.1, §2.2, §4.4.
-  (2019) DSSLIC: deep semantic segmentation-based layered image compression. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2042–2046. Cited by: §1.
-  (2014) TESTIMAGES: a large-scale archive for testing visual devices and basic image processing algorithms.. In Eurographics Italian Chapter Conference, Vol. 1, pp. 3. Cited by: §4.1.
-  (2016) End-to-end optimized image compression. arXiv preprint arXiv:1611.01704. Cited by: §1, §2.1, §2.2, Figure 5, §4.4.
Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432. Cited by: §3.2.
-  (2019) Learning image and video compression through spatial-temporal energy compaction. In , pp. 10071–10080. Cited by: §4.4.
-  (2016) Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778. Cited by: Figure 1.
Batch renormalization: towards reducing minibatch dependence in batch-normalized models. In Advances in neural information processing systems, pp. 1945–1953. Cited by: §3.1.
-  (2000-12) Information technology – JPEG 2000 image coding system: core coding system. Standard, International Organization for Standardization. Cited by: §1, §3.2, §3.2.
-  (2018) Improved lossy image compression with priming and spatially adaptive bit rates for recurrent networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4385–4393. Cited by: §4.4.
-  (2018) Learning convolutional networks for content-weighted image compression. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3214–3223. Cited by: §2.1, §2.2, §4.4.
-  (1990) Trellis coded quantization of memoryless and Gauss-Markov sources. IEEE Transactions on Communications 38 (1), pp. 82–93. Cited by: §1.
-  (1994) On entropy-constrained trellis coded quantization. IEEE Transactions on Communications 42 (1), pp. 14–16. Cited by: §3.2.
-  (2018) Conditional probability models for deep image compression. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4394–4402. Cited by: §1, §2.1, §2.2, §3.2, §3.3, §4.4.
-  (2018) Joint autoregressive and hierarchical priors for learned image compression. In Advances in Neural Information Processing Systems, pp. 10771–10780. Cited by: §4.5.
-  (2017) PixelCNN++: improving the PixelCNN with discretized logistic mixture likelihood and other modifications. arXiv preprint arXiv:1701.05517. Cited by: §1, §3.3.
-  (2016) . In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1874–1883. Cited by: §3.1.
-  (2017) Lossy image compression with compressive autoencoders. arXiv preprint arXiv:1703.00395. Cited by: §1, §2.1, §2.2, §4.4.
Variable rate image compression with recurrent neural networks. arXiv preprint arXiv:1511.06085. Cited by: §2.1.
-  (1992) The JPEG still picture compression standard. IEEE Transactions on Consumer Electronics 38 (1), pp. xviii–xxxiv. Cited by: §4.3.
-  (2018) AttnGAN: fine-grained text to image generation with attentional generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1316–1324. Cited by: §3.1.
-  (2017) Scene parsing through ADE20K dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Cited by: §4.1.