Scalable coding differs from non-scalable coding in that its coded bitstream is partially decodable. That is, scalable image compression allows reconstructing complete images at more than one quality level by decoding appropriate subsets of the whole bitstream, a property called the “bitstream scalability”. Compared with a simulcast codec, a scalable codec produces a cumulative set of hierarchical representations that can be combined for progressive refinement, instead of producing a multi-rate set of mutually independent signals. Scalable/progressive image compression is therefore of great significance for image transmission and storage in practice.
With the prosperity of deep learning, DNN-based models for lossy image compression [2, 3, 4, 5, 6, 7, 8, 9] have been widely explored recently. Toderici et al. and Baig et al. study the design of network architectures for deep image compression. Ballé et al. and Agustsson et al. introduce trainable quantization methods to help achieve end-to-end optimization. The works of [3, 7, 10, 4, 11] investigate context models to improve the compression efficiency of arithmetic coding. In addition, a biologically-inspired joint nonlinearity, named generalized divisive normalization (GDN), is proposed in . Moreover, the structure of side information in learned image compression is well studied in [8, 9]. In particular, Agustsson et al. employ generative models to improve the perceptual performance of learned image compression. Scalability, as a critical property, has not drawn much explicit attention in deep-learning-based schemes, although it is supported by many prevailing conventional image/video compression standards [13, 14, 15].
In terms of related works based on deep learning, Gregor et al. introduce a novel hierarchical representation of images with a homogeneous deep generative model, which is considered a “conceptual compression” framework rather than a real compressor. The framework proposed by Toderici et al. can be viewed as the first DNN-based image compression model supporting bitstream scalability, in which recurrent neural networks (RNNs) are employed to iteratively compress the residual of the last reconstruction relative to the original image. However, it still suffers from limited rate-distortion performance and a complex encoding-decoding process due to the multi-iteration encoding-decoding within it.
In this paper, we are devoted to developing a more effective learned scalable image compression scheme. Besides obtaining better rate-distortion performance, we also aim to enable reconstructed images with different quality levels to be obtained simultaneously via one-pass encoding-decoding. Inspired by the Fine Granularity Scalability (FGS)  in the MPEG-4 video standard, we adopt bit-plane decomposition to decompose the information before the input layer of the neural networks. Bit-plane decomposition has an inherent advantage in transforming an image into a hierarchical representation: an RGB image can be transformed into 24 bit-planes losslessly (8 bit-planes per channel). Two significant things can be observed: firstly, the sum of the information entropy  (shortly called “entropy”) of all bit-planes always exceeds the entropy of the corresponding original image; secondly, different bit-planes are not equal in their entropy. Theoretically, the information carried by a sequence of independent events is the sum of the information carried by each event, so the excess of the summed bit-plane entropies over the image entropy indicates that there should be correlation among different bit-planes, which is hard to consider well in conventional bit-plane coding. In addition, the information carried by different bit-planes is asymmetrical due to their unequal entropy volumes. In this work, we make the first endeavour to employ deep neural networks to capture the correlation among bit-planes in the coding process. Moreover, for information of different importance for reconstruction, we design a self-consistent architecture to disentangle it into hierarchical representations with end-to-end optimization.
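The first observation above can be checked numerically. The sketch below is our own illustration, not code from the paper: it measures the empirical Shannon entropy of a random 8-bit channel and of its eight bit-planes, and verifies that the summed bit-plane entropies are never below the entropy of the channel itself (by subadditivity of entropy).

```python
import numpy as np

def shannon_entropy(values, num_levels):
    """Empirical Shannon entropy (bits per symbol) of a discrete array."""
    counts = np.bincount(values.ravel(), minlength=num_levels)
    p = counts[counts > 0] / values.size
    return float(-(p * np.log2(p)).sum())

# A toy 8-bit channel standing in for one channel of an RGB image.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

# Decompose into 8 bit-planes: plane k holds bit k of every pixel.
planes = [(img >> k) & 1 for k in range(8)]

h_img = shannon_entropy(img, 256)                   # entropy of the channel
h_planes = [shannon_entropy(p, 2) for p in planes]  # entropy per bit-plane

# Summed marginal entropies bound the joint (channel) entropy from above.
assert sum(h_planes) >= h_img - 1e-9
```

On natural images (unlike this uniform toy), the per-plane entropies also differ markedly between significant and less significant planes, matching the second observation.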
In summary, we make three main contributions: (1) We propose a new DNN-based framework for learned scalable/progressive image compression, which enables us to obtain compressed results corresponding to multiple bitrates simultaneously through one-pass encoding and decoding. Note that the only previous DNN-based image codec  that supports bitstream scalability requires multi-iteration encoding and decoding to obtain compressed results with different quality levels. (2) We propose to bring the idea of bit-plane coding into a learnable scalable image codec, which benefits information decomposition for more effective hierarchical representation. (3) Within our proposed model, we design an LSTM-based architecture to disentangle the information of different bit-planes and achieve end-to-end optimization for better rate-distortion performance, which goes beyond the regular use of LSTMs . Our proposed method greatly outperforms the state-of-the-art DNN-based scalable image codec in both PSNR and MS-SSIM metrics.
2 Proposed Method
We propose a deep-learning-based framework for scalable/progressive image compression. Within this framework, we adopt bit-plane decomposition to perform information decomposition coarsely and design two bidirectional gated units to disentangle the contextual information precisely.
2.1 Scalable Compression Framework
Bit-plane decomposition. As illustrated in Fig.1.(a), for an RGB image, we transform each of its channels into $N$ bit-planes through bit-plane decomposition, where $N$ can be viewed as the so-called bit-depth. In this paper, we set $N = 8$ for RGB images whose pixels lie in the range [0, 255]. For clarity, we represent the $k$-th bit-planes of the R, G, and B channels as $B^R_k$, $B^G_k$, and $B^B_k$, respectively. Illustratively, we denote the pixel located at $(i, j)$ in channel $c$ as $x^c_{i,j}$; then we can obtain its corresponding value $b^{c,k}_{i,j}$ in the $k$-th bit-plane as below:
$$b^{c,k}_{i,j} = \left\lfloor \frac{x^c_{i,j}}{2^{k-1}} \right\rfloor \bmod 2, \quad k = 1, \dots, N, \eqno{(1)}$$
where $\lfloor \cdot \rfloor$ is the function that returns the greatest integer less than or equal to its argument. Inversely, we can reconstruct the original image from the bit-planes by the following formula:
$$x^c_{i,j} = \sum_{k=1}^{N} 2^{k-1} \, b^{c,k}_{i,j}. \eqno{(2)}$$
By the operation described in Eq.1, the original information from the RGB image is unevenly scattered into eight correlated but heterogeneous sub-spaces. Meanwhile, Eq.2 shows that each bit-plane is of different importance for reconstruction, since the bit-planes enter the sum with unequal weights. In addition, since the information entropy of each bit-plane is not equal, the information volume carried by each bit-plane is also different.
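Eq.1 and Eq.2 can be sketched in a few lines of numpy; function names here are ours and the sketch assumes an 8-bit single channel. It confirms that the decomposition is lossless.

```python
import numpy as np

def decompose(channel):
    """Eq.1: split an 8-bit channel (H, W) into 8 binary planes, LSB first."""
    return np.stack([(channel >> k) & 1 for k in range(8)], axis=0)

def reconstruct(planes):
    """Eq.2: weight each plane by 2**k and sum to recover the channel."""
    weights = (2 ** np.arange(8)).reshape(8, 1, 1)
    return (planes * weights).sum(axis=0).astype(np.uint8)

rng = np.random.default_rng(42)
channel = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)
planes = decompose(channel)

assert planes.shape == (8, 4, 4)
assert np.array_equal(reconstruct(planes), channel)  # decomposition is lossless
```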
Encoder. Taking the bit-planes as the input of the encoder, we design a multi-branch architecture to learn the hierarchical representations. The network layers in each branch do not share weights with the layers in other branches. As shown in Fig.1.(a), we leverage one convolutional layer to perform a preliminary transformation for each bit-plane independently, followed by three layers consisting of BAG-Units to further transform the carried information and yield the feature map partitions. Notice that there are bidirectional information flows between the BAG-Units in adjacent branches via their hidden states. In both the first convolutional layers and the BAG-Units, we use convolutions with a stride of 2 to achieve spatial down-sampling of the feature maps. The quantization module includes a convolutional layer with a stride of 1, a tanh activation, and the binarization function defined in . At the end of the encoder, we define a switch function located in each branch. When the switch is “on”, the corresponding feature map is retained as one part of the compressed codes before entropy coding; when the switch is “off”, the corresponding feature map is filled with zero values in the compressed codes before entropy coding. Finally, we apply the entropy coding method of  to the codes from each branch individually to obtain the final compressed codes.
In terms of transmission, only the compressed codes whose switches are in the “on” state need to be transmitted from the sender to the receiver. The sum of the rates of all final compressed codes determines the compression rate and the highest reconstructed quality level in our scalable framework. The “basic bitrate”, namely the minimal coding rate achievable for one trained model before entropy coding, depends on the size of the binary feature map after quantization per branch in the encoder network. Some related recommended settings are listed in our supplementary materials.
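The switch mechanism above can be sketched as a per-branch mask over the quantized codes. Shapes and names below are illustrative assumptions, not the paper's implementation; binary codes are taken to lie in {-1, 1} after the tanh/binarization stage.

```python
import numpy as np

num_branches, c, h, w = 8, 4, 2, 2
rng = np.random.default_rng(0)
# Quantized binary codes, one feature map per branch.
codes = rng.choice([-1, 1], size=(num_branches, c, h, w)).astype(np.float32)

def apply_switches(codes, switches):
    """Keep branches whose switch is on; zero-fill the rest."""
    mask = np.asarray(switches, dtype=np.float32).reshape(-1, 1, 1, 1)
    return codes * mask

# Transmit only the first three branches, i.e. the first three quality levels.
switches = [1, 1, 1, 0, 0, 0, 0, 0]
kept = apply_switches(codes, switches)

assert np.all(kept[3:] == 0)                # "off" branches carry no information
assert np.array_equal(kept[:3], codes[:3])  # "on" branches are untouched
```

Turning more switches on adds refinement branches to the bitstream, which is exactly how the single trained model covers multiple bitrates.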
Decoder. In our decoder, we leverage convolutional layers with a stride of 1 to tune the dimensions of the feature maps at the beginning and the end of decoding, respectively. We then also use a multi-branch architecture to disentangle the contextual information for reconstruction during decoding. Different from the down-sampling method in the encoder, here we use pixel shuffle, a depth-to-space operation, to implement spatial up-sampling. The same switch function in our decoder is used for controlling the quality level of the reconstructed image.
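For reference, a minimal pixel-shuffle (depth-to-space) operation over a single (C, H, W) feature map can be written as below; this is our own sketch of the standard operation (equivalent in layout to `torch.nn.PixelShuffle`), not the paper's code.

```python
import numpy as np

def pixel_shuffle(x, r):
    """Depth-to-space: (C*r*r, H, W) -> (C, H*r, W*r)."""
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    x = x.reshape(c, r, r, h, w)      # split depth into an r x r sub-block
    x = x.transpose(0, 3, 1, 4, 2)    # interleave: (c, h, r, w, r)
    return x.reshape(c, h * r, w * r)

x = np.arange(16, dtype=np.float32).reshape(4, 2, 2)  # 4 channels of 2x2
y = pixel_shuffle(x, 2)
assert y.shape == (1, 4, 4)
```

Each group of r*r input channels is folded into an r x r spatial block, trading depth for resolution without any learned parameters.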
2.2 Contextual Information Disentanglement
In this section, we elaborate the architecture design of the BAG-Unit and IBAG-Unit, which play the role of disentangling contextual information in our scalable compression framework. We design a modified version of the bidirectional convolutional LSTM  as the gated unit in the BAG-Unit and IBAG-Unit, going beyond the regular use of LSTMs.
The information of each bit-plane is heterogeneous with respect to the others. We therefore propose to abandon the recurrent connections of LSTM units by using different units with unshared weights. Mathematically, let $x_n$, $c_n$, $h_n$, and $o_n$ denote the input, cell, hidden, and output states of the BAG-Unit/IBAG-Unit in the $n$-th branch (see (b) and/or (c) in Fig.1). Clearly, we use arrows above the symbols (e.g., $\vec{h}_n$ and $\overleftarrow{h}_n$) to distinguish the two directions of the information flows between adjacent BAG-Units/IBAG-Units. For the paired gated units within the BAG-Unit/IBAG-Unit in the $n$-th branch, their cell, hidden and output states can be updated as follows:
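In standard convolutional LSTM notation, the forward-direction updates for the gated unit of the $n$-th branch can be sketched as follows (this is our reconstruction of the standard form; the per-branch superscript $(n)$ marks the unshared weights, and bias terms are omitted):

```latex
\vec{i}_n = \sigma\!\left(W^{(n)}_{xi} * x_n + W^{(n)}_{hi} * \vec{h}_{n-1}\right), \qquad
\vec{f}_n = \sigma\!\left(W^{(n)}_{xf} * x_n + W^{(n)}_{hf} * \vec{h}_{n-1}\right), \\
\vec{o}_n = \sigma\!\left(W^{(n)}_{xo} * x_n + W^{(n)}_{ho} * \vec{h}_{n-1}\right), \\
\vec{c}_n = \vec{f}_n \odot \vec{c}_{n-1}
          + \vec{i}_n \odot \tanh\!\left(W^{(n)}_{xc} * x_n + W^{(n)}_{hc} * \vec{h}_{n-1}\right), \qquad
\vec{h}_n = \vec{o}_n \odot \tanh(\vec{c}_n)
```

The backward-direction unit is defined symmetrically over the reversed branch order; the branch-indexed weights replace the weight sharing across time steps of a conventional recurrent LSTM.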
where $*$ denotes the convolutional operator, and $\odot$ denotes element-wise multiplication. The symbols $i$, $f$ and $o$ represent the input gate, forget gate and output gate respectively, and $x$ indicates the input of the gated unit. Additionally, $W$ with different subscripts denotes the weight matrices of the different convolutional transformations, and $\sigma$ denotes the sigmoid activation function. The output state $o_n$, which is also the input of the “SE” block, is the result of concatenating the hidden states $\vec{h}_n$ and $\overleftarrow{h}_n$ of the gated units in the two directions.
The gated units in the BAG-Unit/IBAG-Unit play two important roles in disentangling the information: (1) capturing the correlations among different bit-planes, which helps reduce the rate of the compact representations in compression; (2) helping to determine at which level of the feature partitions the information should be expressed, according to its relative importance.
After the gated units, we employ the “Squeeze-and-Excitation” module to introduce channel-wise attention for better fusing the information from the two directions. Then we use a convolutional layer with a stride of 1 to perform further transformation.
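A minimal “Squeeze-and-Excitation” block (Hu et al.) over a single (C, H, W) feature map is sketched below; weight shapes, the reduction ratio, and the absence of biases are our illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def se_block(x, w1, w2):
    """Channel attention: squeeze (global pool) -> FC -> ReLU -> FC -> sigmoid -> rescale."""
    s = x.mean(axis=(1, 2))                  # squeeze: per-channel statistic, shape (C,)
    e = sigmoid(w2 @ np.maximum(w1 @ s, 0))  # excite: channel attention weights in (0, 1)
    return x * e[:, None, None]              # channel-wise rescaling of the input

rng = np.random.default_rng(0)
c, r = 8, 2                                  # channels and reduction ratio (assumed)
x = rng.standard_normal((c, 5, 5))
w1 = rng.standard_normal((c // r, c))        # squeeze FC: C -> C/r
w2 = rng.standard_normal((c, c // r))        # excite FC: C/r -> C

y = se_block(x, w1, w2)
assert y.shape == x.shape
```

Since the attention weights lie in (0, 1), the block can only attenuate channels, which is what makes it a soft channel-selection mechanism for fusing the two directional hidden states.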
2.3 Training Algorithm
As a scalable image compression framework, our model is required to be optimized for hierarchical reconstructed results with different quality levels simultaneously during training. Therefore, we use a specific approach to train this model, in which each training step contains a one-pass forward process of the encoder, a multi-pass forward process of the decoder, and a one-pass backward process for parameter updating. Clearly, suppose that there are $K$ quality levels in all; the loss function can be depicted by the formula below:
$$\mathcal{L} = \sum_{i=1}^{K} \lambda_i \, D\!\left(x, \hat{x}_i\right),$$
where $\hat{x}_i$ denotes the reconstructed result at quality level $i$, obtained from the output of the $i$-th branch, and $D$ refers to the distance function, which is related to the distortion metrics used for evaluation. We weight the distortions under different code rates with coefficients $\lambda_i$, which are kept fixed in general. Typically, we take the L1 norm and MS-SSIM (proposed in ) as the distance functions to train our model in this paper.
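The multi-level loss can be sketched as below; this is our own illustration with the L1 norm as the distance $D$, and names and shapes are assumptions.

```python
import numpy as np

def scalable_loss(x, reconstructions, lambdas):
    """Weighted sum of L1 distortions over all K quality levels."""
    return sum(lam * np.abs(x - xr).mean()
               for lam, xr in zip(lambdas, reconstructions))

rng = np.random.default_rng(0)
x = rng.random((3, 8, 8))                       # original image (C, H, W)
recs = [x + 0.1 * rng.standard_normal(x.shape)  # K = 3 reconstructions,
        for _ in range(3)]                      # one per quality level
loss = scalable_loss(x, recs, lambdas=[1.0, 1.0, 1.0])

assert loss > 0.0
```

Optimizing all levels jointly is what allows a single backward pass to update the shared encoder for every quality level at once.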
3 Experiment Results
3.1 Datasets and Settings
We use two sets of training data to train our proposed model: the COCO dataset  and a dataset composed of thirty thousand RGB images we collected from the world wide web. For the first dataset, we obtain $s \times s$ (where $s$ can be taken as 32, 64 or 128) image patches for training by adopting the commonly used data augmentation strategies of random cropping and random horizontal flipping (with a probability of 0.5). For the second dataset, each image is first scaled by a random factor in [0.5, 1.5], followed by a random cropping and a random horizontal flipping (with a probability of 0.5). Then, we filter the obtained image patches by using the Sobel operator and the Canny operator to reduce the proportion of training samples with overly simple textures.
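A texture filter of this kind can be sketched as below; the specific criterion (mean Sobel gradient magnitude) and the rejection threshold are our assumptions, since the paper does not state them.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32)

def grad_energy(patch):
    """Mean gradient magnitude from valid 3x3 Sobel responses in x and y."""
    h, w = patch.shape
    gx = np.zeros((h - 2, w - 2), dtype=np.float32)
    gy = np.zeros((h - 2, w - 2), dtype=np.float32)
    for i in range(h - 2):
        for j in range(w - 2):
            win = patch[i:i + 3, j:j + 3].astype(np.float32)
            gx[i, j] = (win * SOBEL_X).sum()
            gy[i, j] = (win * SOBEL_X.T).sum()  # y kernel is the transpose
    return float(np.hypot(gx, gy).mean())

flat = np.full((16, 16), 128, dtype=np.uint8)            # textureless patch
rng = np.random.default_rng(0)
textured = rng.integers(0, 256, (16, 16), dtype=np.uint8)

assert grad_energy(flat) == 0.0                 # flat patches would be rejected
assert grad_energy(textured) > grad_energy(flat)
```

In practice one would keep only patches whose gradient energy exceeds some threshold, biasing the training set toward textured content.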
We implement a three-stage training procedure with different patch sizes at each stage for our proposed models. We first pre-train our model using patches from the first dataset and perform stochastic gradient descent with minibatches of 32, adopting the Adam optimizer. Then we train our model using patches from the second dataset. At this stage, we set the minibatch size to 32 and adopt the Adam optimizer with an initial learning rate and a weight decay. We finally perform fine-tuning with image patches from the second dataset. At this stage, we tune the coefficients of Eq.10 in the main text within a small range to improve the performance with respect to some specific bitrates.
3.2 Rate-distortion Performance
We evaluate our proposed models on the Kodak dataset and illustrate the best rate-distortion performance across multiple trained models under different bitrates in Fig.2. By involving bit-plane decomposition and disentangling the information with the BCD-Net, our proposed model achieves a significant improvement in both PSNR and MS-SSIM metrics across different bitrates compared to the current state-of-the-art DNN-based scalable image compression model. Relative to the conventional scalable image codec JPEG2000, our proposed model outperforms it in the MS-SSIM metric across different bitrates, and also shows an advantage in the PSNR metric at low bitrates.
3.3 Ablation Study
| Rate (bpp) & Distortion | 0.0625 (PSNR / MS-SSIM) | 0.125 (PSNR / MS-SSIM) | 0.1875 (PSNR / MS-SSIM) | 0.25 (PSNR / MS-SSIM) |
|---|---|---|---|---|
| (1) Unidirectional encoder-decoder | 22.6267 / 0.7630 | 25.3592 / 0.8448 | 27.1178 / 0.8840 | 27.7594 / 0.9016 |
| (2) With the regular use of LSTMs | 25.3584 / 0.8175 | 26.5429 / 0.8707 | 27.2956 / 0.8986 | 27.4378 / 0.9030 |
| (3) w/o bit-plane decomposition | 25.1074 / 0.8203 | 26.5585 / 0.8720 | 27.2947 / 0.8929 | 27.6120 / 0.9036 |
| (4) w/o the “SE” modules | 25.6104 / 0.8163 | 26.8601 / 0.8721 | 27.5019 / 0.8948 | 27.8937 / 0.9051 |
| (5) w/o the GDN/IGDN | 25.3693 / 0.8160 | 26.6676 / 0.8715 | 27.3263 / 0.8932 | 27.7023 / 0.9027 |
| (6) Fully-equipped BCD-Net | 25.8295 / 0.8297 | 27.3045 / 0.8785 | 27.9695 / 0.8999 | 28.3327 / 0.9101 |
To further investigate the effectiveness of the technical components within our proposed scheme, we conduct a series of experiments against the following cases: (1) We implement four different combinations of a unidirectional encoder and a unidirectional decoder, in which “E” and “D” denote the encoder and the decoder respectively, and arrows represent the two directions of the information flow; (2) We take LSTMs with recurrent connections as the gated units inside the BAG-Units and IBAG-Units; (3) We replace the bit-plane decomposition with convolution and slicing operations; (4) We remove the SE blocks from the BAG-Units and IBAG-Units; (5) We replace the GDN and IGDN inside the BAG-Units and IBAG-Units with the leaky ReLU non-linear activation function. For each experimental case, we train one model with the same basic bitrate in bits per pixel (bpp). All experimental cases here are optimized for the PSNR metric under the same training settings. The evaluation results on the Kodak dataset are reported in Table 1.
As shown in Table 1, the rate-distortion performance of the codec declines severely when we apply the unidirectional network topology in the encoder and decoder, which shows that the bidirectional information flow is crucial for context disentanglement. Bidirectional message passing helps determine at which level information of different importance should be expressed, and accounts for the correlations among the representations at different levels. The second experiment demonstrates that LSTMs with unshared parameters are more suitable than the regular use of LSTMs with recurrent connections for mapping the heterogeneous information hidden in different subspaces to latent representations. Also, we find that bit-plane decomposition is better than convolution and slicing operations in providing a coarse but effective information decomposition before the deep-learning-based transformation. The results of the ablation study also suggest that the “Squeeze-and-Excitation” block leads to better information fusion by introducing channel-wise attention. Additionally, similar to Ballé et al.'s work , GDN/IGDN is also effective within our scheme in simplifying learning by Gaussianizing image densities.
4 Conclusion
In this paper, we study deep-learning-based scalable image codecs. We propose to involve bit-plane decomposition in a DNN-based compression framework to decompose the original information coarsely. We then design the Bidirectional Context Disentanglement Network (BCD-Net) to learn more effective hierarchical representations for scalable/progressive compression. Consequently, our proposed model can compress and reconstruct images with different quality levels simultaneously through one-pass encoding-decoding, and it outperforms the state-of-the-art DNN-based scalable image codec in both PSNR and MS-SSIM metrics. It also outperforms the conventional scalable image codec in the MS-SSIM metric across different bitrates and in the PSNR metric at low bitrates.
-  Steven Ray McCanne and Martin Vetterli, Scalable compression and transmission of internet multicast video, University of California, Berkeley, 1996.
-  George Toderici, Sean M O’Malley, Sung Jin Hwang, Damien Vincent, David Minnen, Shumeet Baluja, Michele Covell, and Rahul Sukthankar, “Variable rate image compression with recurrent neural networks,” arXiv preprint arXiv:1511.06085, 2015.
-  Johannes Ballé, Valero Laparra, and Eero P Simoncelli, “End-to-end optimized image compression,” The 5th International Conference on Learning Representations, 2017.
-  George Toderici, Damien Vincent, Nick Johnston, Sung Jin Hwang, David Minnen, Joel Shor, and Michele Covell, “Full resolution image compression with recurrent neural networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 5306–5314.
-  Lucas Theis, Wenzhe Shi, Andrew Cunningham, and Ferenc Huszár, “Lossy image compression with compressive autoencoders,” The 5th International Conference on Learning Representations, 2017.
-  Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, and Luc V Gool, “Soft-to-hard vector quantization for end-to-end learning compressible representations,” in Advances in Neural Information Processing Systems, 2017, pp. 1141–1151.
-  Mohammad Haris Baig, Vladlen Koltun, and Lorenzo Torresani, “Learning to inpaint for image compression,” in Advances in Neural Information Processing Systems, 2017, pp. 1246–1255.
-  Johannes Ballé, David Minnen, Saurabh Singh, Sung Jin Hwang, and Nick Johnston, “Variational image compression with a scale hyperprior,” The 6th International Conference on Learning Representations, 2018.
-  David Minnen, Johannes Ballé, and George Toderici, “Joint autoregressive and hierarchical priors for learned image compression,” in Advances in Neural Information Processing Systems, 2018.
-  Oren Rippel and Lubomir Bourdev, “Real-time adaptive image compression,” arXiv preprint arXiv:1705.05823, 2017.
-  Fabian Mentzer, Eirikur Agustsson, Michael Tschannen, Radu Timofte, and Luc Van Gool, “Conditional probability models for deep image compression,” arXiv preprint arXiv:1801.04260, 2018.
-  Eirikur Agustsson, Michael Tschannen, Fabian Mentzer, Radu Timofte, and Luc Van Gool, “Generative adversarial networks for extreme learned image compression,” arXiv preprint arXiv:1804.02958, 2018.
-  Athanassios Skodras, Charilaos Christopoulos, and Touradj Ebrahimi, “The JPEG 2000 still image compression standard,” IEEE Signal Processing Magazine, vol. 18, no. 5, pp. 36–58, 2001.
-  Weiping Li, “Overview of fine granularity scalability in MPEG-4 video standard,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 11, no. 3, pp. 301–317, 2001.
-  Yan Ye and Pierre Andrivon, “The scalable extensions of HEVC for ultra-high-definition video delivery,” IEEE MultiMedia, vol. 21, no. 3, pp. 58–64, 2014.
-  Karol Gregor, Frederic Besse, Danilo Jimenez Rezende, Ivo Danihelka, and Daan Wierstra, “Towards conceptual compression,” in Advances In Neural Information Processing Systems, 2016, pp. 3549–3557.
-  Aaron Wyner, “Recent results in the Shannon theory,” IEEE Transactions on Information Theory, vol. 20, no. 1, pp. 2–10, 1974.
-  Sepp Hochreiter and Jürgen Schmidhuber, “Long short-term memory,” Neural Computation, vol. 9, no. 8, pp. 1735–1780, 1997.
-  Jie Hu, Li Shen, and Gang Sun, “Squeeze-and-excitation networks,” arXiv preprint arXiv:1709.01507, 2017.
-  Qingshan Liu, Feng Zhou, Renlong Hang, and Xiaotong Yuan, “Bidirectional-convolutional LSTM based spectral-spatial feature learning for hyperspectral image classification,” Remote Sensing, vol. 9, no. 12, pp. 1330, 2017.
-  Zhou Wang, Eero P Simoncelli, and Alan C Bovik, “Multiscale structural similarity for image quality assessment,” in The Thirty-Seventh Asilomar Conference on Signals, Systems and Computers. IEEE, 2003, vol. 2, pp. 1398–1402.
-  Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick, “Microsoft COCO: Common objects in context,” in European Conference on Computer Vision. Springer, 2014, pp. 740–755.