Recent years have witnessed the great potential of deep learning for video compression. In this paper, we propose the Hierarchical Learned Video Compression (HLVC) approach with three hierarchical quality layers and recurrent enhancement. Specifically, the frames in the first layer are compressed by an image compression method with the highest quality. Using them as references, we propose the Bi-Directional Deep Compression (BDDC) network to compress the second layer with relatively high quality. Then, the third-layer frames are compressed with the lowest quality by the proposed Single Motion Deep Compression (SMDC) network, which adopts a single motion map to estimate the motions of multiple frames, thus saving the bit-rate for motion information. In our deep decoder, we develop the Weighted Recurrent Quality Enhancement (WRQE) network, which takes both the compressed frames and the bit stream as inputs. In the recurrent cell of WRQE, the memory and update signal are weighted by quality features to reasonably leverage multi-frame information for enhancement. In our HLVC approach, the hierarchical quality benefits the coding efficiency, since the high quality information facilitates the compression and enhancement of low quality frames at the encoder and decoder sides, respectively. Finally, the experiments validate that our HLVC approach advances the state-of-the-art deep video compression methods, and outperforms the x265 low delay P very fast mode in terms of both PSNR and MS-SSIM. The project page is at https://github.com/RenYang-home/HLVC.
Nowadays, video is becoming more and more popular over the Internet. According to the Cisco forecast, video generates 70% to 80% of mobile data traffic, and the proportion of high resolution videos is also rapidly increasing. Therefore, it is necessary to study video compression in order to transmit high quality video over the bandwidth-limited Internet. During the past decades, plenty of video compression standards have been proposed, such as H.264, H.265, etc. However, these traditional codecs are handcrafted and cannot be optimized in an end-to-end manner. Therefore, jointly optimizing the whole compression framework may further improve the rate-distortion performance.
In recent years, there has been increasing interest in compressing video with Deep Neural Networks (DNNs) [8, 38, 9, 22, 13]. For example, Lu et al. proposed using optical flow for motion compensation and applying auto-encoders to compress the flow and residual. Then, Habibian et al. proposed a 3D auto-encoder for video compression with an autoregressive prior. In these methods, the models are trained with a single loss function and applied to all frames. As such, they fail to generate hierarchical quality layers, in which high quality frames are beneficial for the compression and post-processing of other frames.
This paper proposes the Hierarchical Learned Video Compression (HLVC) approach with three hierarchical quality layers and a recurrent enhancement network. As illustrated in Figure 1, the frames in layers 1, 2 and 3 are compressed with the highest, medium and lowest quality, respectively. The benefits of hierarchical quality are two-fold: first, the high quality frames, which provide high quality references, are able to improve the compression performance of other frames at the encoder side; second, because of the high correlation among neighboring frames, the low quality frames can be enhanced at the decoder side by making use of the advantageous information in high quality frames. The enhancement improves quality without any bit-rate overhead, thus improving the rate-distortion performance. For example, frames 3 and 9 in Figure 1, which belong to layer 3, are compressed with low quality and low bit-rate. Then, our recurrent enhancement network significantly improves their quality by taking advantage of higher quality frames, e.g., frames 1 and 6. As a result, frames 3 and 9 reach quality comparable to frame 6 in layer 2, but consume much less bit-rate. Therefore, our HLVC approach with hierarchical quality and recurrent enhancement achieves efficient video compression.
In our HLVC approach, layer 1 is compressed by an image compression method. For layer 2, we propose the Bi-Directional Deep Compression (BDDC) network, which uses the compressed frames in layer 1 as bi-directional references. Then, because of the correlation between the motions of neighboring frames, we propose compressing layer 3 by our Single Motion Deep Compression (SMDC) network, which applies a single motion map to estimate the motions among several frames, reducing the bit-rate for encoding motion maps. Finally, we develop the Weighted Recurrent Quality Enhancement (WRQE) network, in which the recurrent cells are weighted by quality features to reasonably apply multi-frame information for recurrent enhancement. The experiments show that our HLVC approach achieves state-of-the-art performance among learned video compression methods, and outperforms the x265 low delay P very fast mode. Moreover, the ablation studies prove the effectiveness of each network in our HLVC approach.
Deep image compression. In the past decades, plenty of handcrafted image compression standards were proposed, such as JPEG, JPEG 2000 and BPG. Recently, DNNs have also been successfully applied to improve the performance of image compression [30, 31, 1, 29, 2, 3, 25, 24, 18, 14, 17]. In particular, Ballé et al. [2, 3] proposed end-to-end DNN frameworks for image compression, applying the factorized-prior and hyperprior density models to estimate the cross entropy. Later, hierarchical prior and context-adaptive entropy models were designed to further advance the rate-distortion performance, outperforming state-of-the-art traditional image codecs. Moreover, recurrent structures have also been adopted in image compression networks [30, 31, 14].
Deep video compression. Building on traditional image compression standards, several handcrafted algorithms, e.g., MPEG, H.264 and H.265, were standardized for video compression. In recent years, deep learning has also attracted increasing attention in video compression. Many approaches [40, 21, 11, 19, 20] were proposed to replace components of traditional video codecs with DNNs. For instance, Liu et al. utilized DNNs in the fractional interpolation of motion compensation, and [11, 19, 20] use DNNs to improve the in-loop filter. However, these methods only advance the performance of one particular module, and the modules of the video compression framework cannot be jointly optimized.
Most recently, several end-to-end deep video compression methods have been proposed [8, 7, 38, 9, 22, 13]. Specifically, Wu et al.  proposed predicting frames by interpolation from reference frames, and the image compression network of  is applied to compress the residual. In 2019, Lu et al.  proposed the Deep Video Compression (DVC) method, in which the optical flow is used to estimate the temporal motion, and two auto-encoders are employed to compress the motion and residual, respectively. Meanwhile, in , the spatial-temporal energy compaction is added into the loss function to improve the performance of video compression. Later, Habibian et al.  proposed the rate-distortion auto-encoder, which uses autoregressive prior for video entropy coding.
Among the existing methods, only Wu et al. uses hierarchical prediction. Nevertheless, none of them learns to compress video with hierarchical quality; therefore, they fail to provide high quality references for the compression of other frames, and cannot take advantage of high quality information in multi-frame post-processing.
Enhancement of compressed video. Since lossy video compression inevitably leads to artifacts and quality loss, some works focus on enhancing the quality of compressed video [34, 45, 43, 44, 42, 35, 23]. Among them, [34, 45, 43] are single-frame approaches that take one frame as input at a time. Then, Yang et al. [44, 42] proposed multi-frame quality enhancement approaches, which make use of the inter-frame correlation. Besides, a deep Kalman filter was also proposed to reduce compression artifacts.
Nevertheless, the above enhancement methods are all designed for the post-processing of existing traditional video codecs. Therefore, in the multi-frame approaches [44, 42], the accurate frame quality cannot be obtained, and can only be estimated from the prediction error. In our HLVC approach, the compression quality of each frame is encoded in the bit stream, which is input together with the compressed frames to our enhancement network. Hence, the enhancement is guided by accurate frame quality and serves as a component of our deep decoder in the whole video compression framework.
Figure 2 shows the framework of our HLVC approach, taking the first Group Of Pictures (GOP) as an example; our framework is the same for each GOP. In our HLVC approach, the frames are compressed as three hierarchical quality layers, namely layers 1, 2 and 3, with decreasing quality.
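To make the layer structure concrete, the frame-to-layer assignment suggested by Figure 2 can be sketched as follows. The exact positions (layer 1 at the GOP boundaries, layer 2 at the middle frame) are an assumption inferred from the GOP = 10 example, not code from the paper.

```python
def quality_layer(frame_idx, gop_size=10):
    """Assign a hierarchical quality layer to a frame index.

    Hypothetical assignment inferred from the paper's GOP = 10 example:
    layer 1 at GOP boundaries, layer 2 at the middle frame, layer 3 elsewhere.
    """
    pos = frame_idx % gop_size
    if pos == 0:
        return 1          # highest quality, image compression
    if pos == gop_size // 2:
        return 2          # medium quality, BDDC network
    return 3              # lowest quality, SMDC network

# Layers for the first GOP (frames 0..10):
layers = [quality_layer(i) for i in range(11)]
```

Under this assignment, most frames fall into the cheap layer 3, which is exactly why improving their quality at the decoder pays off.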
Layer 1. The first layer (red frames in Figure 2) is encoded by an image compression method. Similar to the “I frames” in traditional codecs [37, 28], the frames in layer 1 consume the highest bit-rates and have the highest compression quality. As such, they are able to stop error propagation during video encoding and decoding. More importantly, these frames provide high quality information, which benefits the compression and enhancement of neighboring frames.
Layer 2. Then, the frames of layer 2 (orange frames in Figure 2) are in the middle of two layer 1 frames. We propose the Bi-directional Deep Compression (BDDC) network to compress layer 2. Our BDDC network takes the previous and the upcoming compressed frames in layer 1 as bi-directional references. In our HLVC approach, we compress layer 2 as the medium quality layer, which also provides beneficial information to compress and enhance low quality frames in layer 3. Our BDDC network is introduced in Section 3.2.
Layer 3. The remaining frames belong to layer 3 (yellow frames in Figure 2); they are compressed with the lowest quality and contribute the least bit-rate. In the latest deep video compression approaches, e.g., Wu et al. and DVC, each frame requires at least one motion map for motion compensation. However, the motions between consecutive frames are correlated, so encoding one motion map for each frame leads to bit redundancy. Hence, we propose the Single Motion Deep Compression (SMDC) network, which applies a single compressed motion map to describe the motions between multiple frames, reducing the bit-rate. Note that the frames to are compressed in the same manner as to, so they are omitted in Figure 2. Our SMDC network is introduced in Section 3.3.
Enhancement. Then, because of the high correlation among video frames, as Figure 2 shows, we develop the Weighted Recurrent Quality Enhancement (WRQE) network, in which the recurrent cells are weighted by quality features to reasonably leverage multi-frame information. In particular, the quality of layer 3 can be significantly improved by taking advantage of the high quality information in layers 1 and 2. Since this does not lead to any bit overhead, it is equivalent to saving bit-rate, especially on low quality frames. Note that WRQE is a part of our deep decoder, which takes both the compressed frames and the encoded bits as inputs. Our WRQE network is detailed in Section 3.4.
Our BDDC network for compressing layer 2 is shown in Figure 3. Here, we also use the first GOP as an example. In BDDC, the Motion Estimation (ME) subnet is first developed to capture the temporal motion between the reference and target frames. Since the interval between the frames in layers 1 and 2 is long (e.g., 5 frames in Figure 3), we follow  to apply a pyramid network to handle the large motions. Note that, since the backward warping is used in our approach, we estimate the backward motion. For example, in Figure 3, the outputs of our ME subnet are the motions from to (denoted as ) and from to (denoted as ), respectively.
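Backward warping itself is a standard operation: each target pixel fetches the reference pixel its motion vector points to. A minimal nearest-neighbor sketch is shown below; the networks in the paper presumably use differentiable bilinear sampling instead, so treat this as illustrative only.

```python
import numpy as np

def backward_warp(ref, flow):
    """Warp a reference frame toward the target using backward motion.

    Nearest-neighbor sketch: flow[y, x] holds the (dx, dy) motion from
    target pixel (x, y) back into `ref`. Samples are clipped to borders.
    """
    h, w = ref.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.rint(xs + flow[..., 0]), 0, w - 1).astype(int)
    src_y = np.clip(np.rint(ys + flow[..., 1]), 0, h - 1).astype(int)
    return ref[src_y, src_x]

# A global shift by (+1, 0): every target pixel samples its right neighbor.
ref = np.arange(16, dtype=float).reshape(4, 4)
flow = np.zeros((4, 4, 2))
flow[..., 0] = 1.0
warped = backward_warp(ref, flow)
```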
Given the estimated motions, an auto-encoder is utilized for Motion Compression (MC), in which the encoder and decoder networks are defined as and , respectively. Because of the consistency among video frames, there exists correlation between the motions of different frames. Therefore, we propose concatenating the bi-directional motions as the input to the auto-encoder, which transforms the input into a latent representation. The latent is then quantized and fed to the decoder to generate the compressed motions, while the quantized representation is encoded to bits by arithmetic coding. As such, defining and as the compressed motions, our MC subnet can be formulated as
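The elided formulation plausibly takes the following form; all symbols here are assumptions, with $m_1, m_2$ the estimated bi-directional motions, $E_m / D_m$ the motion encoder/decoder, $Q(\cdot)$ quantization, $\hat{y}$ the quantized latent and $\hat{m}_1, \hat{m}_2$ the compressed motions:

```latex
% Hedged reconstruction of the elided MC-subnet equations (notation assumed)
\hat{y} = Q\big(E_m([m_1, m_2])\big), \qquad
[\hat{m}_1, \hat{m}_2] = D_m(\hat{y}).
```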
Next, the reference frames are warped to the target frame using the compressed motions, and the CNN-based Motion Post-processing (MP) subnet (denoted as ) is designed to merge the warped frames for motion compensation. Defining as the backward warping operation, the motion compensation can be formulated as
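A hedged reconstruction of the elided compensation step, writing $W(\cdot,\cdot)$ for backward warping, $\hat{x}_1^{\mathrm{ref}}, \hat{x}_2^{\mathrm{ref}}$ for the two reference frames and $\mathrm{MP}(\cdot)$ for the motion post-processing subnet (symbols assumed):

```latex
% Hedged reconstruction: warp both references, then merge with the MP subnet
\bar{x} = \mathrm{MP}\big(W(\hat{x}_1^{\mathrm{ref}}, \hat{m}_1),\;
                          W(\hat{x}_2^{\mathrm{ref}}, \hat{m}_2)\big).
```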
where denotes the compensated frame. Finally, the residual between the compensated frame and the raw frame is compressed by the Residual Compression (RC) subnet. Similar to the MC subnet, there are encoder () and decoder () networks in RC. Using to denote the quantized latent representation, our RC subnet can be written as
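In the same assumed notation, with $E_r / D_r$ the residual encoder/decoder, $\hat{z}$ the quantized residual latent and $\bar{x}$ the compensated frame, the RC subnet plausibly reads:

```latex
% Hedged reconstruction of the elided RC-subnet equations (notation assumed)
\hat{z} = Q\big(E_r(x - \bar{x})\big), \qquad
\hat{x} = \bar{x} + D_r(\hat{z}).
```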
where represents the compressed frame of . In our RC subnet, is encoded to bits using arithmetic coding, which contributes to the bits of layer 2 together with the encoded motion representation. Besides, the compression quality is calculated and included in the bit stream (only a half-precision floating-point number per frame, which is negligible within the whole bit stream), to be used in our deep decoder (refer to Section 3.4). In this paper, the Multi-Scale Structural SIMilarity (MS-SSIM) and the Peak Signal-to-Noise Ratio (PSNR) are used to evaluate quality.
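Of the two metrics, PSNR has a simple closed form that can be stated exactly (MS-SSIM is multi-scale and more involved, so it is omitted here); a minimal sketch:

```python
import numpy as np

def psnr(raw, compressed, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB between a raw and a compressed frame."""
    mse = np.mean((raw.astype(np.float64) - compressed.astype(np.float64)) ** 2)
    if mse == 0:
        return float('inf')  # identical frames
    return 10.0 * np.log10(max_val ** 2 / mse)

# A uniform error of 1 level on 8-bit frames gives MSE = 1,
# so PSNR = 20 * log10(255) ~= 48.13 dB.
a = np.zeros((4, 4), dtype=np.uint8)
b = np.ones((4, 4), dtype=np.uint8)
```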
The other frames are then compressed as layer 3 by the proposed SMDC network, using the nearest compressed frames in layers 1 and 2 as references. Figure 4 shows the architecture of our SMDC network, taking and as an example. As Figure 4 shows, the frame is first compressed by a DNN with an architecture similar to BDDC, which contains the ME, MC, MP and RC subnets, and the compressed frame is obtained. As aforementioned, due to the correlation of motions among multiple neighboring frames, the motion between and is used to predict the motions between and or . As such, the frame can be compressed with the reference frames of and , without consuming bits for an additional motion map, thus improving the rate-distortion performance.
In our SMDC network, we propose applying the inverse motion for motion prediction. Specifically, a motion map can be defined as , where and denote the coordinates, while and are the horizontal and vertical motion maps, respectively. For , the inverse motion can be expressed as
In (7), describes that the pixel at moves to the new position of , and therefore the value of should be assigned to at the new position. For simplicity, we define the inverse operation as , i.e., .
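The inverse operation can be sketched by "splatting" the negated motion to each vector's destination. The nearest-neighbor tie-breaking and the zero-filled holes below are assumptions for illustration; the paper's exact handling of collisions and occlusions may differ.

```python
import numpy as np

def invert_motion(flow):
    """Approximate inverse of a motion map by nearest-neighbor splatting.

    The pixel at (x, y) moves to (x + fx, y + fy), so -f(x, y) is
    assigned at that new (rounded, clipped) position. Collisions are
    resolved arbitrarily and unreached positions stay zero.
    """
    h, w = flow.shape[:2]
    inv = np.zeros_like(flow)
    ys, xs = np.mgrid[0:h, 0:w]
    nx = np.clip(np.rint(xs + flow[..., 0]), 0, w - 1).astype(int)
    ny = np.clip(np.rint(ys + flow[..., 1]), 0, h - 1).astype(int)
    inv[ny, nx] = -flow[ys, xs]
    return inv

# A uniform shift by (+1, 0) inverts to a uniform shift by (-1, 0)
# everywhere the splat lands; the left column remains a hole.
f = np.zeros((3, 3, 2))
f[..., 0] = 1.0
f_inv = invert_motion(f)
```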
Recall that backward warping is adopted in our approach, so the motion for compressing is from to , which is defined as . Similarly, using and as reference frames, the motions from to (denoted as ) and from to (denoted as ) are required for the compression of . Note that since the raw frame is not available at the decoder side, cannot be recovered in decoding. Hence, the compressed motion is used to predict and . Given and the inverse operation, can be predicted as
In a similar way, is obtained by
Then, the same as (3) and (4), the reference frames are warped and fed into the MP subnet together with the predicted motions to generate the motion compensated frame . Finally, the residual () is compressed by the RC subnet to obtain the compressed frame . Here, the compressed quality and are also included in the bit stream, which are to be utilized in our WRQE network.
Finally, at the decoder side, the WRQE network is developed for quality enhancement. Our WRQE network is designed based on the QG-ConvLSTM method  with a spatial-temporal structure, which uses the quality-gated cell  to exploit the multi-frame information. The architecture of our WRQE network is illustrated in Figure 5.
Different from , we adopt residual blocks in the spatial and reconstruction networks, and employ skip connections to improve the enhancement performance. More importantly, as discussed in [44, 42], the significance of a frame for enhancing other frames depends on its quality relative to others. However, in [44, 42], the accurate quality of each frame cannot be obtained at the decoder. In contrast, the bit stream, which encodes the compression quality, is fed together with the compressed frames into our WRQE network. Therefore, we decode the compression quality and the bit-rate from our bit stream, and utilize them as the quality feature. The quality feature is fed into the weights generator to obtain the weights, which are input to the quality-gated cell together with the spatial features.
As shown in Figure 5, the weights are learned to reasonably control how much previous memory to forget and how much current information to update. Specifically, on high quality frames, the memory is expected to be multiplied by a small forget weight to discard previous low quality information, while the update weight should be large to add the frame's high quality information to the memory for enhancing other frames. In contrast, a large forget weight and a small update weight are expected on low quality frames. Furthermore, since the sigmoid function keeps the forget weight below one, the previous information decays in the memory along the frame distance. This matches the fact that frames with longer distance are less correlated, and therefore less useful for quality enhancement. As such, in the quality-gated cell, frames with different quality contribute to the memory with different significance, making our WRQE network reasonably leverage multi-frame information for quality enhancement.
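The gating behavior can be illustrated with a scalar toy cell. This is not the WRQE architecture: the actual cell is a ConvLSTM whose weights come from a learned generator, whereas here a single hand-set affine-plus-sigmoid stands in for that generator.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def quality_gated_step(memory, feature, quality, w=4.0, b=-2.0):
    """One simplified step of a quality-gated recurrent cell (scalar toy).

    High quality -> large update weight and small forget weight,
    so the frame's information replaces old memory; low quality ->
    the opposite, so the memory is mostly preserved.
    """
    w_update = sigmoid(w * quality + b)   # high quality => large update
    w_forget = 1.0 - w_update             # ... and small forget weight
    return w_forget * memory + w_update * feature

m = 0.0
m = quality_gated_step(m, feature=1.0, quality=1.0)       # high quality frame
m_low = quality_gated_step(m, feature=-1.0, quality=0.0)  # low quality frame
```

In the toy run, the high quality frame overwrites most of the memory, while the low quality frame barely updates it.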
In the training phase, we use the density model of  to estimate the bit-rate for encoding and in (1) and (5), and define the estimated bit-rate as . Then, we follow [24, 22] to formulate the rate-distortion optimization as
is the hyperparameter that controls the trade-off between the distortion and the bit-rate.
It can be seen from (10) that the compression quality of the trained model depends on the hyperparameter , i.e., larger results in higher quality and higher bit-rate. Therefore, to achieve the hierarchical compression quality in our HLVC approach, different values are applied for our BDDC and SMDC networks, which compress layers 2 and 3, respectively. To be specific, given (10) and the estimated bit-rates, we set the loss function of our BDDC network as
and the loss for our SMDC network is
In (12), and are the representations in the RC networks of and , respectively. In (11) and (12), we use the Mean Square Error (MSE) as the distortion, i.e., , when training our HLVC approach for PSNR. We apply when optimizing for MS-SSIM. More importantly, we set in (11) and (12) to make our approach learn to compress layer 2 with higher quality than layer 3, thus achieving the hierarchical quality.
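A hedged reconstruction of the elided objectives (10)-(12), with all symbols assumed: $D$ the distortion, $R(\cdot)$ the estimated bit-rate of a quantized latent, $\hat{y}$ a motion latent, $\hat{z}, \hat{z}'$ the residual latents of the two frames compressed by SMDC, and $\lambda_2 > \lambda_3$ the layer-wise trade-off weights:

```latex
% Hedged reconstruction of the elided loss functions (notation assumed)
L = \lambda\, D + R, \qquad
L_{\mathrm{BDDC}} = \lambda_2\, D(x, \hat{x}) + R(\hat{y}) + R(\hat{z}), \qquad
L_{\mathrm{SMDC}} = \lambda_3 \sum_{k} D(x_k, \hat{x}_k)
                    + R(\hat{y}) + R(\hat{z}) + R(\hat{z}').
```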
Finally, we train our WRQE network by minimizing the loss function of
where is the step length of our recurrent network. Because of the bi-directional recurrent structure, larger leads to longer decoding latency and also longer training time. Therefore, we set as 11 (the interval of frames in layer 1) in both training and inference phases.
[Table 1: BDBR (%) calculated by MS-SSIM and by PSNR, with the anchor of H.265, for the HLVC models optimized for PSNR and for MS-SSIM, each with and without WRQE, on each test dataset.]
The results of Cheng et al. are calculated from data provided by the authors, which are tested on the first 81 frames of each video.
The experiments are conducted to validate the effectiveness of our HLVC approach. Our BDDC and SMDC networks are trained on the Vimeo-90k dataset, and we collect 142 videos from Xiph and VQEG to train our WRQE network. We test our HLVC approach on the JCT-VC (Classes B, C and D) and the UVG datasets, which do not overlap with our training sets. Among them, UVG and JCT-VC Class B are high resolution datasets, while JCT-VC Classes C and D have lower resolutions. For a fair comparison with previous work, we test JCT-VC videos on the first 100 frames, and UVG videos on all frames. The quality is evaluated in terms of MS-SSIM and PSNR. We train the models with = 8, 16, 32 and 64 for MS-SSIM, and with = 256, 512, 1024 and 2048 for PSNR. To achieve hierarchical quality, we set . We compare our HLVC approach with the latest learned video compression methods. Among them, Habibian et al. (ICCV'19) and Cheng et al. (CVPR'19) are optimized for MS-SSIM, while DVC (CVPR'19) and Wu et al. (ECCV'18) are optimized for PSNR. Besides, the video coding standards H.264 and H.265 are also included in the comparison. Following previous work, we use the x264 and x265 Low-Delay P (LDP) very fast mode, with the same GOP size as our HLVC approach (i.e., GOP = 10 in Figure 2) on all test videos (please refer to https://github.com/RenYang-home/HLVC for detailed configurations).
Rate-distortion curves. Figure 6 shows the rate-distortion curves on the JCT-VC and UVG datasets. The quality is evaluated in terms of MS-SSIM and PSNR, and the bit-rate is measured in bits per pixel (bpp). As shown in Figure 6 (a) and (b), our MS-SSIM model outperforms all learned approaches, and reaches better performance than H.264 and H.265. Notably, at low bit-rates, Habibian et al. and DVC perform worse than H.265 on UVG, and DVC is only comparable to H.265 at low bit-rates on JCT-VC. In contrast, the rate-distortion curves of our HLVC approach are clearly above H.265 from low to high bit-rates. The PSNR curves are illustrated in Figure 6 (c) and (d). Our PSNR model achieves better performance than the latest PSNR-optimized methods DVC and Wu et al., and also outperforms H.265 on the JCT-VC dataset. On UVG, we reach better performance than H.265 at high bit-rates.
Bit-rate reduction. Furthermore, we evaluate the Bjøntegaard Delta Bit-Rate (BDBR) with the anchor of H.265. The BDBR calculates the average bit-rate difference with respect to the anchor, and lower BDBR values indicate better performance. Table 1 tabulates the BDBR calculated by MS-SSIM and PSNR, in which negative numbers indicate bit-rate reduction compared to the anchor, i.e., outperforming H.265, and bold numbers are the best results among all learned methods.
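The BDBR metric follows the standard Bjøntegaard procedure: fit log-rate as a cubic polynomial of quality for both codecs, integrate over the overlapping quality range, and convert the mean log-rate gap to a percentage. A minimal sketch:

```python
import numpy as np

def bd_rate(rate_anchor, qual_anchor, rate_test, qual_test):
    """Bjoentegaard Delta Bit-Rate (%) of a test codec vs. an anchor.

    Negative values mean the test codec needs fewer bits at equal quality.
    """
    lr_a, lr_t = np.log10(rate_anchor), np.log10(rate_test)
    p_a = np.polyfit(qual_anchor, lr_a, 3)   # cubic fit of log-rate vs quality
    p_t = np.polyfit(qual_test, lr_t, 3)
    lo = max(min(qual_anchor), min(qual_test))
    hi = min(max(qual_anchor), max(qual_test))
    ia = np.polyval(np.polyint(p_a), hi) - np.polyval(np.polyint(p_a), lo)
    it = np.polyval(np.polyint(p_t), hi) - np.polyval(np.polyint(p_t), lo)
    avg_diff = (it - ia) / (hi - lo)         # mean log10 rate difference
    return (10.0 ** avg_diff - 1.0) * 100.0

# Sanity check: a codec matching the anchor's quality at 90% of its
# bit-rate has a BDBR of exactly -10%.
r = np.array([0.05, 0.1, 0.2, 0.4])
q = np.array([30.0, 33.0, 36.0, 39.0])
```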
In Table 1, for a fair comparison on MS-SSIM with the PSNR-optimized methods DVC and H.265, we first report the BDBR of our PSNR model in terms of MS-SSIM. As Table 1 shows, our PSNR model outperforms H.265 on MS-SSIM, and is also better than DVC. On JCT-VC Class C, our PSNR model even clearly outperforms the MS-SSIM-optimized method Cheng et al. in terms of MS-SSIM. Furthermore, our MS-SSIM model outperforms all existing learned methods on MS-SSIM, and reduces the bit-rate of H.265 on average. More importantly, the performance of our MS-SSIM model before quality enhancement (without WRQE) is still significantly better than that of all previous methods. In conclusion, our HLVC approach achieves state-of-the-art MS-SSIM performance among learned and handcrafted video compression methods.
Table 1 also shows the BDBR results calculated by PSNR. Our PSNR model performs best among all learned methods in terms of PSNR. In particular, we outperform the latest PSNR-optimized method DVC on all test sets. Compared with H.265, our PSNR model reduces the bit-rate on average, despite a bit-rate overhead on JCT-VC Class C. Among the 20 videos in our test sets, our PSNR model beats H.265 on 14 videos in terms of PSNR. Besides, as shown in Table 1, our PSNR model without WRQE still outperforms DVC. In summary, our HLVC approach outperforms all existing learned approaches on PSNR, and reaches better performance than H.265 (x265 LDP very fast mode) on most videos. Furthermore, visual results are provided at https://github.com/RenYang-home/HLVC.
Ablation studies are conducted to prove the effectiveness of each component of our HLVC approach. We define the baseline model as our approach without Hierarchical Quality (HQ) (using models trained with the same trade-off hyperparameter for all frames), without the Single Motion (SM) strategy (compressing one motion map for each frame) and without our enhancement network WRQE. Then, we analyze the performance of the baseline model and of the models with these components added successively, i.e., “baseline+HQ”, “baseline+HQ+SM” and our whole framework (“baseline+HQ+SM+WRQE”). Moreover, we also discuss the enhancement on non-hierarchical video (“baseline+RE”). The ablation results are illustrated in Figure 7.
Hierarchical quality. Figure 7 shows that the “baseline+HQ” model clearly outperforms the baseline model, indicating the effectiveness of applying hierarchical quality to improve compression performance. Besides, Figure 8 shows the changes of bit-rate and PSNR on high quality (layers 1 and 2) and low quality (layer 3) frames from the baseline to “baseline+HQ”. On layers 1 and 2, employing hierarchical quality increases both the bit-rate and the PSNR. On layer 3, “baseline+HQ” achieves higher PSNR than the baseline with even lower bit-rate. This is because layers 1 and 2 in “baseline+HQ” provide high quality references for compressing layer 3. Since the frames in layer 3 form the majority of the video, applying hierarchical quality improves the overall compression performance.
Single motion strategy. Then, as shown in Figure 7, the model with our SMDC network (“baseline+HQ+SM”) further improves the performance of “baseline+HQ” by reducing the bits for motion maps. After using our SMDC network in “baseline+HQ+SM”, the bits consumed for motion decrease, and the total bit-rate also decreases. Meanwhile, the PSNR improves from “baseline+HQ” to “baseline+HQ+SM”, since more bits can be allocated to residual coding. This validates that our SMDC network successfully reduces the redundancy of video motion and benefits compression performance.
Recurrent enhancement. As we can see from Figure 7, our WRQE network (“baseline+HQ+SM+WRQE”) effectively further enhances the quality over “baseline+HQ+SM”. As the example in Figure 9 shows, our WRQE network significantly enhances the compression quality, especially on low quality frames; e.g., the PSNR improvement is around 1 dB on frames 3 and 9. Figure 9 also shows the learned forget and update weights. On high quality frames, our WRQE network learns to generate a larger update weight and a smaller forget weight, so as to decay the previous memory and add the frame's helpful information to the memory; the opposite holds on low quality frames. Moreover, as the visual results in Figure 9 show, frame 3 suffers from severe distortion due to the low bit-rate, while frame 6, which has higher quality, is highly correlated with frame 3. Then, in WRQE, because of the large update weight of frame 6, a large proportion of its information is added to the memory. Therefore, WRQE is able to recover the lost information in frame 3 and significantly enhance its quality. These results validate the effectiveness of our WRQE network.
Benefits of hierarchy for enhancement. Finally, we show the result of our WRQE network on the baseline model without hierarchical quality. As shown in Figure 7, the quality improvement from the baseline to “baseline+WRQE” is much smaller than that of our hierarchical quality method (from “baseline+HQ+SM” to our full HLVC approach). This is because all frames have similar quality in the non-hierarchical model, so there are no high quality references to help the enhancement of other frames. This shows that the proposed hierarchical quality structure facilitates the enhancement of our WRQE network, and, as aforementioned, our WRQE network also successfully learns to make reasonable use of the hierarchical quality. As a result, our whole framework achieves state-of-the-art performance among learned video compression methods, and on average outperforms H.265 (x265 LDP very fast mode).
This paper has proposed a learned video compression approach with hierarchical quality and recurrent enhancement. Specifically, we proposed compressing the frames in layers 1, 2 and 3 with decreasing quality, using an image compression method and the proposed BDDC and SMDC networks to compress the three layers, respectively. Then, we developed the WRQE network with a quality-weighted recurrent cell for multi-frame enhancement. The experiments validated the effectiveness of our HLVC approach. Like all existing learned compression methods, although our networks are end-to-end DNNs, we manually set the frame structure in our approach. A promising direction for future work is to develop DNNs that learn to automatically design the prediction and hierarchical structures, and thus optimize the rate-distortion performance over the whole video.
Soft-to-hard vector quantization for end-to-end learning compressible representations. In Advances in Neural Information Processing Systems (NeurIPS), pp. 1141–1151.
A convolutional neural network approach for post-processing in HEVC intra coding. In Proceedings of the International Conference on Multimedia Modeling (MMM), pp. 28–39.
Video compression with rate-distortion autoencoders. In Proceedings of the IEEE International Conference on Computer Vision (ICCV).
Conditional probability models for deep image compression. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4394–4402.
Variable rate image compression with recurrent neural networks. In Proceedings of the International Conference on Learning Representations (ICLR).