Feedback Recurrent Autoencoder for Video Compression

by Adam Goliński et al.

Recent advances in deep generative modeling have enabled efficient modeling of high dimensional data distributions and opened up a new horizon for solving data compression problems. Specifically, autoencoder based learned image or video compression solutions are emerging as strong competitors to traditional approaches. In this work, we propose a new network architecture, based on common and well studied components, for learned video compression operating in low latency mode. Our method yields state of the art MS-SSIM/rate performance on the high-resolution UVG dataset, among both learned video compression approaches and classical video compression methods (H.265 and H.264), in the rate range of interest for streaming applications. Additionally, we provide an analysis of existing approaches through the lens of their underlying probabilistic graphical models. Finally, we point out issues with temporal consistency and color shift observed in empirical evaluation, and suggest directions to alleviate them.




1 Introduction

With over 60% of internet traffic consisting of video [1], lossy video compression is a critically important problem, promising reductions in bandwidth and storage costs and generally increasing the scalability of the internet. Although the relation between probabilistic modeling and compression has been known since Shannon, video codecs in use today are only to a very small extent based on learning and are not end-to-end optimized for rate-distortion performance on a large and representative video dataset.

The last few years have seen a surge in interest in novel codec designs based on deep learning [27, 43, 32, 12, 26, 13], which are trained end-to-end to optimize rate-distortion performance on a large video dataset, and a large number of network designs with various novel, often domain-specific components have been proposed. In this paper we show that a relatively simple design based on standard, well understood and highly optimized components such as residual blocks [14], convolutional recurrent networks [34], optical flow warping and a PixelCNN [38] prior yields state of the art rate-distortion performance.

We focus on the online compression setting, where video frames can be compressed and transmitted as soon as they are recorded (in contrast to approaches which require buffering several frames before encoding), which is necessary for applications such as video conferencing and cloud gaming. Additionally, in both applications, the ability to finetune the neural codec to its specific content holds promise to further significantly reduce the required bandwidth [12].

There are two key components to our approach beyond the residual block based encoder and decoder architecture. First, to exploit long range temporal correlation, we follow the principle proposed in the Feedback Recurrent AutoEncoder (FRAE) [45], which was shown to be effective for speech compression: we add a convolutional GRU module in the decoder and feed the recurrent state back to the encoder. Second, we apply a motion estimation network at the encoder side and enforce optical flow learning by using an explicit loss term during the initial stage of training, which leads to better optical flow output at the decoder side and consequently much better rate-distortion performance. The proposed network architecture is compared with existing learned approaches through the lens of their underlying probabilistic models in Section 3.


We compare our method with state-of-the-art traditional codecs and learned approaches on the high resolution UVG [37] video dataset by plotting rate-distortion curves, with MS-SSIM [40] as the distortion metric. We show that we outperform all methods in the low to high rate regime, and more specifically in the 0.09–0.13 bits per pixel (bpp) region, which is of practical interest for video streaming [17].

To summarize, our main contributions are as follows:

  1. We develop a simple feedback recurrent video compression architecture based on widely used building blocks (Section 2).

  2. We study the differences and connections of existing learned video compression methods by detailing the underlying sequential latent variable models (Section 3).

  3. Our solution achieves state of the art rate-distortion performance when compared with other learned video compression approaches and performs competitively with or outperforms AVC (H.264) and HEVC (H.265) traditional codecs under equivalent settings (Section 4).

2 Methodology

2.1 Problem setup

Let us denote the image frames of a video as x_{1:T} = (x_1, x_2, …, x_T). Compression of the video is done by an autoencoder that maps x_{1:T}, through an encoder, into compact discrete latent codes z_{1:T}. The codes are then used by a decoder to form reconstructions x̂_{1:T}.

We assume the use of an entropy coder together with a probabilistic model on z_{1:T}, denoted as p(z_{1:T}), to losslessly compress the discrete latents. The ideal codeword length can then be characterized as R = E[−log2 p(z_{1:T})], which we refer to as the rate term. (For a practical entropy coder, there is a constant overhead per block/stream, which is negligible with a large number of bits per stream and can thus be ignored. For example, for adaptive arithmetic coding (AAC), there is up to 2-bit inefficiency per stream [30].)
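As a toy illustration of the rate term, the sketch below computes the ideal codeword length −log2 p(z), summed over a stream of latent symbols, under a made-up four-symbol prior (a real prior would be a learned model such as the PixelCNN used later):

```python
import math

def ideal_code_length_bits(latents, prior):
    """Total ideal codeword length: sum of -log2 p(z) over all latent symbols."""
    return sum(-math.log2(prior[z]) for z in latents)

# Toy prior over four latent symbols; probabilities sum to 1.
prior = {0: 0.5, 1: 0.25, 2: 0.125, 3: 0.125}
latents = [0, 0, 1, 2, 0, 3]
bits = ideal_code_length_bits(latents, prior)  # 1 + 1 + 2 + 3 + 1 + 3 = 11 bits
```

More probable symbols receive shorter codewords, which is exactly what an entropy coder approaches in practice, up to the per-stream overhead noted above.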

Given a distortion metric d, the lossy compression problem can be formulated as the optimization of the following Lagrangian functional

L_RD = E[d(x_{1:T}, x̂_{1:T})] + β E[−log2 p(z_{1:T})],    (1)

where β is the Lagrange multiplier that controls the balance of rate and distortion. It is known that this objective function is equivalent to the evidence lower bound in the β-VAE [15] when the encoder distribution is deterministic or has a fixed entropy. Hence p(z_{1:T}) is often called the prior distribution. We refer the reader to [12, 13, 36] for a more detailed discussion. Throughout this work we use MS-SSIM [40] measured in RGB space as our distortion metric for both training and evaluation.
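To make the role of β concrete, here is a minimal sketch, with made-up distortion and rate numbers, showing how the Lagrangian selects between two hypothetical encodings of the same frame:

```python
def rd_loss(distortion, rate_bpp, beta):
    """Lagrangian of Eq. (1): distortion plus beta-weighted rate."""
    return distortion + beta * rate_bpp

# Two hypothetical encodings of one frame (numbers are illustrative only):
coarse = {"distortion": 0.030, "rate_bpp": 0.05}  # cheap but blurry
fine = {"distortion": 0.010, "rate_bpp": 0.30}    # sharp but costly

# A large beta penalizes rate heavily and favors the cheaper encoding;
# a small beta favors the lower-distortion one.
pick = lambda beta: min(
    (coarse, fine),
    key=lambda e: rd_loss(e["distortion"], e["rate_bpp"], beta),
)
```

Sweeping β and retraining, as done in Section 4, traces out one point of the rate-distortion curve per β value.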

2.2 Overview of the proposed method

In our work, we focus on the problem of online compression of video using a causal autoencoder, i.e. one that outputs codes and a reconstruction for each frame on the fly without the need to access future context. In classic video codec terms, we are interested in the low delay P (LDP) setting, where only I-frames (intra-coded; independent of other frames) and P-frames (predictive inter-coded; using previously reconstructed past but not future frames) are used. (We refer the reader to [7], [35] and Section 2 of [43] for a good overview of frame structures in classic codecs.) We do not make use of B-frames (bidirectionally interpolated frames).

The full video sequence is broken down into groups of pictures (GoPs) x_{1:T}, each starting with an I-frame x_1 followed by P-frames x_{2:T}. We use a separate encoder, decoder and prior for the I-frames and the P-frames. The rate term is then decomposed as

R = E[−log2 p^I(z^I_1)] + E[−log2 p^P(z^P_{2:T})],    (2)

where a superscript is used to indicate the frame type.

2.3 Network architecture

I-frame compression is equivalent to image compression, and there are already many schemes available [4, 5, 28, 31, 36]. Here we apply the encoder and decoder design of [28] for I-frame compression. In the remainder of this section we focus only on the design of the P-frame compression network and discuss how previously transmitted information can best be utilized to reduce the amount of information needed to reconstruct subsequent frames.

2.3.1 Baseline architecture

One straightforward way to utilize history information is for the autoencoder to transmit only a motion vector (e.g. block based or a dense optical flow field) and a residual, while using the previously decoded frame(s) as a reference. At the decoder side, the decoded motion vector (in our case optical flow) is used to warp previously decoded frame(s) using bilinear interpolation [20] (referred to later as the warp function), and the result is then refined by the decoded residual. This framework serves as the basic building block of many conventional codecs such as AVC and HEVC. One instantiation of this framework built with autoencoders is illustrated in Fig. 1.
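The warp function can be sketched in pure Python as below: a minimal backward-warping routine with bilinear interpolation and border clamping. In practice this would be a batched GPU tensor operation (e.g. PyTorch's grid_sample); this sketch only illustrates the sampling logic.

```python
import math

def warp(img, flow):
    """Backward-warp: out[y][x] samples img at (x + dx, y + dy) using bilinear
    interpolation; sample coordinates outside the image are clamped to the border."""
    H, W = len(img), len(img[0])
    out = [[0.0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            dx, dy = flow[y][x]
            sx, sy = x + dx, y + dy
            x0 = min(max(int(math.floor(sx)), 0), W - 1)
            y0 = min(max(int(math.floor(sy)), 0), H - 1)
            x1, y1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1)
            wx, wy = sx - math.floor(sx), sy - math.floor(sy)
            top = (1 - wx) * img[y0][x0] + wx * img[y0][x1]
            bot = (1 - wx) * img[y1][x0] + wx * img[y1][x1]
            out[y][x] = (1 - wy) * top + wy * bot
    return out

# A uniform flow of (+1, 0) shifts the image content one pixel to the left:
img = [[1.0, 2.0, 3.0],
       [4.0, 5.0, 6.0]]
flow = [[(1.0, 0.0)] * 3 for _ in range(2)]
```

In the codec, the decoded residual is then added elementwise to the warped reference to form the reconstruction.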

Figure 1: Base architecture of our video compression network. The I-frame is compressed using a stand-alone image compression autoencoder. The subsequent P-frames are compressed with the previous reconstructed frame as an additional input and decoded with optical flow based motion compensation plus a residual.

Here an autoencoder based image compression network, termed the I-frame network, is used to compress and reconstruct the I-frame independently of any other frames. Subsequent P-frames are processed with a separate autoencoder, termed the P-frame network, that takes both the current input frame x_t and the previous reconstructed frame x̂_{t−1} as encoder inputs and produces the optical flow tensor f_t and the residual r_t as decoder outputs. The architecture of our P-frame network's encoder and decoder is the same as that of the I-frame network. Two separate prior models are used for the entropy coding of the discrete latents in the I-frame and P-frame networks. The frame is eventually reconstructed as x̂_t = warp(x̂_{t−1}, f_t) + r_t.

For the P-frame network, the latent z_t at each time step contains information about the flow f_t and the residual r_t. One source of inefficiency, then, is that the current time step's flow and residual may still exhibit temporal correlation with their past values, which is not exploited. This issue remains even if we consider multiple frames as references.

To explicitly equip the network with the capability to utilize the redundancy with respect to f_{t−1} and r_{t−1}, we would need to expose them as additional inputs, besides x̂_{t−1}, to both the encoder and the decoder at time step t. In this case the latents would only need to carry information regarding the incremental difference between the flow field and residual of two consecutive steps, i.e. f_t − f_{t−1} and r_t − r_{t−1}, which would lead to a higher degree of compression. We could follow the same principle to utilize even higher-order redundancies, but it inevitably leads to a more complicated architecture design.

2.3.2 Feedback recurrent module

As we are trying to utilize higher-order redundancy, we need to provide both the encoder and the decoder with a more suitable decoder history context. This observation motivates the use of a recurrent neural network that accumulates and summarizes relevant information received previously by the decoder, and a decoder-to-encoder feedback connection that makes the recurrent state available at the encoder [45] – see Fig. 2(a). We refer to the added component as the feedback recurrent module.

Figure 2: (a) Video compression network with feedback recurrent module. The detailed network architecture for the encoder and decoder is described in Appendix 0.A. (b) Encoder equipped with MENet, an explicit optical flow estimation module. The switch between x_{t−1} and x̂_{t−1} is described at the end of Section 2.

This module can be viewed as a non-linear predictive coding scheme, where the decoder predicts the next frame based on the current latent as well as a summary of past latents. Because the recurrent state is available to both encoder and decoder, the encoder is aware of the state of knowledge of the decoder, enabling it to send only complementary information.

The concept of a feedback recurrent connection is present in several existing works. In [45], the authors refer to an autoencoder with the feedback recurrent connection as FRAE and demonstrate its effectiveness in speech compression in comparison to different recurrent architectures. In [11, 10], the focus is on progressive coding of images, and the decoder-to-encoder feedback is adopted for an iterative refinement of the image canvas. In the domain of video compression, [32] proposes the concept of state propagation, where the network maintains a state tensor that is available at both the encoder and the decoder and is updated by summing it with the decoder output at each time step (Fig. 5 in [32]). In our network architecture, we propose the use of generic convolutional recurrent modules such as Conv-GRU [34] for the modeling of history information that is relevant for the prediction of the next frame. In Appendix 0.E, we show the necessity of such a feedback connection from an information-theoretic point of view.
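The control flow of the feedback recurrent loop can be sketched with scalar stand-ins for the networks. The real encoder, decoder and Conv-GRU operate on feature tensors; the names and toy functions below are purely illustrative.

```python
def compress_sequence(frames, encode, decode, h0=0.0):
    """One run of P-frames: the decoder's recurrent state h is fed back to the
    encoder, so the encoder only sends what the decoder does not already know."""
    h, x_prev = h0, None
    latents, recons = [], []
    for x in frames:
        z = encode(x, x_prev, h)         # encoder sees the decoder state h
        x_hat, h = decode(z, x_prev, h)  # decoder updates its recurrent state
        latents.append(z)
        recons.append(x_hat)
        x_prev = x_hat                   # feed back the *reconstruction*
    return latents, recons

# Toy stand-ins: the "state" is simply the last reconstruction, and the latent
# is the increment the decoder cannot predict on its own.
encode = lambda x, x_prev, h: x - h
decode = lambda z, x_prev, h: (z + h, z + h)
```

With these stand-ins, a steadily changing input (1, 2, 3) yields a constant latent stream (1, 1, 1): only the unpredictable increment is transmitted, which is the intuition behind the feedback connection.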

2.3.3 Latent quantization and prior model

Given that the encoder needs to output discrete values, the quantization method applied needs to allow gradient based optimization. Two popular approaches are: (1) define a learnable codebook, and use nearest neighbor quantization in the forward pass and a differentiable softmax in the backward pass [28, 12], or (2) add uniform noise to the continuous valued encoder output during training and use hard quantization at evaluation [4, 5, 13]. For the second approach, the prior distribution is often characterized by a learnable monotonic CDF function F, where the probability mass of a single latent value z is evaluated as F(z + 1/2) − F(z − 1/2). In this work we adopt the first approach.
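Both approaches can be illustrated in a few lines of Python; the logistic CDF below is merely a stand-in for the learnable monotonic CDF, and the codebook is a made-up example:

```python
import math

def nearest_code(value, codebook):
    """Approach (1): hard nearest-neighbor quantization against a codebook.
    Training would pair this with a differentiable softmax in the backward pass."""
    return min(codebook, key=lambda c: abs(c - value))

def prob_mass(z, cdf):
    """Approach (2): probability mass of an integer latent under a monotonic CDF,
    matching the width-1 bins induced by uniform-noise training."""
    return cdf(z + 0.5) - cdf(z - 0.5)

# Stand-in for a learnable CDF:
logistic_cdf = lambda x: 1.0 / (1.0 + math.exp(-x))
```

Because the bins tile the real line, the masses of all integer latents sum to one, so they form a valid prior for entropy coding.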

As illustrated in Fig. 2(a), a time-independent prior model is used. In other words, there is no conditioning of the prior model on the latents from previous time steps, and p^P in Eq. (2) is factorized as p^P(z^P_{2:T}) = ∏_t p^P(z^P_t). In Section 3 we show that such a factorization does not limit the capability of our model to capture any empirical data distribution.

A gated PixelCNN [38] is used to model p^P(z^P_t), with the detailed structure of the model built based on the description in Fig. 10 of [12]. The prior model for the I-frame network is a separate network with the same architecture.

2.3.4 Explicit optical flow estimation module

In the architecture shown in Fig. 2(a), the optical flow estimate tensor f_t is produced explicitly at the end of the decoder. When trained with the loss function in Eq. (1), the learning of the optical flow estimation is only incentivized implicitly, via how much that mechanism helps with the rate-effective reconstruction of the original frames. In this setting we empirically found that the decoder was almost solely relying on the residuals to form the frame reconstructions. This observation is consistent with those of other works [27, 32]. This problem is usually addressed by using pre-training or more explicit supervision for the optical flow estimation task. We encourage reliance on both optical flow and residuals and facilitate optical flow estimation by (i) equipping the encoder with a U-Net [33] sub-network called the Motion Estimation Network (MENet), and (ii) introducing additional loss terms that explicitly encourage optical flow estimation.

MENet estimates the optical flow between the current input frame x_t and the previously reconstructed frame x̂_{t−1}. Without MENet, the encoder is provided with x_t and x̂_{t−1} and is supposed to estimate the optical flow and the residuals and encode them, all in a single network. When MENet is attached to the encoder, however, it provides the encoder directly with the estimated flow and the previous reconstruction warped by the estimated flow. In this scenario, all of the encoder capacity is dedicated to the encoding task. A schematic view of this explicit architecture is shown in Fig. 2(b) and the implementation details are available in Appendix 0.A.

When MENet is integrated with the architecture in Fig. 2(a), the optical flow is first estimated by MENet, denoted f^ME_t, and later reconstructed by the decoder, denoted f_t. In order to alleviate the problem with optical flow learning using the rate-distortion loss only, we incentivize the learning of f^ME_t and f_t via two additional dedicated loss terms,

L_ME = d(x_t, warp(x̂_{t−1}, f^ME_t))   and   L_dec = d(x_t, warp(x̂_{t−1}, f_t)).

Hence the total loss we start the training with is L_total = L_RD + L_ME + L_dec, where L_RD is the loss function as per Eq. (1). We found that it is sufficient to apply the losses L_ME and L_dec only at the beginning of training, for the first few thousand iterations; we then revert to using just L_RD and let the estimated flow be adapted to the main task, which has been shown to improve results across a variety of tasks utilizing optical flow estimation [44].

The loss terms L_ME and L_dec are defined as the distortion between x_t and the warped version of x̂_{t−1}. However, early in the training, x̂_{t−1} is inaccurate and, as a result, such a distortion is not a good learning signal for MENet or the autoencoder. To alleviate this problem, early in the training we use x_{t−1} instead of x̂_{t−1} in both L_ME and L_dec. This transition is depicted in Fig. 2(b) with a switch. It is worth mentioning that the tensor fed into the encoder is always a warped previous reconstruction, never a warped previous ground-truth frame.
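The resulting training schedule can be summarized in a small helper. The iteration thresholds below are the ones later used in Section 4.1, and the names (L_RD, L_ME, L_dec, x_prev, x_hat_prev) are illustrative labels, not actual identifiers from the implementation.

```python
def schedule(iteration, flow_loss_until=20_000, gt_input_until=15_000):
    """Return which loss terms are active at this iteration, and which previous
    frame the flow losses warp: the ground-truth x_{t-1} early on, the previous
    reconstruction x_hat_{t-1} later."""
    active = ["L_RD"]
    if iteration < flow_loss_until:
        active += ["L_ME", "L_dec"]  # explicit flow supervision, warm-up only
    warp_target = "x_prev" if iteration < gt_input_until else "x_hat_prev"
    return active, warp_target
```

After the warm-up, only the rate-distortion loss remains and the flow is free to drift away from ground-truth optical flow toward whatever maximizes compression performance.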

3 Graphical Model Analysis

Figure 3: Different combinations of sequential latent variable models and inference models proposed in the literature. (a) without dashed lines is adopted in Lu et al. [27]; (a) with dashed lines is adopted in Liu et al. [26]. (b) describes both Rippel et al. [32] and our approach. (c) is used in Han et al. [13]. (d) is used in Habibian et al. [12]. The blue box around all frame nodes means that they all come from a single joint distribution and that we make no assumptions on the conditional structure of that distribution. See Appendix 0.F for detailed reasoning about these graphical models.

The recent success of autoencoder based learned image compression [4, 5, 28, 31, 36] (see [18] for an overview) has demonstrated neural networks' capability to model spatial correlations from data and the effectiveness of end-to-end joint rate-distortion training. It has motivated the use of autoencoder based solutions to further capture the temporal correlations in the application of video compression, and there have since been many designs for learned video compression algorithms [45, 32, 27, 26, 12, 13]. These designs differ in many detailed aspects: the technique used for latent quantization; the specific instantiation of the encoder and decoder architecture; the use of atypical, often domain-specific operations beyond convolution, such as Generalized Divisive Normalization (GDN) [3, 4] or the Non-Local Attention Module (NLAM) [25]. Here we leave out the comparison of those aspects and focus on one key design element: the graphical model structure of the underlying sequential latent variable model [23].

As we have briefly discussed in Section 2.1, the rate-distortion training objective has strong connections to amortized variational inference. In the special case of β = 1, the optimization problem is equivalent to the minimization of D_KL(q(x_{1:T}) q(z_{1:T}|x_{1:T}) ‖ p(x_{1:T}|z_{1:T}) p(z_{1:T})) [2], where p(x_{1:T}|z_{1:T}) p(z_{1:T}) describes a sequential generative process and q(z_{1:T}|x_{1:T}) can be viewed as a sequential inference process. Here p(x_{1:T}|z_{1:T}) denotes the decoder distribution induced by our distortion metric (see Section 4.2 of [12]), q(x_{1:T}) denotes the empirical distribution of the training data, and q(z_{1:T}|x_{1:T}) denotes the encoder distribution induced on the latents, in our case a Dirac-δ distribution because of the deterministic mapping from the encoder inputs to the value of the latents.

To favor the minimization of the KL divergence, we want to design p(z_{1:T}), p(x_{1:T}|z_{1:T}) and q(z_{1:T}|x_{1:T}) to be flexible enough so that (i) the marginalized data distribution p(x_{1:T}) is able to capture complex temporal dependencies in the data, and (ii) the inference process encodes all the dependencies embedded in the generative model [41]. In Fig. 3 we highlight four combinations of sequential latent variable models and inference models proposed in the literature.

Fig. 3(a) without the dashed lines describes the scheme proposed in Lu et al. [27] (and the low latency mode of HEVC/AVC), where the prior model is fully factorized over time and each decoded frame is a function of the previous decoded frame and the current latent code. There are two major limitations of this approach: firstly, the marginalized sequential distribution is confined to follow the Markov property p(x_t | x_{<t}) = p(x_t | x_{t−1}); secondly, it assumes that the differences in subsequent frames (e.g. optical flow and residual) are independent across time steps. To overcome these two limitations, Liu et al. [26] propose an auto-regressive prior model through the use of a ConvLSTM over time, which corresponds to Fig. 3(a) including the dashed lines. In this case, the marginal p(x_{1:T}) is fully flexible in the sense that it does not make any conditional independence assumptions between the frames; see Appendix 0.F.1 for details.

Fig. 3(b) describes another way to construct a fully flexible marginalized sequential data distribution: by having the decoded frame depend on all previously transmitted latent codes. Both Rippel et al. [32] and our approach fall into this category by introducing a recurrent state in the decoder which summarizes information from the previously transmitted latent codes and processed features. Rippel et al. [32] use a multi-scale state with a complex state propagation mechanism, whereas we use a convolutional recurrent module (ConvGRU) [34]. In this case, p(x_{1:T}) is also fully flexible; details in Appendix 0.F.1.

The latent variable models in Fig. 3(c) and (d) break the causality constraint, i.e. the encoding of a GoP can only start when all of its frames are available. In (c), a global latent is used to capture time-invariant features, an approach adopted in Han et al. [13]. In (d), the graphical model is fully connected between each frame and latent across different time steps, which describes the scheme applied in Habibian et al. [12].

One advantage of our approach (as well as that of Rippel et al. [32]), which corresponds to Fig. 3(b), is that the prior model can be fully factorized across time, p(z_{1:T}) = ∏_t p(z_t), without compromising the flexibility of the marginal distribution p(x_{1:T}). A factorized prior allows parallel entropy decoding of the latent codes across time steps (but potentially still sequential across the dimensions of z_t within each time step). On the inference side, in Appendix 0.E we show that any connection from z_{<t} to z_t, which could be modeled by adding a recurrent component on the encoder side, is not necessary.

4 Experiments

4.1 Training setup


Our training dataset was based on Kinetics400 [21]. We selected a subset of the high-quality videos in Kinetics (width and height over 720 pixels), took the first 16 frames of each video, and downscaled them (to remove compression artifacts) such that the smaller of the dimensions would be 256 pixels. At train time, random spatial crops were used. For validation, we used videos from HDgreetings, NTIA/ITS and SVT.

We used UVG [37] as our test dataset. UVG contains seven 1080p video sequences with a total of 3900 frames. The raw UVG 1080p videos are available in YUV420 8-bit format, and we converted them to RGB using OpenCV [6] for evaluation.


All implementations were done in the PyTorch framework [29]. The distortion measure MS-SSIM [40] was normalized per frame and the rate term was normalized per pixel (i.e., bits per pixel) before they were combined into the loss of Eq. (1). We trained our network with five values of β to obtain a rate-distortion curve. The I-frame network and P-frame network were trained jointly from scratch, without pretraining of any part of the network.

A GoP size of 8 was used during training, i.e. the recurrent P-frame network was unrolled 7 times. We used a batch size of 16, for a total of 250k gradient updates (iterations), corresponding to 20 epochs. When training with the flow enhancement network MENet, we applied L_ME and L_dec until 20k iterations, and the transition of the warp input from x_{t−1} to x̂_{t−1} was done at iteration 15k. All models were trained using the Adam optimizer [22]; the learning rate was annealed by a factor of 0.8 every 100k iterations. Performance on the validation set was evaluated every epoch, and the results presented below are for the model checkpoints performing best on the validation set. See Appendix 0.A for details of the architecture and training.

4.2 Comparison with other methods

In this section, we compare the performance of our method with existing learned approaches as well as popular classic codecs.

Comparison with learning-based methods

In Fig. 4(a), we compare the performance of our solution with several learned approaches on the UVG 1080p dataset in terms of MS-SSIM versus bitrate, where MS-SSIM was first averaged per video and then averaged across the different videos. Our method outperforms all the compared ones in MS-SSIM across the evaluated range of bitrates. Lu et al. [27] and Liu et al. [26] are both causal solutions and were evaluated with a GoP size of 12. Habibian et al. [12] and Wu et al. are non-causal solutions, and their results are reported for GoP sizes of 8 and 12, respectively. Our results are evaluated at a GoP size of 8, the same as in training.

Figure 4: (a) Comparison to state-of-the-art learned methods. (b) Comparison with classic codecs. Both comparisons are on the UVG 1080p dataset.

Comparison with traditional video codecs

We compared our method with the most popular standard codecs, i.e. H.265 [35] and H.264 [42], on the UVG 1080p dataset. The results were generated with three different sets of settings (more details are provided in Appendix 0.C):

  • ffmpeg [8] implementation of H.264 and H.265 in low latency mode with GoP size of 12.

  • ffmpeg [8] implementation of H.264 and H.265 in default mode.

  • HM [16] implementation of H.265 in low latency mode with GoP size of 12.

The low latency mode was enforced for H.264 and H.265 to make the problem setting the same as for our causal model. A GoP size of 12, on the other hand, although different from our GoP size of 8, was consistent with the settings reported in other papers and gave H.264 and H.265 an advantage, as they perform better with larger GoP sizes.

The comparisons are shown in Fig. 4(b) in terms of MS-SSIM versus bitrate, where MS-SSIM was calculated in the RGB domain. As can be seen from this figure, our model outperforms the HM implementation of H.265 and the ffmpeg implementations of H.265 and H.264, in both low latency and default settings, in the range of bitrates of interest for practitioners at 1080p resolution; e.g. Netflix uses a range of about 0.09–0.13 bpp for their 1080p resolution video streaming [17].

Qualitative comparison

In Fig. 5 we compare the visual quality of our lowest-rate model with the ffmpeg implementation of H.265 in low latency mode at a similar rate. We can see that our result is free from the blocking artifacts usually present in H.265 at low bitrates – see around the edges of the fingers – and preserves more detailed texture – see the structure of the hand veins and the stripes on the coat. In Appendix 0.D, we provide more examples with detailed error and bitrate maps.

Figure 5: An illustration of the qualitative characteristics of our architecture versus H.265 (ffmpeg) at a comparable bitrate. (a) and (b) are the original frame and its closeup; (c) and (d) are closeups of the frames reconstructed by our method and H.265. To make the comparison fair, we used HEVC with a fixed GoP setting (min-keyint=8:scenecut=0) at a similar rate, so both methods are at equal bpp and both show the 4th P-frame of a GoP of 8. Frame 229 of the Tango video from Netflix Tango in Netflix El Fuente; see Appendix 0.B for license information.
Figure 6: An example of the estimated optical flow; left to right: the two consecutive frames used for estimation, the FlowNet2 result, and our decoder's flow output. Note that the flow produced by our model is decoded from a compressed latent and is trained to maximize overall compression performance rather than warping accuracy. See Appendix 0.B for license information.
Figure 7: Average MS-SSIM and rate as a function of frame-index in a GoP of 8.

As noted in Section 2.3.4, the dedicated optical flow enforcement loss terms were removed at a certain point and the training continued using the rate-distortion loss only. As a result, our network learned a form of optical flow that contributes maximally to rate-distortion performance and does not necessarily follow the ground-truth optical flow (were such available). Fig. 6 compares an instance of the optical flow learned by our network with the corresponding optical flow generated using a pre-trained FlowNet2 network [19]. The optical flow reconstructed by the decoder has a larger proportion of values set to zero than the flow estimated by FlowNet2 – this is consistent with the observations of Lu et al. [27] in their Fig. 7, where they argue that it allows the flow to be more compressible.

4.3 Ablation study

Figure 8: Ablation study on the UVG dataset. All models were evaluated with a GoP size of 8. Model (a) is obtained by removing the decoder-to-encoder feedback connection in Fig. 2(a). Model (b) removes the feedback recurrent module. Model (c) removes the explicit optical flow estimation module in Fig. 2(b).

To understand the effectiveness of different components in our design, in Fig. 8 we compare the performance after removing (a) decoder to encoder recurrent feedback, or (b) the feedback recurrent module in Fig. 2(a), or (c) the explicit optical flow estimation module. We focus the comparison on low rate regime where the difference is more noticeable.

Empirically we find the explicit optical flow estimation component to be quite essential – without it, the optical flow output from the decoder has very small magnitude and is barely useful, resulting in a consistent loss of at least 0.002 in MS-SSIM for rates below 0.16 bpp. In comparison, the loss in performance after removing either the decoder-to-encoder feedback or the recurrent connection altogether is minor and only shows up in the very low rate region below 0.1 bpp.

Similar comparisons have been done in the existing literature. Rippel et al. [32] report a large drop in performance when removing the learned state in their model (Fig. 9 of [32]; about a 0.005 drop in MS-SSIM at 0.05 bpp), while Liu et al. [26], which utilizes a recurrent prior to exploit longer-range temporal redundancies, report a relatively smaller degradation after removing their recurrent module (about a 0.002 drop in MS-SSIM in Fig. 9 of [26]).

We suspect that the lack of gain we observe from the feedback recurrent module is due to both insufficient recurrent module capacity and receptive field, and plan to investigate further as one of our future studies.

4.4 Empirical observations

4.4.1 Temporal consistency

We found, upon visual inspection, that for our low bitrate models, the decoded video displays a minor yet noticeable flickering artifact occurring at the GoP boundary, which we attribute to two factors.

Firstly, as shown in Fig. 7, there is a gradual degradation of P-frame quality as the frame moves further away from the I-frame. It is also apparent from Fig. 7 that the rate allocation tends to decline as the P-frame network unrolls, which compounds the degradation of P-frame quality. This shows that end-to-end training of a frame-averaged rate-distortion loss does not favor temporal consistency of frame quality, which makes it difficult to apply the model with larger GoP sizes and motivates the use of temporal-consistency-related losses [9, 24].

The second part of the problem is due to the nature of the MS-SSIM metric and hence is not reflected in Fig. 7. This aspect is described in the following section.

4.4.2 Color shift

Another empirical observation is that the MS-SSIM metric seems to be partially invariant to uniform color shifts in flat regions; as a result, the low bitrate models we trained are prone to exhibit a slight color shift, similar to what has been reported in [46]. This color change is sometimes noticeable during the I-frame to P-frame transition and contributes to the flickering artifact. One item for future study is to find a suitable metric, or combination of metrics, that captures both texture similarity and color fidelity. See Appendix 0.G for details.

5 Conclusion

In this paper, we proposed a new autoencoder-based learned lossy video compression solution featuring a feedback recurrent module and an explicit optical flow estimation module. It achieves state-of-the-art performance among learned video compression methods and delivers comparable or better rate-distortion performance than classic codecs in low-latency mode. In future work, we plan to improve its temporal consistency, resolve the color fidelity issue, and reduce computational complexity, paving the way to a practical real-time learned video codec.


  • [1] 2019 global internet phenomena report. Note: 2020-02-28 Cited by: §1.
  • [2] A. A. Alemi, B. Poole, I. Fischer, J. V. Dillon, R. A. Saurous, and K. Murphy (2018) Fixing a Broken ELBO. In ICML, Cited by: §3.
  • [3] J. Ballé, V. Laparra, and E. P. Simoncelli (2016) Density Modeling of Images using a Generalized Normalization Transformation. In ICLR, Cited by: §3.
  • [4] J. Ballé, V. Laparra, and E. P. Simoncelli (2017) End-to-End Optimized Image Compression. In ICLR, Cited by: §2.3.3, §2.3, §3.
  • [5] J. Ballé, D. Minnen, S. Singh, S. J. Hwang, and N. Johnston (2018) Variational Image Compression with a Scale Hyperprior. In ICLR, Cited by: §2.3.3, §2.3, §3.
  • [6] G. Bradski (2000) The OpenCV Library. Dr. Dobb’s Journal of Software Tools. Cited by: §0.C.1, §4.1.
  • [7] Digital video introduction. Note: 2020-03-02 Cited by: footnote 2.
  • [8] Ffmpeg. Note: 2020-02-21 Cited by: 1st item, 2nd item.
  • [9] C. Gao, D. Gu, F. Zhang, and Y. Yu (2019) ReCoNet: Real-Time Coherent Video Style Transfer Network. In ACCV, Cited by: §4.4.1.
  • [10] K. Gregor, F. Besse, D. J. Rezende, I. Danihelka, and D. Wierstra (2016) Towards Conceptual Compression. In NIPS, Cited by: Appendix 0.E, §2.3.2.
  • [11] K. Gregor, I. Danihelka, A. Graves, D. J. Rezende, and D. Wierstra (2015) DRAW: A Recurrent Neural Network For Image Generation. In ICML, Cited by: Appendix 0.E, §2.3.2.
  • [12] A. Habibian, T. van Rozendaal, J. M. Tomczak, and T. S. Cohen (2019) Video Compression With Rate-Distortion Autoencoders. In ICCV, Cited by: Figure 9, Appendix 0.A, Appendix 0.F, §1, §1, §2.1, §2.3.3, §2.3.3, Figure 3, §3, §3, §3.
  • [13] J. Han, S. Lombardo, C. Schroers, and S. Mandt (2019) Deep Probabilistic Video Compression. In NeurIPS, Cited by: Appendix 0.F, §1, §2.1, §2.3.3, Figure 3, §3, §3.
  • [14] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep Residual Learning for Image Recognition. In CVPR, Cited by: §1.
  • [15] I. Higgins, L. Matthey, A. Pal, C. Burgess, X. Glorot, M. Botvinick, S. Mohamed, and A. Lerchner (2017) beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework. In ICLR, Cited by: §2.1.
  • [16] High Efficiency Video Coding (HEVC). Note: 2020-02-21 Cited by: 3rd item.
  • [17] How much data does Netflix use?. Note: 2020-02-28 Cited by: §1, §4.2.
  • [18] Y. Hu, W. Yang, Z. Ma, and J. Liu (2020) Learning end-to-end lossy image compression: a benchmark. arXiv:2002.03711. Cited by: §3.
  • [19] E. Ilg, N. Mayer, T. Saikia, M. Keuper, A. Dosovitskiy, and T. Brox (2017) FlowNet 2.0: Evolution of Optical Flow Estimation with Deep Networks. In CVPR, Cited by: §4.2.
  • [20] M. Jaderberg, K. Simonyan, A. Zisserman, and K. Kavukcuoglu (2015) Spatial Transformer Networks. In NIPS, Cited by: §2.3.1.
  • [21] W. Kay, J. Carreira, K. Simonyan, B. Zhang, C. Hillier, S. Vijayanarasimhan, F. Viola, T. Green, T. Back, P. Natsev, M. Suleyman, and A. Zisserman (2017) The Kinetics Human Action Video Dataset. arXiv:1705.06950. Cited by: §4.1.
  • [22] D. Kingma and J. Ba (2015) Adam: A Method for Stochastic Optimization. In ICLR, Cited by: §4.1.
  • [23] D. Koller and N. Friedman (2009) Probabilistic graphical models: principles and techniques. MIT Press. External Links: ISBN 9780262013192, LCCN 2009008615 Cited by: §0.F.1, §3.
  • [24] W. Lai, J. Huang, O. Wang, E. Shechtman, E. Yumer, and M. Yang (2018) Learning Blind Video Temporal Consistency. In ECCV, Cited by: §4.4.1.
  • [25] H. Liu, T. Chen, P. Guo, Q. Shen, X. Cao, Y. Wang, and Z. Ma (2019) Non-local Attention Optimized Deep Image Compression. arXiv:1904.09757. Cited by: §3.
  • [26] H. Liu, H. Shen, L. Huang, M. Lu, T. Chen, and Z. Ma (2020) Learned Video Compression via Joint Spatial-Temporal Correlation Exploration. In AAAI, Cited by: §0.F.1, Appendix 0.F, §1, Figure 3, §3, §3, §4.2, §4.3.
  • [27] G. Lu, W. Ouyang, D. Xu, X. Zhang, C. Cai, and Z. Gao (2019) DVC: An End-to-End Deep Video Compression Framework. In CVPR, Cited by: §1, §2.3.4, Figure 3, §3, §3, §4.2, §4.2.
  • [28] F. Mentzer, E. Agustsson, M. Tschannen, R. Timofte, and L. Van Gool (2018) Conditional Probability Models for Deep Image Compression. In CVPR, Cited by: Figure 9, §2.3.3, §2.3, §3.
  • [29] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala (2019) PyTorch: An Imperative Style, High-Performance Deep Learning Library. In NeurIPS, Cited by: §4.1.
  • [30] W. A. Pearlman and A. Said (2011) Digital Signal Compression: Principles and Practice. Cambridge University Press. Cited by: footnote 1.
  • [31] O. Rippel and L. Bourdev (2017) Real-time adaptive image compression. In ICML, Cited by: §2.3, §3.
  • [32] O. Rippel, S. Nair, C. Lew, S. Branson, A. G. Anderson, and L. Bourdev (2019) Learned Video Compression. In ICCV, Cited by: Appendix 0.F, §1, §2.3.2, §2.3.4, Figure 3, §3, §3, §3, §4.3.
  • [33] O. Ronneberger, P. Fischer, and T. Brox (2015) U-Net: Convolutional Networks for Biomedical Image Segmentation. In MICCAI, Cited by: §2.3.4.
  • [34] X. Shi, Z. Chen, H. Wang, D. Yeung, W. Wong, and W. Woo (2015) Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting. In NIPS, Cited by: §1, §2.3.2, §3.
  • [35] G. J. Sullivan, J. R. Ohm, W. J. Han, and T. Wiegand (2012-12) Overview of the High Efficiency Video Coding (HEVC) Standard. IEEE Trans. Circuits Syst. Video Technol. 22 (12), pp. 1649–1668. External Links: ISSN 1051-8215 Cited by: §4.2, footnote 2.
  • [36] L. Theis, W. Shi, A. Cunningham, and F. Huszár (2017) Lossy Image Compression with Compressive Autoencoders. In ICLR, Cited by: §2.1, §2.3, §3.
  • [37] Ultra Video Group test sequences. Note: 2020-02-21 Cited by: §1, §4.1.
  • [38] A. van den Oord, N. Kalchbrenner, L. Espeholt, K. Kavukcuoglu, O. Vinyals, and A. Graves (2016) Conditional Image Generation with PixelCNN Decoders. In NIPS, Cited by: §1, §2.3.3.
  • [39] Z. Wang and A. C. Bovik (2009-01) Mean Squared Error: Love it or leave it? A new look at Signal Fidelity Measures. In IEEE Signal Processing Magazine, Vol. 26, pp. 98–117. External Links: ISSN 1558-0792 Cited by: §0.G.1.
  • [40] Z. Wang, A. C. Bovik, H. R. Sheikh, E. P. Simoncelli, et al. (2004) Image quality assessment: from error visibility to structural similarity. IEEE Trans. on Image Processing 13 (4), pp. 600–612. Cited by: §0.G.1, §1, §2.1, §4.1.
  • [41] S. Webb, A. Goliński, R. Zinkov, S. N, T. Rainforth, Y. W. Teh, and F. Wood (2018) Faithful Inversion of Generative Models for Effective Amortized Inference. In NeurIPS, Cited by: §3.
  • [42] T. Wiegand, G. J. Sullivan, G. Bjontegaard, and A. Luthra (2003-07) Overview of the H.264/AVC video coding standard. IEEE Transactions on Circuits and Systems for Video Technology 13 (7), pp. 560–576. External Links: ISSN 1558-2205 Cited by: §4.2.
  • [43] C. Wu, N. Singhal, and P. Krähenbühl (2018) Video Compression through Image Interpolation. In ECCV, Cited by: Appendix 0.F, §1, footnote 2.
  • [44] T. Xue, B. Chen, J. Wu, D. Wei, and W. T. Freeman (2019-02) Video Enhancement with Task-Oriented Flow. IJCV 127 (8). Cited by: §2.3.4.
  • [45] Y. Yang, G. Sautière, J. J. Ryu, and T. S. Cohen (2019) Feedback Recurrent AutoEncoder. In ICASSP, Cited by: §1, §2.3.2, §2.3.2, §3.
  • [46] H. Zhao, O. Gallo, I. Frosio, and J. Kautz (2017) Loss Functions for Image Restoration With Neural Networks. In IEEE Transactions on Computational Imaging, Cited by: §0.G.1, §4.4.2.

Appendix 0.A Model and training details

The detailed architecture of the encoder and decoder from Fig. 2(a) is illustrated in Fig. 9. For prior modeling, we use the same network architecture as described in Section B.2 of [12]. The detailed architecture of the optical flow estimation network from Fig. 2(b) is illustrated in Fig. 10.

Figure 9: Architecture of our P-frame network (the flow-enhanced module is illustrated separately). Tconv denotes transposed convolution. For each (transposed) convolutional layer, the annotations denote the number of output channels, the kernel size, and the stride. The architecture of the I-frame network differs in that the final two layers of the decoder are removed, one layer's channel setting is changed, and the current frame is the only input to the encoder. Note that a large part of this architecture is inherited from Fig. 2 in [28] and Fig. 9 in [12].
Figure 10: Architecture of our optical flow estimation network.

As can be seen from Fig. 9, we use BatchNorm throughout the encoder and decoder architecture to help stabilize training. However, since the previous decoded frame is fed back into the encoder at the next time step, during training it is not possible to apply the same normalization across all unrolled time steps, which would lead to an inconsistency between training and evaluation. To resolve this, we switch BatchNorm to evaluation mode at training iteration 40k, after which its parameters are no longer updated.
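For concreteness, this freezing step can be sketched in PyTorch as follows. This is a minimal illustration: the module layout, helper name, and demo loop are stand-ins for our actual training code (where the threshold is 40k iterations, not 3).

```python
import torch
import torch.nn as nn

def freeze_batchnorm(model: nn.Module) -> None:
    """Switch every BatchNorm layer to evaluation mode and stop updating
    both its running statistics and its affine parameters."""
    for module in model.modules():
        if isinstance(module, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            module.eval()                    # use frozen running stats from now on
            for p in module.parameters():    # freeze gamma/beta as well
                p.requires_grad = False

# Hypothetical training loop: after the threshold iteration, BatchNorm behaves
# identically in training and evaluation, so the decoded frame fed back into
# the encoder is normalized consistently across all unrolled time steps.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.BatchNorm2d(8), nn.ReLU())
for iteration in range(5):
    if iteration == 3:  # 40_000 in the schedule described above
        freeze_batchnorm(model)
    # NOTE: if the loop calls model.train() each step, the freeze must be
    # re-applied after every such call, since train() flips BatchNorm back.
    y = model(torch.randn(2, 3, 16, 16))
```
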

Appendix 0.B Images used in figures

The video used in Figures 5, 6, 12, 13, 15, and 16 is produced by Netflix under the CC BY-NC-ND 4.0 license.

Appendix 0.C ffmpeg and HM experiments

We generated the H.264 and H.265 baselines using the ffmpeg and HM software tools. Both HM and ffmpeg received the UVG videos in the native YUV-1080p-8bit format as input. The command that we used to run ffmpeg in low-latency mode is as follows:

ffmpeg -y -pix_fmt yuv420p -s [W]x[H] -r [FR] -i [IN].yuv
-c:v libx[ENC] -b:v [RATE]M -maxrate [RATE]M -tune zerolatency
-x[ENC]-params "keyint=[GOP]:min-keyint=[GOP]:verbose=1" [OUT].mkv

and the command that we used to run ffmpeg in default settings is as follows:

ffmpeg -y -pix_fmt yuv420p -s [W]x[H] -r [FR] -i [IN].yuv
-c:v libx[ENC] -b:v [RATE]M -maxrate [RATE]M
-x[ENC]-params "verbose=1" [OUT].mkv

where the values in brackets represent the encoder parameters as follows: W and H are the frame dimensions, FR is the frame rate (120), ENC is the encoder type (264 or 265), GOP is the GoP size (12 for the low-latency settings), IN and OUT are the input and output filenames, respectively, and RATE controls the target bitrate in megabits per second (the rates we tried translate to the bits-per-pixel range reported for UVG 1080p at 120 fps). It is worth mentioning that we did not set the -preset flag in ffmpeg, which controls the coding performance. As a result, ffmpeg used its default value, medium, unlike some existing papers that employed ffmpeg with the fast or superfast presets, which leads to poor ffmpeg performance.

The command we used to run HM is as follows:

TAppEncoderStatic -c [CONFIG].cfg -i INPUT.yuv -wdt [W] -hgt [H]
-fr [FR] -f [LEN] -o OUTPUT.yuv -b -ip [GOP] -q [QP] > [LOG].log

where CONFIG is the HM configuration file (we used LowDelayP.cfg), LEN is the number of frames in the video sequence (600 or 300 for UVG videos), LOG is the log file from which we read the bit-rate values, and QP controls the quality of the encoded video, which leads to the different bit rates (we swept over a range of QP values in this work).

MS-SSIM for all the experiments was calculated in the RGB domain at the video level. Bit rates for ffmpeg were calculated by reading the compressed file size, and for HM by parsing the HM log file. Both values were then normalized by the frame dimensions and the frame rate.
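The normalization just described amounts to a few lines; the helper names below are our own, not part of any released tooling.

```python
import os

def bits_per_pixel(path: str, width: int, height: int, num_frames: int) -> float:
    """Average bpp of a compressed bitstream: total bits over total pixels."""
    total_bits = os.path.getsize(path) * 8
    return total_bits / (width * height * num_frames)

def bpp_from_bitrate(bitrate_mbps: float, width: int, height: int, fps: float) -> float:
    """Convert a target bitrate in Mb/s to bpp, e.g. for UVG 1080p at 120 fps."""
    return bitrate_mbps * 1_000_000 / (width * height * fps)
```

For example, a 25 Mb/s stream at 1920x1080 and 120 fps corresponds to roughly 0.1 bpp.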

0.C.1 YUV to RGB conversion inconsistencies

Since the UVG raw frames are available in YUV format and our model works in the RGB domain only, we had to convert the UVG frames from the YUV to the RGB color space. We considered two methods of converting the UVG-1080p-8bit videos to RGB: i) use ffmpeg to read the YUV format and directly save the frames in RGB format; ii) read the frames in YUV and convert them to RGB via the COLOR_YUV2BGR_I420 functionality provided by the OpenCV [6] package. We also tried a third scenario: iii) use ffmpeg to read UVG-4K-10bit in YUV format and save the RGB frames. Fig. 11 shows the performance of our model on the three different versions of UVG RGB. As can be seen from this figure, there is an inconsistency in our model's performance across the different UVG versions. A potential cause of this behavior is the interpolation technique employed in the color conversion: in the YUV420 format the U and V channels are subsampled and need to be upsampled before the YUV to RGB conversion. The results reported in this paper are based on the OpenCV color conversion.
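For illustration, a conversion of this kind can be sketched without OpenCV. The snippet below is a minimal I420-to-RGB conversion using full-range BT.601 coefficients and nearest-neighbor chroma upsampling; the function name and the choice of upsampling filter are our own, and the filter choice is exactly the kind of detail that can cause the inconsistencies discussed above.

```python
import numpy as np

def i420_to_rgb(yuv: np.ndarray, w: int, h: int) -> np.ndarray:
    """Convert one I420 (YUV 4:2:0) frame, stored as a flat uint8 buffer of
    length w*h*3/2, to RGB with full-range BT.601 coefficients.

    The chroma planes are quarter resolution, so they must be upsampled
    before the matrix conversion; here we use nearest-neighbor repetition,
    but e.g. bilinear filtering would give slightly different RGB values."""
    y = yuv[: w * h].reshape(h, w).astype(np.float32)
    u = yuv[w * h : w * h + w * h // 4].reshape(h // 2, w // 2).astype(np.float32)
    v = yuv[w * h + w * h // 4 :].reshape(h // 2, w // 2).astype(np.float32)

    # nearest-neighbor chroma upsampling back to full resolution
    u = u.repeat(2, axis=0).repeat(2, axis=1) - 128.0
    v = v.repeat(2, axis=0).repeat(2, axis=1) - 128.0

    r = y + 1.402 * v
    g = y - 0.344136 * u - 0.714136 * v
    b = y + 1.772 * u
    return np.clip(np.stack([r, g, b], axis=-1), 0, 255).astype(np.uint8)
```

A mid-gray I420 frame (Y = U = V = 128) maps to mid-gray RGB, which is a quick sanity check of the coefficients.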

Figure 11: Effect of conversion method on model performance on UVG dataset.

Appendix 0.D Qualitative examples

We provide in this section more qualitative examples for our method in Fig. 12 and for H.265 in Fig. 13. Notably, we overlay the MS-SSIM and MSE error maps on the original frame to provide an indication of the distortion characteristics of each algorithm. In addition, in Fig. 12 we show BPP maps, since we have access to the entropy of the corresponding latents through the I-frame and P-frame prior models. We average the entropy across channels, then use bilinear upsampling to bring the BPP map back to the original image resolution.
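The BPP-map construction just described can be sketched as follows. This is a minimal NumPy illustration: the function names, the per-element code-length input, and the final calibration to the true average bpp are our own assumptions.

```python
import numpy as np

def bilinear_resize(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Bilinearly resample a 2-D array to (out_h, out_w)."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

def bpp_map(neg_log2_lik: np.ndarray, frame_h: int, frame_w: int) -> np.ndarray:
    """neg_log2_lik: per-element code-length estimate -log2 p(z), shape (C, h, w).
    Average across channels, upsample to frame resolution, and calibrate so the
    map's mean equals the frame's true average bits per pixel."""
    per_pos = neg_log2_lik.mean(axis=0)              # average entropy across channels
    up = bilinear_resize(per_pos, frame_h, frame_w)  # back to image resolution
    true_bpp = neg_log2_lik.sum() / (frame_h * frame_w)
    return up * (true_bpp / up.mean())
```

For a uniform code-length tensor, the resulting map is flat and equal to the frame's average bpp, as expected.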

By examining the BPP maps in Fig. 12 (third row), we can see that, intuitively, the model spends bits more uniformly across I-frames (first and last column), while for P-frames bits are mostly spent around the edges of moving objects.

Note that our method optimizes for MS-SSIM, while traditional video codecs like H.265 are typically evaluated with PSNR. As detailed in Appendix 0.G, MS-SSIM is insensitive to uniform color shifts and mainly focuses on spatial structure. PSNR, on the contrary, does pick up color shifts, but it tends to care less about fine spatial structure, i.e., texture and sharp edges. Our method's MS-SSIM maps (Fig. 12, 4th row) show that edges are sharply reconstructed, yet they do not fully capture some of the color shifts in the reconstruction. In particular, the man in the pink suit has a consistently dimmer pink suit in the reconstruction (compare the first and second rows); this shift does not register in the MS-SSIM maps, while MSE does pick it up. In contrast, in Fig. 13 the H.265 MSE maps (last row) show that the error is concentrated around most edges and fine textures due to blurriness, yet most colors are on target.

We observe that for our method, the MS-SSIM error maps degrade as we go from I-frame to P-frame (compare the first and second-to-last columns in Fig. 12). Similarly, the BPP map of the first I-frame shows a large rate expenditure, while subsequent P-frames display significantly less rate. Both of these qualitative results confirm the quantitative analysis of Fig. 7.

Figure 12: Our method’s qualitative evaluation on frames from Netflix Tango in Netflix El Fuente. The rows show the original frames, the reconstructed frames, then the BPP, MS-SSIM, and MSE maps overlaid on top of the grayscale original frames. Each column shows a different frame from a full cycle of GoP 8 going from I-frame to I-frame, showing every other frame.
Figure 13: H.265 qualitative evaluation on frames from Netflix Tango in Netflix El Fuente. The rows show the original frames, the reconstructed frames, then the MS-SSIM and MSE maps overlaid on top of the grayscale original frames. Each column shows a different frame from a full cycle of GoP 8 going from I-frame to I-frame, showing every other frame.

Appendix 0.E Theoretical justification of the feedback recurrent module

In this section we provide a theoretical motivation for the feedback recurrent module in our network design. Specifically, we show that, for the compression of sequential data with a causal encoder and a causal decoder, (i) it is imperative for the encoder to have a memory that summarizes previously transmitted latent codes, and (ii) it is not beneficial for the encoder to access past input information.

Below is an abstract view of a generic sequential autoencoder, where $x_t$ denotes a discrete-time random process that represents the empirical distribution of our training video data, with $z_t$ and $\hat{x}_t$ being the induced latent and reconstruction processes.

From an information-theoretic point of view, the sequential autoencoding problem can be abstracted as the maximization of the average frame-wise mutual information between $x_t$ and $\hat{x}_t$, subject to a rate constraint on $z_t$:

$$\max \;\; \underbrace{\frac{1}{T}\sum_{t=1}^{T} I(x_t; \hat{x}_t)}_{\text{frame-wise mutual information}^3} \quad \text{s.t.} \quad \underbrace{I(\hat{x}_t; z_{t+1:T} \mid z_{1:t}) = 0 \;\; \forall t}_{\text{decoder causality}}$$

³Note that this is different from $I(x_{1:T}; \hat{x}_{1:T})$. If we instead maximized $I(x_{1:T}; \hat{x}_{1:T})$, then a single output frame could capture the information of more than one input frame, which is not what we want.

The decoder causality constraint is encoded in this form since $I(\hat{x}_t; z_{t+1:T} \mid z_{1:t})$ is zero if and only if $\hat{x}_t$ is not a function of the future latents $z_{t+1:T}$. It is important to note that mutual information is invariant to bijections and thus does not reflect the perceptual quality of the reconstruction. Nevertheless, we use this formulation only to identify important data dependencies from an information-theoretic perspective and use that to guide the design of the network architecture. With a closer look, the term $I(x_t; \hat{x}_t)$ in the objective function can be rewritten as below.

$$I(x_t; \hat{x}_t) \overset{(a)}{=} I(x_t; \hat{x}_t; z_{1:t}) \overset{(b)}{=} I(x_t; z_{1:t}) - I(x_t; z_{1:t} \mid \hat{x}_t) = I(x_t; z_{1:t-1}) + I(x_t; z_t \mid z_{1:t-1}) - I(x_t; z_{1:t} \mid \hat{x}_t) \quad (3)$$

Step (a) incorporates the decoder causality constraint, step (b) comes from the definition of multi-variate mutual information⁴, and the rest uses the identity of conditional mutual information⁵.

⁴$I(x; y; z) \triangleq I(x; z) - I(x; z \mid y)$. For a Markov process $x \to y \to z$, this indicates the amount of information flowing from $x$ to $z$ through $y$. ⁵$I(x; y \mid z) = I(x; y, z) - I(x; z)$ for any $x$, $y$, $z$.
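The conditional mutual information identity invoked above can be verified numerically on a small joint distribution. The snippet below is our illustration, not part of the derivation; it checks $I(x; y \mid z) = I(x; y, z) - I(x; z)$ for a random pmf over three binary variables, computing every quantity from joint entropies.

```python
import itertools
import math
import random

def entropy(pmf):
    return -sum(p * math.log2(p) for p in pmf.values() if p > 0)

def marginal(joint, axes):
    out = {}
    for xyz, p in joint.items():
        key = tuple(xyz[a] for a in axes)
        out[key] = out.get(key, 0.0) + p
    return out

def mutual_info(joint, a_axes, b_axes):
    # I(A;B) = H(A) + H(B) - H(A,B)
    return (entropy(marginal(joint, a_axes))
            + entropy(marginal(joint, b_axes))
            - entropy(marginal(joint, a_axes + b_axes)))

def cond_mutual_info(joint, a_axes, b_axes, c_axes):
    # I(A;B|C) = H(A,C) + H(B,C) - H(A,B,C) - H(C)
    return (entropy(marginal(joint, a_axes + c_axes))
            + entropy(marginal(joint, b_axes + c_axes))
            - entropy(marginal(joint, a_axes + b_axes + c_axes))
            - entropy(marginal(joint, c_axes)))

# a random joint distribution over binary (x, y, z)
random.seed(0)
weights = [random.random() for _ in range(8)]
total = sum(weights)
joint = {xyz: w / total
         for xyz, w in zip(itertools.product((0, 1), repeat=3), weights)}

lhs = cond_mutual_info(joint, (0,), (1,), (2,))                           # I(x; y | z)
rhs = mutual_info(joint, (0,), (1, 2)) - mutual_info(joint, (0,), (2,))   # I(x; y,z) - I(x; z)
assert abs(lhs - rhs) < 1e-9
```
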

Equation (3) says that $I(x_t; \hat{x}_t)$ can be broken down into two main terms: the first, $I(x_t; z_{1:t-1})$, represents the prediction of $x_t$ from all previous latents, and the second, $I(x_t; z_t \mid z_{1:t-1})$, represents the new information in $z_t$ about $x_t$ that is not explained by $z_{1:t-1}$. While this formulation does not indicate how the optimization can be done, it tells us which variables should be incorporated in the design of $z_t$. Let us focus on $z_t$: across the objective function, $z_t$ shows up in the terms

$$\sum_{\tau \ge t} I(x_\tau; z_t \mid z_{1:t-1}).$$

It is clear, then, that the optimal $z_t$ should only be a function of $x_t$, $x_{t+1:T}$, and $z_{1:t-1}$. Combining this with the constraint that the encoder is causal, we can further limit the dependency to $x_t$ and $z_{1:t-1}$. In other words, it suffices to parameterize the encoder as $z_t = f_{\text{enc}}(x_t, z_{1:t-1})$, which attests to the two claims at the beginning of this section: (i) $z_t$ should be a function of the previous latents $z_{1:t-1}$, and (ii) it does not need to depend on the past inputs $x_{1:t-1}$.

It is with these two claims in mind that we designed the network: the decoder's recurrent state, which provides a summary of $z_{1:t-1}$, is fed back⁶ to the encoder as input at time step $t$, and there is no additional input related to $x_{1:t-1}$. ⁶Since $\hat{x}_{1:t-1}$ is a deterministic function of $z_{1:t-1}$, providing $\hat{x}_{t-1}$ as an additional encoder input is also justified.
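The resulting dependency structure, in which the encoder sees only the current frame plus a decoder state that summarizes the past latents, can be sketched as a toy PyTorch module. Dimensions and layer choices here are purely illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

class FeedbackRecurrentAE(nn.Module):
    """Toy sketch of the dependency structure z_t = f_enc(x_t, h_{t-1}),
    where the decoder's recurrent state h_{t-1} summarizes z_{1:t-1}."""

    def __init__(self, x_dim: int = 16, z_dim: int = 4, h_dim: int = 8):
        super().__init__()
        self.encoder = nn.Linear(x_dim + h_dim, z_dim)  # sees x_t and h_{t-1} only
        self.decoder_cell = nn.GRUCell(z_dim, h_dim)    # h_t = g(z_t, h_{t-1})
        self.decoder_out = nn.Linear(h_dim, x_dim)

    def forward(self, x_seq: torch.Tensor) -> torch.Tensor:
        # x_seq: (T, B, x_dim)
        h = torch.zeros(x_seq.size(1), self.decoder_cell.hidden_size)
        recon = []
        for x_t in x_seq:
            # feedback recurrence: the encoder input is x_t plus the state,
            # with no direct access to past inputs x_{1:t-1}
            z_t = self.encoder(torch.cat([x_t, h], dim=-1))
            h = self.decoder_cell(z_t, h)          # state now summarizes z_{1:t}
            recon.append(self.decoder_out(h))      # x̂_t is a function of z_{1:t}
        return torch.stack(recon)
```

Note that the decoder state is updated only from the latents, so the encoder's feedback input carries exactly the summary of $z_{1:t-1}$ that the analysis above calls for.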

It is worth noting that in [11, 10] the authors introduce a neural network architecture for progressive coding of images in which an encoder recurrent connection is added on top of the feedback recurrent connection from the decoder. Based on the analysis in this section, since the optimal $z_t$ does not depend on $x_{1:t-1}$, we do not include encoder recurrence in our network design.

Appendix 0.F Graphical modeling considerations

In this section we give more details on the reasoning behind the graphical models presented in Fig. 3.

Wu et al. [43]
Generative model. A temporally independent prior is used, hence there are no edges between the latent variables $z_t$. Consecutive reconstructions depend on the previous reconstructions, hence the edges $\hat{x}_{t-1} \to \hat{x}_t$.
Inference model. During inference, at timestep $t$, the current timestep's latent $z_t$ is inferred from the previous reconstruction $\hat{x}_{t-1}$ and the current original frame $x_t$. In turn, the previous reconstruction is determined by the information from the previous timestep's latent $z_{t-1}$ and the earlier reconstruction $\hat{x}_{t-2}$. Since the procedure of determining $\hat{x}_{t-1}$ from $z_{t-1}$ and $\hat{x}_{t-2}$ is deterministic, the inference model has an edge $z_{t-1} \to z_t$. Extending this point, since the reconstruction is updated at each timestep by a deterministic procedure from its previous value and the current latent, $z_t$ effectively depends on all the past latent codes $z_{1:t-1}$. Hence there are edges from all the past latents to the present latent $z_t$.
The edge $x_t \to z_t$ arises from the direct dependence of $z_t$ on $x_t$ through the encoder.

Liu et al. [26]
Generative model. Due to the introduction of the time-autoregressive code prior, there are additional edges between consecutive latents compared to model (a); these are drawn dashed.
Inference model. The inference model remains unchanged.

Rippel et al. [32] and ours
Generative model. Model (b) differs from model (a) in the introduction of the recurrent connection and hence the hidden state $h_t$, so the full graphical model could be drawn as in Fig. 14. However, $h_t$ is not a stochastic random variable but the result of a deterministic function of $h_{t-1}$ and $z_t$. For that reason we can drop the explicit $h_t$ nodes and fold the deterministic relationship they imply into edges between the latent variables $z_t$. This is how we arrive from the models in Fig. 14 at the models in Fig. 3(b).
Inference model. Analogous reasoning applies to the inference model.

Figure 14: Left: generative model of our method with the intermediate variable $h_t$. Right: inference model of our method with the intermediate variable $h_t$.

Han et al. [13]
The graphical models are derived from the equations of Han et al. [13]: Eq. (4) and Eq. (6) for the generative model, and Eq. (5) for the inference model.

Habibian et al. [12]
Generative model. Due to the use of a prior that is autoregressive in the temporal dimension (across the blocks of 8 frames that Habibian et al. use), we draw edges between consecutive latents. The latent codes are generated by a deterministic composition of 3D convolutional layers applied to the original frames. The temporal receptive field is larger than the original input sequence, which means that every latent variable in a block can be influenced by each of the original input frames, hence the fully connected clique between the $x_t$ and the $z_t$.
Inference model. The same reasoning applies to the inference model.

0.F.1 Full flexibility of the marginal

The graphical model corresponding to the approach of Liu et al. [26] is presented in Fig. 3(a), including the dashed lines. In this case, the marginal $p(\hat{x}_{1:T})$ is fully flexible in the sense that it makes no conditional independence assumptions among $\hat{x}_1, \ldots, \hat{x}_T$.

To see this, consider what happens when we marginalize out $z_{1:T}$ from the joint $p(\hat{x}_{1:T}, z_{1:T})$ using the variable elimination algorithm [23], with an elimination order given by some permutation of $\{1, \ldots, T\}$. Think of the process of variable elimination in terms of the induced graph. Eliminating the first latent variable induces a connection between its observed variable and each of the remaining latent variables. Eliminating consecutive latent variables induces connections between the corresponding observed variables and every remaining latent. In this way, the final latent variable to be eliminated is connected in the induced graph to all the observed variables $\hat{x}_{1:T}$, so eliminating it produces a factor over all of $\hat{x}_1, \ldots, \hat{x}_T$ (a clique over all the $\hat{x}$ nodes in the induced graph). The marginal distribution we are looking for, $p(\hat{x}_{1:T})$, therefore contains a factor over all the reconstructions and makes no conditional independence assumptions.
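The variable elimination argument can be checked mechanically on a small instance. The snippet below is our illustration, using a simplified edge set for the moralized graph of the autoregressive-prior model at T = 4; it eliminates the latents in temporal order and records the scope of each factor produced, confirming that the last factor touches every observed frame.

```python
from itertools import combinations

def eliminate(edges, order):
    """Run variable elimination on an undirected graph; return, for each
    eliminated node, the scope of the factor its elimination produces."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    scopes = {}
    for node in order:
        nbrs = adj.pop(node)
        scopes[node] = set(nbrs)
        for u, v in combinations(nbrs, 2):  # fill-in edges of the induced graph
            adj[u].add(v)
            adj[v].add(u)
        for u in nbrs:
            adj[u].discard(node)
    return scopes

T = 4
z = [f"z{t}" for t in range(1, T + 1)]
x = [f"x{t}" for t in range(1, T + 1)]  # stands for the reconstructions x̂_t
edges = (
    [(z[t - 1], z[t]) for t in range(1, T)]    # autoregressive prior z_{t-1} -> z_t
    + [(z[t], x[t]) for t in range(T)]         # z_t -> x̂_t
    + [(x[t - 1], x[t]) for t in range(1, T)]  # x̂_{t-1} -> x̂_t
    + [(x[t - 1], z[t]) for t in range(1, T)]  # moralization: parents of x̂_t married
)
scopes = eliminate(edges, z)  # eliminate all latents, in temporal order
# the scope of the last factor covers every observed frame
```
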

The graphical model corresponding to our approach and that of Rippel et al. is presented in Fig. 3(b). In this case, showing that the marginal makes no conditional independence assumptions among $\hat{x}_{1:T}$ is simpler: no matter which variable elimination order we choose, since the last latent variable is connected to all the other nodes, eliminating it will always result in a factor over all of $\hat{x}_{1:T}$.

Appendix 0.G MS-SSIM color issue and corner artifact

0.G.1 MS-SSIM color shift artifact

MS-SSIM, introduced in [40], is known to be somewhat invariant to color shift [46]. Inspired by the study in Fig. 4 of [39], we designed a small experiment to probe the extent of this issue. We first disturb an image $x$ slightly with additive Gaussian noise to obtain an initialization $y$ that lies on a certain equal-MS-SSIM hypersphere (we choose 0.966, which corresponds to the average quality of our second-lowest-rate model). We then optimize $y$ for lower PSNR, under the constraint of equal MS-SSIM, using the following objective:

$$\mathcal{L}(y) = -\,\mathrm{MSE}(x, y) + \lambda_i \left|\,\mathrm{MS\text{-}SSIM}(x, y) - 0.966\,\right|$$
We minimize this objective with the Adam optimizer (default parameters apart from the learning rate), using a schedule for $\lambda_i$, with $i$ representing the iteration index.

In Fig. 15, (a) corresponds to the original frame $x$, (b) to the initial perturbation $y$, and (c) and (d) to $y$ at two later stages of the optimization. Frames (b), (c), and (d) all lie on the equal-MS-SSIM hypersphere of 0.966 with respect to the original frame $x$. We observe that it is possible to obtain an image with very low PSNR, mainly due to drastic uniform color shifts, while remaining at an equal distance from the original image in terms of MS-SSIM. This helps explain some of the color artifacts we have seen in the output of our models.

Figure 15: An illustration of the MS-SSIM color issue. Frames (b), (c), and (d) all have an MS-SSIM of 0.966. (a): original frame; (b): image perturbed with slight additive Gaussian noise; (c) and (d): images obtained by optimizing (b) for lower PSNR under an equal-MS-SSIM constraint.
Crop of frame 229 of Tango video from Netflix Tango in Netflix El Fuente; see Appendix 0.B for license information.

0.G.2 MS-SSIM corner artifact

The standard implementation of MS-SSIM, when used as a training loss, causes corner artifacts, as shown in Fig. 16. Although the artifact is subtle and barely noticeable when looking at full-size video frames, it is worth investigating. We found the root cause in the convolution-based implementation of the local mean calculation in the SSIM function, where a Gaussian kernel (normally of size 11) is convolved with the input image. The convolution is done without padding; as a result, the contribution of the corner pixels to MS-SSIM is minimal, as they only ever fall at the tails of the Gaussian kernel. This is not a problem when MS-SSIM is used for evaluation, but it can negatively affect training. Specifically, in our case, where MS-SSIM was used in conjunction with rate to train the network, the network took advantage of this deficiency and assigned fewer bits to the corner pixels, since they matter less to the distortion term. As a result, the corner pixels appeared blank in the decoded frames. We fixed this issue by padding the image (e.g., with replicate padding) before convolving it with the Gaussian kernel; as can be seen from Fig. 16, this eliminates the artifact.
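The corner deficiency is easy to quantify: summing, for each pixel, the Gaussian weights it receives across all unpadded windows shows interior pixels accumulating a total weight of 1 while corner pixels accumulate almost nothing. Below is a NumPy illustration using the common 11x11, sigma = 1.5 SSIM kernel; the helper names are ours.

```python
import numpy as np

def gaussian_kernel(size: int = 11, sigma: float = 1.5) -> np.ndarray:
    """Normalized 2-D Gaussian window, as used for SSIM local means."""
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    k = np.outer(g, g)
    return k / k.sum()

def total_contribution(image_size: int = 32, ksize: int = 11,
                       sigma: float = 1.5) -> np.ndarray:
    """For each pixel, sum the Gaussian weights it receives across all
    'valid' (unpadded) windows that contain it."""
    k = gaussian_kernel(ksize, sigma)
    contrib = np.zeros((image_size, image_size))
    n = image_size - ksize + 1  # number of valid window positions per axis
    for i in range(n):
        for j in range(n):
            contrib[i:i + ksize, j:j + ksize] += k
    return contrib

c = total_contribution()
# an interior pixel is covered by full windows, so its weights sum to 1;
# a corner pixel appears in a single window, at the kernel's extreme tail
corner, center = c[0, 0], c[16, 16]
```

The ratio between the center and corner contributions is several orders of magnitude, which is consistent with the network learning to neglect the corners.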

It is worth mentioning that, for consistency with other works, we used the standard MS-SSIM implementation in all the trainings and evaluations reported in this paper.

Figure 16: An illustration of the MS-SSIM corner artifact, best viewed on screen. Left: uncompressed frame; middle: corners when trained with the default MS-SSIM implementation; right: corners when trained with the replicate-padded MS-SSIM implementation.
Frame 10 of Tango video from Netflix Tango in Netflix El Fuente; see Appendix 0.B for license information.