1 Introduction
With over 60% of internet traffic consisting of video [1], lossy video compression is a critically important problem, promising reductions in bandwidth and storage, and generally increasing the scalability of the internet. Although the relation between probabilistic modelling and compression has been known since Shannon, video codecs in use today are only to a very small extent based on learning and are not end-to-end optimized for rate-distortion performance on a large and representative video dataset.
The last few years have seen a surge of interest in novel codec designs based on deep learning [27, 43, 32, 12, 26, 13], which are trained end-to-end to optimize rate-distortion performance on a large video dataset, and a large number of network designs with various novel, often domain-specific components have been proposed. In this paper we show that a relatively simple design based on standard, well-understood, and highly optimized components such as residual blocks [14], convolutional recurrent networks [34], optical flow warping, and a PixelCNN [38] prior yields state-of-the-art rate-distortion performance.
We focus on the online compression setting, where video frames can be compressed and transmitted as soon as they are recorded (in contrast to approaches which require buffering several frames before encoding), which is necessary for applications such as video conferencing and cloud gaming. Additionally, in both applications, the ability to fine-tune the neural codec to its specific content holds promise to further significantly reduce the required bandwidth [12].
There are two key components to our approach beyond the residual-block-based encoder and decoder architecture. Firstly, to exploit long-range temporal correlation, we follow the principle proposed in the Feedback Recurrent AutoEncoder (FRAE) [45], which was shown to be effective for speech compression, by adding a convolutional GRU module in the decoder and feeding back the recurrent state to the encoder. Secondly, we apply a motion estimation network at the encoder side and enforce optical flow learning by using an explicit loss term during the initial stage of training, which leads to better optical flow output at the decoder side and consequently much better rate-distortion performance. The proposed network architecture is compared with existing learned approaches through the lens of their underlying probabilistic models in Section 3.
We compare our method with state-of-the-art traditional codecs and learned approaches on the high resolution UVG [37] video dataset by plotting rate-distortion curves, with MS-SSIM [40] as the distortion metric. We show that we outperform all methods in the low to high rate regime, and more specifically in the 0.09–0.13 bits per pixel (bpp) region, which is of practical interest for video streaming [17].
To summarize, our main contributions are as follows:

We develop a simple feedback recurrent video compression architecture based on widely used building blocks (Section 2).

We study the differences and connections of existing learned video compression methods by detailing the underlying sequential latent variable models (Section 3).

Our solution achieves state-of-the-art rate-distortion performance when compared with other learned video compression approaches, and performs competitively with or outperforms the AVC (H.264) and HEVC (H.265) traditional codecs under equivalent settings (Section 4).
2 Methodology
2.1 Problem setup
Let us denote the image frames of a video as $x_{1:T} = (x_1, \dots, x_T)$. Compression of the video is done by an autoencoder that maps $x_{1:T}$, through an encoder, into compact discrete latent codes $z_{1:T}$. The codes are then used by a decoder to form the reconstructions $\hat{x}_{1:T}$.
We assume the use of an entropy coder together with a probabilistic model on $z_{1:T}$, denoted as $p(z_{1:T})$, to losslessly compress the discrete latents. The ideal codeword length can then be characterized as $R = -\log p(z_{1:T})$, which we refer to as the rate term. (For a practical entropy coder there is a constant overhead per block/stream, which is negligible with a large number of bits per stream and thus can be ignored; for example, adaptive arithmetic coding (AAC) incurs up to 2-bit inefficiency per stream [30].)
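As a toy illustration of the rate term, the ideal code length under a discrete prior can be computed directly; the symbol probabilities below are illustrative, not from the paper:

```python
import math

def rate_bits(latents, prior):
    """Ideal total code length, -sum(log2 p(z)), in bits."""
    return -sum(math.log2(prior[z]) for z in latents)

# A dyadic toy prior: a symbol with probability 2^-k ideally costs k bits.
prior = {0: 0.5, 1: 0.25, 2: 0.125, 3: 0.125}
total = rate_bits([0, 0, 1, 2, 3], prior)  # 1 + 1 + 2 + 3 + 3 = 10 bits
```

A real entropy coder (e.g. arithmetic coding) approaches this bound up to the small per-stream overhead mentioned above.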
Given a distortion metric $D$, the lossy compression problem can be formulated as the optimization of the following Lagrangian functional

(1)  $\mathcal{L}_{\mathrm{RD}} = D + \beta R$,

where $\beta$ is the Lagrange multiplier that controls the balance of rate and distortion. It is known that this objective function is equivalent to the evidence lower bound in $\beta$-VAE [15] when the encoder distribution is deterministic or has a fixed entropy. Hence $p(z_{1:T})$ is often called the prior distribution. We refer the reader to [12, 13, 36] for more detailed discussion. Throughout this work we use MS-SSIM [40] measured in RGB space as our distortion metric, for both training and evaluation.
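The Lagrangian objective can be sketched numerically; here `distortion` stands in for the MS-SSIM-based distortion and `rate` for the codeword length, with illustrative values:

```python
def rd_loss(distortion, rate, beta):
    # Eq. (1): distortion plus beta times rate; sweeping beta traces out the
    # rate-distortion curve (one trained model per beta value)
    return distortion + beta * rate

# Larger beta penalizes rate more, favoring lower-bitrate operating points.
curve = [rd_loss(0.02, 0.10, beta) for beta in (0.5, 1.0, 2.0)]
```

Each point on the reported rate-distortion curves corresponds to a model trained with one such trade-off weight.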
2.2 Overview of the proposed method
In our work, we focus on the problem of online compression of video using a causal autoencoder, i.e. one that outputs codes and a reconstruction for each frame on the fly, without the need to access future context. In classic video codec terms, we are interested in the low delay P (LDP) setting where only I-frames (intra-coded; independent of other frames) and P-frames (predictive inter-coded; using previously reconstructed past but not future frames) are used. We do not make use of B-frames (bidirectionally interpolated frames). (For a good overview of frame structures in classic codecs, we refer the reader to [7], [35] and Section 2 of [43].)
The full video sequence is broken down into groups of pictures (GoPs) $(x_1, \dots, x_N)$, each starting with an I-frame followed by $N-1$ P-frames. We use a separate encoder, decoder and prior for the I-frames and P-frames. The rate term is then decomposed as

(2)  $R = -\log p^{I}(z^{I}_1) - \sum_{t=2}^{N} \log p^{P}(z^{P}_t)$,

where the superscript is used to indicate the frame type.
2.3 Network architecture
I-frame compression is equivalent to image compression, and many schemes are already available [4, 5, 28, 31, 36]. Here we apply the encoder and decoder design of [28] for I-frame compression. In the remainder, we focus on the design of the P-frame compression network and discuss how previously transmitted information can best be utilized to reduce the amount of information needed to reconstruct subsequent frames.
2.3.1 Baseline architecture
One straightforward way to utilize history information is for the autoencoder to transmit only a motion vector (e.g. block-based or a dense optical flow field) and a residual, while using the previously decoded frame(s) as a reference. At the decoder side, the decoded motion vector (in our case optical flow) is used to warp previously decoded frame(s) using bilinear interpolation [20] (referred to later as the warp function), and the result is then refined by the decoded residual. This framework serves as the basic building block for many conventional codecs such as AVC and HEVC. One instantiation of such a framework built with autoencoders is illustrated in Fig. 1. Here an autoencoder based image compression network, termed the I-frame network, is used to compress and reconstruct the I-frame independently of any other frames. Subsequent P-frames $x_t$ are processed with a separate autoencoder, termed the P-frame network, that takes both the current input frame $x_t$ and the previous reconstructed frame $\hat{x}_{t-1}$ as encoder inputs and produces the optical flow tensor $\hat{f}_t$ and the residual $\hat{r}_t$ as decoder outputs. The architecture of our P-frame network's encoder and decoder is the same as that of the I-frame network. Two separate prior models are used for the entropy coding of the discrete latents in the I-frame and P-frame networks. The frame is eventually reconstructed as $\hat{x}_t = \mathrm{warp}(\hat{x}_{t-1}, \hat{f}_t) + \hat{r}_t$.

For the P-frame network, the latent at each time step contains information about $(\hat{f}_t, \hat{r}_t)$. One source of inefficiency, then, is that the current time step's $(\hat{f}_t, \hat{r}_t)$ may still exhibit temporal correlation with the past values $(\hat{f}_{<t}, \hat{r}_{<t})$, which is not exploited. This issue remains even if we consider multiple frames as references.
To explicitly equip the network with the capability to utilize this redundancy, we would need to expose $(\hat{f}_{t-1}, \hat{r}_{t-1})$ as another input, besides $\hat{x}_{t-1}$, to both the encoder and the decoder for the operation at time step $t$. In this case the latents would only need to carry information regarding the incremental difference between the flow field and the residual of two consecutive steps, i.e. $\hat{f}_t - \hat{f}_{t-1}$ and $\hat{r}_t - \hat{r}_{t-1}$, which would lead to a higher degree of compression. We could follow the same principle to utilize even higher order redundancies, but this inevitably leads to a more complicated architecture design.
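The warp operation used above can be sketched with plain bilinear interpolation. Below is a minimal NumPy version, assuming a single-channel frame and a per-pixel displacement field with edge clamping (a simplification of the spatial-transformer warping of [20]):

```python
import numpy as np

def warp(frame, flow):
    """frame: (H, W); flow: (2, H, W) giving per-pixel (dx, dy) displacement."""
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    x = np.clip(xs + flow[0], 0, w - 1)      # sample positions, clamped to the frame
    y = np.clip(ys + flow[1], 0, h - 1)
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = np.minimum(x0 + 1, w - 1), np.minimum(y0 + 1, h - 1)
    wx, wy = x - x0, y - y0
    top = (1 - wx) * frame[y0, x0] + wx * frame[y0, x1]
    bottom = (1 - wx) * frame[y1, x0] + wx * frame[y1, x1]
    return (1 - wy) * top + wy * bottom      # bilinear blend of 4 neighbors

prev = np.arange(16.0).reshape(4, 4)
identity = warp(prev, np.zeros((2, 4, 4)))   # zero flow reproduces the frame
```

In the codec, the decoded flow plays the role of `flow`, the previously reconstructed frame the role of `frame`, and the decoded residual is added on top.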
2.3.2 Feedback recurrent module
As we are trying to utilize higher order redundancy, we need to provide both the encoder and the decoder with a more suitable decoder history context. This observation motivates the use of a recurrent neural network that accumulates and summarizes relevant information received previously by the decoder, and a decoder-to-encoder feedback connection that makes the recurrent state available at the encoder [45] (see Fig. 2(a)). We refer to the added component as the feedback recurrent module.
This module can be viewed as a nonlinear predictive coding scheme, where the decoder predicts the next frame based on the current latent as well as a summary of past latents. Because the recurrent state is available to both encoder and decoder, the encoder is aware of the state of knowledge of the decoder, enabling it to send only complementary information.
The concept of a feedback recurrent connection is present in several existing works. In [45], the authors refer to an autoencoder with the feedback recurrent connection as FRAE and demonstrate its effectiveness in speech compression in comparison to different recurrent architectures. In [11, 10], the focus is on progressive coding of images, and the decoder-to-encoder feedback is adopted for iterative refinement of the image canvas. In the domain of video compression, [32] proposes the concept of state propagation, where the network maintains a state tensor that is available at both the encoder and the decoder and updated by summing it with the decoder output in each time step (Fig. 5 in [32]). In our network architecture, we propose the use of generic convolutional recurrent modules such as ConvGRU [34] for modeling the history information that is relevant for the prediction of the next frame. In Appendix 0.E, we show the necessity of such a feedback connection from an information-theoretic point of view.
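Schematically, the feedback recurrent module turns the codec loop into the following pattern. This is a minimal sketch with scalar stand-ins for the networks; `encode`, `quantize`, and `decode` are illustrative placeholders, not the paper's architecture:

```python
def compress_sequence(frames, encode, quantize, decode, state):
    """The decoder's recurrent state is fed back to the encoder each step."""
    recon = []
    for x in frames:
        z = quantize(encode(x, state))   # encoder sees the decoder's state
        x_hat, state = decode(z, state)  # decoder updates its recurrent state
        recon.append(x_hat)
    return recon

# Toy instantiation: the "encoder" sends only the innovation w.r.t. the state,
# so temporally redundant content costs nothing extra to transmit.
encode = lambda x, s: x - s
quantize = round
decode = lambda z, s: (s + z, s + z)     # reconstruction doubles as the new state
recon = compress_sequence([1.2, 1.9, 3.1], encode, quantize, decode, 0.0)
```

Because both sides track the same state, the encoder always knows exactly what the decoder already knows and can restrict itself to complementary information.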
2.3.3 Latent quantization and prior model
Given that the encoder needs to output discrete values, the quantization method applied needs to allow gradient based optimization. Two popular approaches are: (1) define a learnable codebook, and use nearest neighbor quantization in the forward pass and a differentiable softmax in the backward pass [28, 12]; or (2) add uniform noise to the continuous valued encoder output during training and use hard quantization at evaluation [4, 5, 13]. For the second approach, the prior distribution is often characterized by a learnable monotonic CDF function $c(\cdot)$, where the probability mass of a single latent value $z$ is evaluated as $c(z + \tfrac{1}{2}) - c(z - \tfrac{1}{2})$. In this work we adopt the first approach.

As illustrated in Fig. 2(a), a time-independent prior model is used. In other words, there is no conditioning of the prior model on the latents from previous time steps, and the prior in Eq. (2) is factorized across time as $\prod_t p(z_t)$. In Section 3 we show that such a factorization does not limit the capability of our model to capture any empirical data distribution.
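The forward pass of approach (1) can be sketched as nearest-neighbor assignment against a small codebook; the differentiable softmax used for the backward pass is omitted here, and the codebook values are illustrative:

```python
import numpy as np

def quantize(latents, codebook):
    """Map each continuous latent to its nearest codebook entry."""
    idx = np.abs(latents[..., None] - codebook).argmin(axis=-1)
    return codebook[idx], idx            # quantized values + indices to entropy-code

codebook = np.array([-1.0, 0.0, 1.0, 2.0])
values, idx = quantize(np.array([0.1, 1.4, -0.7]), codebook)
```

The integer indices `idx` are what the entropy coder compresses under the prior, while the quantized `values` are fed to the decoder.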
2.3.4 Explicit optical flow estimation module
In the architecture shown in Fig. 2(a), the optical flow estimate tensor $\hat{f}_t$ is produced explicitly at the end of the decoder. When trained with the loss function in Eq. (1), the learning of optical flow estimation is only incentivized implicitly, via how much that mechanism helps with the rate-effective reconstruction of the original frames. In this setting, we empirically found that the decoder relied almost solely on the residuals to form the frame reconstructions. This observation is consistent with other works [27, 32]. The problem is usually addressed by using pretraining or more explicit supervision for the optical flow estimation task. We encourage reliance on both optical flow and residuals, and facilitate optical flow estimation, by (i) equipping the encoder with a U-Net [33] subnetwork called the Motion Estimation Network (MENet), and (ii) introducing additional loss terms to explicitly encourage optical flow estimation.

MENet estimates the optical flow $f_t$ between the current input frame $x_t$ and the previously reconstructed frame $\hat{x}_{t-1}$. Without MENet, the encoder is provided with $x_t$ and $\hat{x}_{t-1}$, and is supposed to estimate the optical flow and the residuals and encode them, all in a single network. When attached to the encoder, however, MENet provides the encoder directly with the estimated flow $f_t$ and the previous reconstruction warped by the estimated flow, $\mathrm{warp}(\hat{x}_{t-1}, f_t)$. In this scenario, all the encoder capacity is dedicated to the encoding task. A schematic view of this explicit architecture is shown in Fig. 2(b), and the implementation details are available in Appendix 0.A.
When MENet is integrated with the architecture in Fig. 2(a), the optical flow is first estimated by MENet at the encoder, denoted $f_t$, and later reconstructed in the decoder, denoted $\hat{f}_t$. In order to alleviate the problem with optical flow learning using the rate-distortion loss only, we incentivize the learning of $f_t$ and $\hat{f}_t$ via two additional dedicated loss terms, $\mathcal{L}_{fe}$ and $\mathcal{L}_{fd}$. Hence the total loss we start the training with is $\mathcal{L}_{\mathrm{RD}} + \mathcal{L}_{fe} + \mathcal{L}_{fd}$, where $\mathcal{L}_{\mathrm{RD}}$ is the loss function as per Eq. (1). We found it sufficient to apply the losses $\mathcal{L}_{fe}$ and $\mathcal{L}_{fd}$ only at the beginning of training, for the first few thousand iterations, after which we revert to using just $\mathcal{L}_{\mathrm{RD}}$ and let the estimated flow adapt to the main task; this has been shown to improve results across a variety of tasks utilizing optical flow estimation [44].

The loss terms $\mathcal{L}_{fe}$ and $\mathcal{L}_{fd}$ are defined as the distortion between $x_t$ and $\hat{x}_{t-1}$ warped by $f_t$ and $\hat{f}_t$, respectively. However, early in the training, $\hat{x}_{t-1}$ is inaccurate, and as a result such a distortion is not a good learning signal for MENet or the autoencoder. To alleviate this problem, early in the training we use the ground-truth previous frame $x_{t-1}$ instead of $\hat{x}_{t-1}$ in both loss terms. This transition is depicted in Fig. 2(b) with a switch. It is worth mentioning that the tensor fed into the encoder is always a warped previous reconstruction, $\mathrm{warp}(\hat{x}_{t-1}, f_t)$, never a warped previous ground-truth frame.
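The two-phase schedule described above can be summarized in a small sketch; the iteration thresholds are those reported in the training setup (Section 4.1), and the argument names are illustrative stand-ins for the loss terms:

```python
def total_loss(rd, flow_enc_loss, flow_dec_loss, step, flow_until=20_000):
    """Auxiliary flow losses are only active early in training."""
    if step < flow_until:
        return rd + flow_enc_loss + flow_dec_loss
    return rd                                # later: pure rate-distortion training

def warp_target(prev_frame, prev_recon, step, switch_at=15_000):
    """Early on, warp the (reliable) ground-truth frame; later, the reconstruction."""
    return prev_frame if step < switch_at else prev_recon
```

This keeps flow supervision strong while the reconstructions are still poor, and then lets the flow specialize for compressibility once training has stabilized.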
3 Graphical Model Analysis
The recent success of autoencoder based learned image compression [4, 5, 28, 31, 36] (see [18] for an overview) has demonstrated neural networks' capability in modeling spatial correlations from data and the effectiveness of end-to-end joint rate-distortion training. It has motivated the use of autoencoder based solutions to further capture the temporal correlation in the application of video compression, and there have since been many designs for learned video compression algorithms [45, 32, 27, 26, 12, 13]. These designs differ in many detailed aspects: the technique used for latent quantization; the specific instantiation of the encoder and decoder architecture; the use of atypical, often domain-specific operations beyond convolution, such as Generalized Divisive Normalization (GDN) [3, 4] or the Non-Local Attention Module (NLAM) [25]. Here we leave out the comparison of those aspects and focus on one key design element: the graphical model structure of the underlying sequential latent variable model [23].
As we have briefly discussed in Section 2.1, the rate-distortion training objective has strong connections to amortized variational inference. In the special case of $\beta = 1$, the optimization problem is equivalent to the minimization of $D_{\mathrm{KL}}\big(q(x_{1:T}, z_{1:T}) \,\|\, p(x_{1:T}, z_{1:T})\big)$ [2], where $p(x_{1:T}, z_{1:T}) = p(z_{1:T})\,p(x_{1:T} \mid z_{1:T})$ describes a sequential generative process and $q(x_{1:T}, z_{1:T}) = p_{\mathrm{data}}(x_{1:T})\,q(z_{1:T} \mid x_{1:T})$ can be viewed as a sequential inference process. Here $p(x_{1:T} \mid z_{1:T})$ denotes the decoder distribution induced by our distortion metric (see Section 4.2 of [12]), $p_{\mathrm{data}}$ denotes the empirical distribution of the training data, and $q(z_{1:T} \mid x_{1:T})$ denotes the encoder distribution induced on the latents, in our case a Dirac distribution because of the deterministic mapping from the encoder inputs to the value of the latents.
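A schematic expansion of this KL divergence, under the factorizations just described (notation simplified, omitting time indices), shows where the rate and distortion terms come from:

```latex
% With q(x,z) = p_data(x) q(z|x) and p(x,z) = p(z) p(x|z):
D_{\mathrm{KL}}\big(q(x,z)\,\|\,p(x,z)\big)
  = \underbrace{\mathbb{E}_q\!\left[-\log p(z)\right]}_{\text{rate } R}
  + \underbrace{\mathbb{E}_q\!\left[-\log p(x\mid z)\right]}_{\text{distortion } D}
  + \underbrace{\mathbb{E}_q\!\left[\log q(z\mid x)\right]}_{=\,0 \text{ for deterministic } q}
  - \underbrace{\mathbb{E}_{p_{\text{data}}}\!\left[-\log p_{\text{data}}(x)\right]}_{\text{data entropy (constant)}}
```

Since the last term does not depend on the model, minimizing the KL over the encoder, decoder, and prior is equivalent to minimizing $D + R$, i.e. Eq. (1) with $\beta = 1$.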
To favor the minimization of the KL divergence, we want to design $p(z_{1:T})$, $p(x_{1:T} \mid z_{1:T})$, and $q(z_{1:T} \mid x_{1:T})$ to be flexible enough so that (i) the marginalized data distribution $p(x_{1:T})$ is able to capture complex temporal dependencies in the data, and (ii) the inference process encodes all the dependencies embedded in the generative model [41]. In Fig. 3 we highlight four combinations of sequential latent variable models and inference models proposed in the literature.
Fig. 3(a) without the dashed line describes the scheme proposed in Lu et al. [27] (and the low latency mode of HEVC/AVC), where the prior model is fully factorized over time and each decoded frame is a function of the previous decoded frame and the current latent code. There are two major limitations of this approach: firstly, the marginalized sequential distribution $p(x_{1:T})$ is confined to satisfy the Markov property $p(x_t \mid x_{<t}) = p(x_t \mid x_{t-1})$; secondly, it assumes that the differences between subsequent frames (e.g. optical flow and residual) are independent across time steps. To overcome these two limitations, Liu et al. [26] propose an autoregressive prior model through the use of a ConvLSTM over time, which corresponds to Fig. 3(a) including the dashed lines. In this case, the marginal $p(x_{1:T})$ is fully flexible in the sense that it does not make any conditional independence assumptions between the frames; see Appendix 0.F.1 for details.
Fig. 3(b) describes another way to construct a fully flexible marginalized sequential data distribution: by having the decoded frame depend on all previously transmitted latent codes. Both Rippel et al. [32] and our approach fall under this category by introducing a recurrent state in the decoder which summarizes information from the previously transmitted latent codes and processed features. Rippel et al. [32] use a multi-scale state with a complex state propagation mechanism, whereas we use a convolutional recurrent module (ConvGRU) [34]. In this case, $p(x_{1:T})$ is also fully flexible; details in Appendix 0.F.1.
The latent variable models in Fig. 3(c) and (d) break the causality constraint, i.e. the encoding of a GoP can only start when all of its frames are available. In (c), a global latent is used to capture time-invariant features, as adopted in Han et al. [13]. In (d), the graphical model is fully connected between each frame and latent across different time steps, which describes the scheme applied in Habibian et al. [12].
One advantage of our approach (as well as Rippel et al. [32]), which corresponds to Fig. 3(b), is that the prior model can be fully factorized across time, $p(z_{1:T}) = \prod_t p(z_t)$, without compromising the flexibility of the marginal distribution $p(x_{1:T})$. A factorized prior allows parallel entropy decoding of the latent codes across time steps (but potentially still sequential across the dimensions of $z_t$ within each time step). On the inference side, in Appendix 0.E we show that any additional dependence of the current latent on past latents, which could be modeled by adding a recurrent component on the encoder side, is not necessary.
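The practical consequence, per-step code lengths that can be computed and decoded independently across time, can be illustrated with a toy time-factorized prior (probabilities are illustrative):

```python
import math
from concurrent.futures import ThreadPoolExecutor

def step_bits(p_zt):
    """Code length of one time step under a time-factorized prior."""
    return -math.log2(p_zt)

probs = [0.5, 0.25, 0.125]                    # p(z_t) for t = 1..3
with ThreadPoolExecutor() as pool:            # steps are independent, so they
    bits = list(pool.map(step_bits, probs))   # can be processed in parallel
total_bits = sum(bits)                        # rate of the whole GoP
```

With an autoregressive prior over time, by contrast, step t could not be decoded before step t-1 had been.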
4 Experiments
4.1 Training setup
Datasets
Our training dataset is based on Kinetics-400 [21]. We selected a subset of the high-quality videos in Kinetics (width and height over 720 pixels), took the first 16 frames of each video, and downscaled them (to remove compression artifacts) such that the smaller of the two dimensions is 256 pixels. At training time, random spatial crops were used. For validation, we used videos from HDgreetings, NTIA/ITS and SVT.
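The resizing and cropping logic described above can be sketched as pure size arithmetic; the crop size of 128 below is a hypothetical placeholder, since the training crop size is not stated here:

```python
import random

def downscale_size(w, h, short_side=256):
    """Target (width, height) so the shorter dimension becomes `short_side`."""
    scale = short_side / min(w, h)
    return round(w * scale), round(h * scale)

def random_crop_box(w, h, size, rng):
    """A random (left, top, right, bottom) crop box inside a w-by-h frame."""
    left = rng.randrange(w - size + 1)
    top = rng.randrange(h - size + 1)
    return left, top, left + size, top + size

new_w, new_h = downscale_size(1280, 720)      # 720p input -> shorter side 256
box = random_crop_box(new_w, new_h, 128, random.Random(0))
```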
Training
All implementations were done in the PyTorch framework [29]. The distortion MS-SSIM [40] was normalized per frame and the rate term was normalized per pixel (i.e., bits per pixel) before being combined as in Eq. (1). We trained our network with five values of $\beta$ to obtain a rate-distortion curve. The I-frame network and the P-frame network were trained jointly from scratch, without pretraining for any part of the network. A GoP size of 8 was used during training, i.e. the recurrent P-frame network was unrolled 7 times. We used a batch size of 16, for a total of 250k gradient updates (iterations), corresponding to 20 epochs. When training with the flow enhancement network MENet, we applied the dedicated flow losses of Section 2.3.4 until iteration 20k, and the transition of their warping input from the ground-truth previous frame to the previous reconstruction was done at iteration 15k. All models were trained using the Adam optimizer [22]. The learning rate was annealed by a factor of 0.8 every 100k iterations. Performance on the validation set was evaluated every epoch; the results presented below are for the model checkpoints performing best on the validation set. See Appendix 0.A for details on the architecture and training.

4.2 Comparison with other methods
In this section, we compare the performance of our method with existing learned approaches as well as popular classic codecs.
Comparison with learning-based methods
In Fig. 4(a), we compare the performance of our solution with several learned approaches on the UVG 1080p dataset in terms of MS-SSIM versus bitrate, where MS-SSIM was first averaged per video and then averaged across the different videos. Our method outperforms all the compared approaches in MS-SSIM across the evaluated range of bitrates. Lu et al. [27] and Liu et al. [26] are both causal solutions and are evaluated with a GoP size of 12. Habibian et al. and Wu et al. are non-causal solutions, and their results are reported with GoP sizes of 8 and 12, respectively. Our results are evaluated at a GoP size of 8, the same as in training.
Comparison with traditional video codecs
We compared our method with the most popular standard codecs, H.265 [35] and H.264 [42], on the UVG 1080p dataset. The results were generated with three different sets of settings (more details are provided in Appendix 0.C):
The low latency mode was enforced for H.264 and H.265 to make the problem setting the same as that of our causal model. The GoP size of 12, on the other hand, although different from our GoP size of 8, is consistent with the settings reported in other papers, and gives H.264 and H.265 an advantage, as they perform better with larger GoP sizes.
The comparisons are shown in Fig. 4(b) in terms of MS-SSIM versus bitrate, where MS-SSIM was calculated in the RGB domain. As can be seen from this figure, our model outperforms the HM implementation of H.265 and the ffmpeg implementations of H.265 and H.264, in both low latency and default settings, at all but the lowest bitrates. For 1080p resolution this covers the range of bitrates of interest for practitioners, e.g. the bitrates Netflix uses for its 1080p video streaming [17].
Qualitative comparison
In Fig. 5 we compare the visual quality of our lowest rate model with the ffmpeg implementation of H.265 in low latency mode at a similar rate. We can see that our result is free from the blocking artifacts usually present in H.265 at low bitrates (see around the edges of the fingers) and preserves more detailed texture (see the structure of the hand veins and the stripes on the coat). In Appendix 0.D, we provide more examples with detailed error and bitrate maps.
As noted in Section 2.3.4, the dedicated optical flow enforcement loss terms were removed at a certain point and training continued using the rate-distortion loss only. As a result, our network learned a form of optical flow that contributes maximally to the rate-distortion objective and does not necessarily follow the ground-truth optical flow (if such were available). Fig. 6 compares an instance of the optical flow learned in our network with the corresponding optical flow generated by a pretrained FlowNet2 network [19]. The optical flow reconstructed by the decoder has a larger proportion of values set to zero than the flow estimated by FlowNet2; this is consistent with the observations of Lu et al. [27] in their Fig. 7, where they argue that it allows the flow to be more compressible.
4.3 Ablation study
To understand the effectiveness of the different components in our design, in Fig. 8 we compare the performance after removing (a) the decoder-to-encoder recurrent feedback, (b) the feedback recurrent module of Fig. 2(a) altogether, or (c) the explicit optical flow estimation module. We focus the comparison on the low rate regime, where the differences are more noticeable.
Empirically we find the explicit optical flow estimation component to be quite essential: without it, the optical flow output from the decoder has very small magnitude and is barely useful, resulting in a consistent loss of at least 0.002 in MS-SSIM score for rates below 0.16 bpp. In comparison, the loss in performance after removing either the decoder-to-encoder feedback or the recurrent connection altogether is minor, and only shows up in the very low rate region below 0.1 bpp.
Similar comparisons have been made in the existing literature. Rippel et al. [32] report a large drop in performance when removing the learned state in their model (Fig. 9 of [32]; about a 0.005 drop in MS-SSIM at 0.05 bpp), while Liu et al. [26], who utilize a recurrent prior to exploit longer-range temporal redundancies, report a relatively smaller degradation after removing their recurrent module (about a 0.002 drop in MS-SSIM, Fig. 9 of [26]).
We suspect that the lack of gain we observe from the feedback recurrent module is due to both insufficient recurrent module capacity and insufficient receptive field; we plan to investigate further in future work.
4.4 Empirical observations
4.4.1 Temporal consistency
We found, upon visual inspection, that for our low bitrate models the decoded video displays a minor yet noticeable flickering artifact at GoP boundaries, which we attribute to two factors.
Firstly, as shown in Fig. 7, there is a gradual degradation of P-frame quality as the frame moves further away from the I-frame. It is also apparent from Fig. 7 that the rate allocation tends to decline as the P-frame network unrolls, which compounds the degradation of P-frame quality. This shows that end-to-end training of the frame-averaged rate-distortion loss does not favor temporal consistency of frame quality, which makes it difficult to apply the model with larger GoP sizes and motivates the use of temporal-consistency related losses [9, 24].
The second part of the problem is due to the nature of the MS-SSIM metric and hence is not reflected in Fig. 7; it is described in the following section.
4.4.2 Color shift
Another empirical observation is that the MS-SSIM metric seems to be partially invariant to uniform color shifts in flat regions, and as a result the low bitrate models we have trained are prone to exhibit a slight color shift, similar to what has been reported in [46]. This color change is sometimes noticeable during the I-frame to P-frame transition and contributes to the flickering artifact. One item for future study is to find a suitable metric, or combination of metrics, that captures both texture similarity and color fidelity. See Appendix 0.G for details.
5 Conclusion
In this paper, we proposed a new autoencoder based learned lossy video compression solution, featuring a feedback recurrent module and an explicit optical flow estimation module. It achieves state-of-the-art performance among learned video compression methods and delivers comparable or better rate-distortion performance when compared with classic codecs in low latency mode. In future work, we plan to improve its temporal consistency, solve the color fidelity issue, and reduce computational complexity, paving the way to a practical real-time learned video codec.
References
 [1] 2019 global internet phenomena report. Note: https://www.ncta.com/whatsnew/reportwheredoesthemajorityofinternettrafficcomeAccessed: 20200228 Cited by: §1.
 [2] (2018) Fixing a Broken ELBO. In ICML, Cited by: §3.
 [3] (2016) Density Modeling of Images using a Generalized Normalization Transformation. In iclr, Cited by: §3.
 [4] (2017) EndtoEnd Optimized Image Compression. In iclr, Cited by: §2.3.3, §2.3, §3.

[5]
(2018)
Variational Image Compression with a Scale Hyperprior
. In iclr, Cited by: §2.3.3, §2.3, §3.  [6] (2000) The OpenCV Library. Dr. Dobb’s Journal of Software Tools. Cited by: §0.C.1, §4.1.
 [7] Digital video introduction. Note: https://github.com/leandromoreira/digital_video_introduction/blob/master/README.md#frametypesAccessed: 20200302 Cited by: footnote 2.
 [8] Ffmpeg. Note: http://ffmpeg.org/Accessed: 20200221 Cited by: 1st item, 2nd item.
 [9] (2019) ReCoNet: RealTime Coherent Video Style Transfer Network. In accv, Cited by: §4.4.1.
 [10] (2016) Towards Conceptual Compression. In nips, Cited by: Appendix 0.E, §2.3.2.

[11]
(2015)
DRAW: A Recurrent Neural Network For Image Generation
. In icml, Cited by: Appendix 0.E, §2.3.2.  [12] (2019) Video Compression With RateDistortion Autoencoders. In iccv, Cited by: Figure 9, Appendix 0.A, Appendix 0.F, §1, §1, §2.1, §2.3.3, §2.3.3, Figure 3, §3, §3, §3.
 [13] (2019) Deep Probabilistic Video Compression. In nips, Cited by: Appendix 0.F, §1, §2.1, §2.3.3, Figure 3, §3, §3.
 [14] (2016) Deep Residual Learning for Image Recognition. In cvpr, Cited by: §1.
 [15] (2017) betaVAE: Learning Basic Visual Concepts with a Constrained Variational Framework. In iclr, Cited by: §2.1.
 [16] High Efficiency Video Coding (HEVC). Note: https://hevc.hhi.fraunhofer.de/Accessed: 20200221 Cited by: 3rd item.
 [17] How much data does Netflix use?. Note: https://www.howtogeek.com/338983/howmuchdatadoesnetflixuse/Accessed: 20200228 Cited by: §1, §4.2.
 [18] (2020) Learning endtoend lossy image compression: a benchmark. arXiv:2002.03711. Cited by: §3.
 [19] (2017) FlowNet 2.0: Evolution of Optical Flow Estimation with Deep Networks. In cvpr, Cited by: §4.2.
 [20] (2015) Spatial Transformer Networks. In nips, Cited by: §2.3.1.
 [21] (2017) The Kinetics Human Action Video Dataset. arxiv:1705.06950. Cited by: §4.1.
 [22] (2015) Adam: A Method for Stochastic Optimization. In iclr, Cited by: §4.1.
 [23] (2009) Probabilistic graphical models: principles and techniques. MIT Press. External Links: ISBN 9780262013192, LCCN 2009008615 Cited by: §0.F.1, §3.
 [24] (2018) Learning Blind Video Temporal Consistency. In eccv, Cited by: §4.4.1.
 [25] (2019) Nonlocal Attention Optimized Deep Image Compression. arXiv:1904.09757. Cited by: §3.
 [26] (2020) Learned Video Compression via Joint SpatialTemporal Correlation Exploration. In aaai, Cited by: §0.F.1, Appendix 0.F, §1, Figure 3, §3, §3, §4.2, §4.3.
 [27] (2019) DVC: An EndtoEnd Deep Video Compression Framework. In cvpr, Cited by: §1, §2.3.4, Figure 3, §3, §3, §4.2, §4.2.
 [28] (2018) Conditional Probability Models for Deep Image Compression. In cvpr, Cited by: Figure 9, §2.3.3, §2.3, §3.
 [29] (2019) PyTorch: An Imperative Style, HighPerformance Deep Learning Library. In nips, Cited by: §4.1.
 [30] (2011) Digital Signal Compression: Principles and Practice. Cambridge University Press. Cited by: footnote 1.
 [31] (2017) Realtime adaptive image compression. In icml, Cited by: §2.3, §3.
 [32] (2019) Learned Video Compression. In iccv, Cited by: Appendix 0.F, §1, §2.3.2, §2.3.4, Figure 3, §3, §3, §3, §4.3.
 [33] (2015) UNet: Convolutional Networks for Biomedical Image Segmentation. In miccai, Cited by: §2.3.4.

[34]
(2015)
Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting
. In nips, Cited by: §1, §2.3.2, §3.  [35] (201212) Overview of the High Efficiency Video Coding (HEVC) Standard. IEEE Trans. Circuits Syst. Video Technol. 22 (12), pp. 1649–1668. External Links: ISSN 10518215 Cited by: §4.2, footnote 2.
 [36] (2017) Lossy Image Compression with Compressive Autoencoders. In iclr, Cited by: §2.1, §2.3, §3.
 [37] Ultra Video Group test sequences. Note: http://ultravideo.cs.tut.fi/Accessed: 20200221 Cited by: §1, §4.1.
 [38] (2016) Conditional Image Generation with PixelCNN Decoders. In nips, Cited by: §1, §2.3.3.
 [39] (200901) Mean Squared Error: Love it or leave it? A new look at Signal Fidelity Measures. In IEEE Signal Processing Magazine, Vol. 26, pp. 98–117. External Links: ISSN 15580792 Cited by: §0.G.1.
 [40] (2004) Image quality assessment: from error visibility to structural similarity. IEEE Trans. on Image Processing 13 (4), pp. 600–612. Cited by: §0.G.1, §1, §2.1, §4.1.
 [41] (2018) Faithful Inversion of Generative Models for Effective Amortized Inference. In nips, Cited by: §3.
 [42] (2003) Overview of the H.264/AVC video coding standard. IEEE Transactions on Circuits and Systems for Video Technology 13 (7), pp. 560–576. External Links: ISSN 15582205 Cited by: §4.2.
 [43] (2018) Video Compression through Image Interpolation. In eccv, Cited by: Appendix 0.F, §1, footnote 2.
 [44] (2019) Video Enhancement with TaskOriented Flow. ijcv 127 (8). Cited by: §2.3.4.
 [45] (2019) Feedback Recurrent AutoEncoder. In ICASSP, Cited by: §1, §2.3.2, §2.3.2, §3.
 [46] (2017) Loss Functions for Image Restoration With Neural Networks. In IEEE Transactions on Computational Imaging, Cited by: §0.G.1, §4.4.2.
Appendix 0.A Model and training details
The detailed architecture of the encoder and decoder in Fig. 2(a) is illustrated in Fig. 9. For prior modeling, we use the same network architecture as described in Section B.2 of [12]. The detailed architecture of the optical flow estimation network in Fig. 2(b) is illustrated in Fig. 10.
As can be seen in Fig. 9, we use BatchNorm throughout the encoder and decoder architectures to help stabilize training. However, since the previous decoded frame is fed back into the encoder at the next time step, it is not possible during training to apply the same normalization across all the unrolled time steps, which leads to an inconsistency between training and evaluation. To resolve this issue, we switch BatchNorm to evaluation mode at training iteration 40k, after which its parameters are no longer updated.
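The mechanism can be illustrated with a minimal, self-contained sketch of batch normalization (a hypothetical numpy stand-in for our actual PyTorch modules): in training mode the layer normalizes with batch statistics and updates its running estimates; once switched to evaluation mode it uses only the frozen running statistics, so all unrolled time steps see a consistent normalization.

```python
import numpy as np

class TinyBatchNorm:
    """Minimal 1-D batch norm (illustrative stand-in, not nn.BatchNorm2d)."""
    def __init__(self, momentum=0.1, eps=1e-5):
        self.running_mean, self.running_var = 0.0, 1.0
        self.momentum, self.eps = momentum, eps
        self.training = True

    def __call__(self, x):
        if self.training:
            m, v = x.mean(), x.var()
            # running statistics are updated only in training mode
            self.running_mean = (1 - self.momentum) * self.running_mean + self.momentum * m
            self.running_var = (1 - self.momentum) * self.running_var + self.momentum * v
        else:
            m, v = self.running_mean, self.running_var
        return (x - m) / np.sqrt(v + self.eps)

bn = TinyBatchNorm()
batch = np.array([10.0, 12.0, 14.0])
_ = bn(batch)            # training step: running stats get updated
bn.training = False      # the "switch at iteration 40k" described above
frozen_mean = bn.running_mean
_ = bn(batch)            # eval mode: stats are frozen from here on
assert bn.running_mean == frozen_mean
```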
Appendix 0.B Images used in figures
Appendix 0.C ffmpeg and HM experiments
We generated the H.264 and H.265 baselines using the ffmpeg and HM software tools. Both HM and ffmpeg received the UVG videos in their native YUV 1080p 8-bit format as input. The command we used to run ffmpeg in low-latency mode is as follows:
ffmpeg -y -pix_fmt yuv420p -s [W]x[H] -r [FR] -i [IN].yuv
-c:v libx[ENC] -b:v [RATE]M -maxrate [RATE]M -tune zerolatency
-x[ENC]-params "keyint=[GOP]:min-keyint=[GOP]:verbose=1" [OUT].mkv
and the command that we used to run ffmpeg in default settings is as follows:
ffmpeg -y -pix_fmt yuv420p -s [W]x[H] -r [FR] -i [IN].yuv
-c:v libx[ENC] -b:v [RATE]M -maxrate [RATE]M
-x[ENC]-params "verbose=1" [OUT].mkv
where the values in brackets represent the encoder parameters: W and H are the frame dimensions (1920×1080), FR is the frame rate (120), ENC is the encoder type (264 or 265), GOP is the GoP size (12 for low-latency settings), IN and OUT are the input and output filenames, respectively, and RATE controls the target bit rate in Megabits/second (the set of rates we swept translates to the bits/pixel range reported for UVG 1080p at 120 fps). It is worth mentioning that we did not set the preset flag in ffmpeg that controls coding performance. As a result, it used the default value, medium, unlike some existing papers that employed ffmpeg with the fast or superfast presets, which led to poor ffmpeg performance.
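The low-latency invocation above can also be assembled programmatically. The helper below is a hypothetical sketch (the function name and the concrete parameter values in the example call are illustrative only; the flags are those of the command above):

```python
# Hypothetical helper assembling the low-latency ffmpeg command above.
def ffmpeg_lowlatency_cmd(w, h, fr, enc, rate, gop, inp, out):
    assert enc in ("264", "265")     # libx264 / libx265
    return [
        "ffmpeg", "-y", "-pix_fmt", "yuv420p",
        "-s", f"{w}x{h}", "-r", str(fr), "-i", f"{inp}.yuv",
        "-c:v", f"libx{enc}", "-b:v", f"{rate}M", "-maxrate", f"{rate}M",
        "-tune", "zerolatency",
        f"-x{enc}-params", f"keyint={gop}:min-keyint={gop}:verbose=1",
        f"{out}.mkv",
    ]

# illustrative values: 1080p at 120 fps, x265, 10 Mbit/s, GoP of 12
cmd = ffmpeg_lowlatency_cmd(1920, 1080, 120, "265", 10, 12, "in", "out")
# subprocess.run(cmd, check=True) would launch the encode
```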
The command we used to run HM is as follows:
TAppEncoderStatic -c [CONFIG].cfg -i INPUT.yuv -wdt [W] -hgt [H]
-fr [FR] -f [LEN] -o OUTPUT.yuv -b [BIN].bin -ip [GOP] -q [QP] > [LOG].log
where CONFIG is the HM configuration file (we used LowDelayP.cfg), LEN is the number of frames in the video sequence (600 or 300 for UVG videos), LOG is the log file from which we read the bit rate values, and QP controls the quality of the encoded video, leading to different bit rates (we swept over a range of QP values in this work).
MS-SSIM for all experiments was calculated in the RGB domain at the video level. Bit rates for ffmpeg were calculated by reading the compressed file size, and for HM by parsing the HM log file. Both values were then normalized by the frame dimensions and frame rate.
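This normalization can be sketched as follows; the helper name and the file size in the example call are illustrative, not measured values:

```python
# Sketch: compressed file size -> bits per pixel and Mbit/s.
def rate_metrics(file_size_bytes, width, height, num_frames, fps):
    total_bits = file_size_bytes * 8
    bpp = total_bits / (width * height * num_frames)   # bits per pixel
    mbps = total_bits / (num_frames / fps) / 1e6       # duration = frames / fps
    return bpp, mbps

# illustrative: a 600-frame 1080p/120fps sequence compressed to ~9.3 MB
bpp, mbps = rate_metrics(file_size_bytes=9_331_200,
                         width=1920, height=1080, num_frames=600, fps=120)
assert abs(bpp - 0.06) < 1e-12
```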
0.c.1 YUV to RGB conversion inconsistencies
Since the UVG raw frames are available in YUV format and our model works only in the RGB domain, we had to convert the UVG frames from the YUV to the RGB color space. We considered two methods to convert the UVG 1080p 8-bit videos to RGB: (i) use ffmpeg to read the frames in YUV format and directly save them in RGB format; (ii) read the frames in YUV and convert them to RGB via the COLOR_YUV2BGR_I420 functionality provided by the OpenCV [6] package. We also tried a third scenario: (iii) use ffmpeg to read the UVG 4K 10-bit videos in YUV format and save the RGB frames. Fig. 11 shows the performance of our model on these three different RGB versions of UVG. As can be seen from this figure, there is an inconsistency in our model's performance across the different UVG versions. A potential cause for this behavior is the interpolation technique employed in the color conversion, since in the YUV420 format the U and V channels are subsampled and need to be upsampled before the YUV to RGB conversion. The results reported in this paper are based on the OpenCV color conversion.
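The sketch below illustrates one plausible conversion path, using full-range BT.601 constants and naive nearest-neighbour chroma upsampling; actual tools such as ffmpeg or OpenCV may use different constants and interpolation filters, which is exactly the source of ambiguity discussed above:

```python
import numpy as np

def yuv420_to_rgb(y, u, v):
    """y: (H, W); u, v: (H//2, W//2) subsampled chroma planes (uint8)."""
    # nearest-neighbour chroma upsampling; real converters may interpolate
    u = u.repeat(2, axis=0).repeat(2, axis=1).astype(np.float64) - 128.0
    v = v.repeat(2, axis=0).repeat(2, axis=1).astype(np.float64) - 128.0
    y = y.astype(np.float64)
    r = y + 1.402 * v                          # full-range BT.601
    g = y - 0.344136 * u - 0.714136 * v
    b = y + 1.772 * u
    return np.clip(np.stack([r, g, b], axis=-1), 0, 255).astype(np.uint8)

# sanity check: a uniform grey frame (U = V = 128) must map to R = G = B = Y
h, w = 4, 4
rgb = yuv420_to_rgb(np.full((h, w), 90, np.uint8),
                    np.full((h // 2, w // 2), 128, np.uint8),
                    np.full((h // 2, w // 2), 128, np.uint8))
assert (rgb == 90).all()
```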
Appendix 0.D Qualitative examples
We provide in this section more qualitative examples for our method in Fig. 12 and for H.265 in Fig. 13. Notably, we overlay the MS-SSIM and MSE error maps on the original frame to give an indication of the distortion characteristics of each algorithm. In addition, in Fig. 12 we show the BPP maps, as we have access to the entropy of the corresponding latents from the I-frame and P-frame prior models. We average the entropy across channels, then use bilinear upsampling to bring the BPP map back to the original image resolution.
By examining the BPP maps in Fig. 12 (third row), we see that, intuitively, the model spends bits more uniformly across I-frames (first and last column), while for P-frames bits are mostly spent around the edges of moving objects.
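The BPP-map construction described above can be sketched as follows (the shapes and the hand-rolled bilinear interpolation are illustrative; standard library routines can equally be used):

```python
import numpy as np

def bpp_map(entropy_bits, frame_h, frame_w):
    """Average per-channel latent entropies, bilinearly upsample to frame size."""
    avg = entropy_bits.mean(axis=-1)                  # (h, w): mean over channels
    h, w = avg.shape
    # target pixel centers mapped into latent-grid coordinates
    ys = (np.arange(frame_h) + 0.5) / frame_h * h - 0.5
    xs = (np.arange(frame_w) + 0.5) / frame_w * w - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 1)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 1)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    wy = np.clip(ys - y0, 0.0, 1.0)[:, None]
    wx = np.clip(xs - x0, 0.0, 1.0)[None, :]
    top = (1 - wx) * avg[y0][:, x0] + wx * avg[y0][:, x1]
    bottom = (1 - wx) * avg[y1][:, x0] + wx * avg[y1][:, x1]
    return (1 - wy) * top + wy * bottom

# sanity check: constant entropy must stay constant after average + upsample
m = bpp_map(np.full((4, 4, 8), 2.0), 32, 32)
assert m.shape == (32, 32) and np.allclose(m, 2.0)
```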
Note that our method optimizes for MS-SSIM, while traditional video codecs like H.265 are tuned for PSNR. As detailed in Appendix 0.G, MS-SSIM is insensitive to uniform color shifts and mainly focuses on spatial structure. PSNR, on the contrary, does pick up color shifts, but tends to care less about fine spatial structure, i.e., texture and sharp edges. Our method's MS-SSIM maps (Fig. 12, 4th row) show that edges are sharply reconstructed, yet they do not fully capture some of the color shifts in the reconstruction. In particular, the man in the pink suit has a consistently dimmer suit in the reconstruction (compare the first and second rows); this shift barely registers in the MS-SSIM map, while MSE does pick it up. In contrast, in Fig. 13 the H.265 MSE maps (last row) show that the error is concentrated around most edges and fine textures due to blurriness, yet most colors are on target.
We observe that for our method the MS-SSIM error maps degrade as we go from I-frame to P-frame (compare the first and second-to-last columns in Fig. 12). Similarly, the BPP map of the first I-frame shows a large rate expenditure, while subsequent P-frames require significantly less rate. Both of these qualitative results confirm the quantitative analysis of Fig. 7.
Appendix 0.E Theoretical justification of the feedback recurrent module
In this section we provide a theoretical motivation for the feedback recurrent module in our network design. Specifically, we show that, for the compression of sequential data with a causal encoder and a causal decoder, (i) it is imperative for the encoder to have a memory that summarizes previously transmitted latent codes, and (ii) it is not beneficial for the encoder to access past input frames.
Consider an abstract view of a generic sequential autoencoder, where $\{x_t\}$ denotes a discrete-time random process representing the empirical distribution of our training video data, and $\{z_t\}$ and $\{\hat{x}_t\}$ are the induced latent and reconstruction processes.
From an information-theoretic point of view, the sequential autoencoding problem can be abstracted as the maximization of the average frame-wise mutual information between $x_t$ and $\hat{x}_t$, subject to a rate constraint on $z_{1:T}$:

$$\max \ \frac{1}{T}\sum_{t=1}^{T} I(x_t;\hat{x}_t) \quad \text{(frame-wise mutual information)}$$
$$\text{s.t.} \ \hat{x}_t \ \text{is a function of} \ z_{1:t} \ \text{only} \quad \text{(decoder causality)}$$

Note that the objective is different from the joint mutual information $I(x_{1:T};\hat{x}_{1:T})$: if we instead maximized the joint quantity, a single output frame could capture the information of more than one input frame, which is not what we want.
The decoder causality constraint states that $\hat{x}_t$ is a function of $z_{1:t}$ alone, i.e., that given $z_{1:t}$ the reconstruction $\hat{x}_t$ carries no information about $z_{t+1:T}$. It is important to note that mutual information is invariant to bijections and thus does not reflect the perceptual quality of the reconstruction. Nevertheless, we use this formulation only to identify the important data dependencies from an information-theory perspective, and use those to guide the design of the network architecture. With a closer look, the term $I(x_t;\hat{x}_t)$ in the objective function can be rewritten as below.
$$I(x_t;\hat{x}_t) \ \overset{(a)}{\leq} \ I(x_t; z_{1:t}) \ \overset{(b)}{=} \ I(x_t; z_{1:t-1}) + I(x_t; z_t \mid z_{1:t-1}) \qquad (3)$$

Step (a) incorporates the decoder causality constraint through the data processing inequality, since $\hat{x}_t$ is a deterministic function of $z_{1:t}$, and step (b) is the chain rule of mutual information, $I(x;\, y, z) = I(x; y) + I(x; z \mid y)$.
Equation (3) says that $I(x_t;\hat{x}_t)$ can be broken down into two terms: the first, $I(x_t; z_{1:t-1})$, represents the prediction of $x_t$ from all previous latents, and the second, $I(x_t; z_t \mid z_{1:t-1})$, represents the new information in $z_t$ about $x_t$ that is not already explained by $z_{1:t-1}$. While this formulation does not indicate how the optimization can be done, it tells us which variables should be incorporated in the design of $z_t$. Let us focus on $z_t$: in the objective function $\sum_\tau I(x_\tau;\hat{x}_\tau)$, $z_t$ shows up in the term $I(x_t; z_t \mid z_{1:t-1})$ and, for $\tau > t$, in the terms $I(x_\tau; z_{1:\tau-1})$ and $I(x_\tau; z_\tau \mid z_{1:\tau-1})$.
It is clear, then, that the optimal $z_t$ should only be a function of $x_t$, $z_{1:t-1}$, and the future frames $x_{t+1:T}$. Combining this with the constraint that the encoder is causal, we can further limit the dependency to $x_t$ and $z_{1:t-1}$. In other words, it suffices to parameterize the encoder as $z_t = f_{\mathrm{enc}}(x_t, z_{1:t-1})$, which attests to the two claims at the beginning of this section: (i) $z_t$ should be a function of $z_{1:t-1}$, and (ii) $z_t$ does not need to depend on $x_{1:t-1}$.
It is with these two claims that we designed a network in which the decoder recurrent state, which provides a summary of $z_{1:t-1}$, is fed back to the encoder as input at time step $t$, with no additional input related to $x_{1:t-1}$. Since $\hat{x}_{t-1}$ is a deterministic function of $z_{1:t-1}$, feeding $\hat{x}_{t-1}$ to the encoder as an additional input is also justified.
It is worth noting that in [11, 10] the authors introduce a neural network architecture for the progressive coding of images, in which an encoder recurrent connection is added on top of the feedback recurrent connection from the decoder. Based on the analysis in this section, since the optimal $z_t$ does not depend on $x_{1:t-1}$, we do not include encoder recurrence in our network design.
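The chain rule invoked in step (b) of Eq. (3) can be verified numerically on a small discrete joint distribution. The sketch below is a sanity check only, not part of the method:

```python
import itertools, math, random

# Check I(x; z1, z2) = I(x; z1) + I(x; z2 | z1) on a random binary joint.
random.seed(0)
states = list(itertools.product(range(2), repeat=3))   # (x, z1, z2)
w = [random.random() for _ in states]
total = sum(w)
p = {s: wi / total for s, wi in zip(states, w)}        # strictly positive pmf

def marg(keep):
    """Marginal distribution over the given coordinate indices."""
    out = {}
    for s, ps in p.items():
        key = tuple(s[i] for i in keep)
        out[key] = out.get(key, 0.0) + ps
    return out

def mi(a, b):
    """Mutual information I(s[a]; s[b]) in nats."""
    pa, pb, pab = marg(a), marg(b), marg(a + b)
    return sum(pab[k] * math.log(pab[k] / (pa[k[:len(a)]] * pb[k[len(a):]]))
               for k in pab)

def cond_mi_x_z2_given_z1():
    """I(x; z2 | z1) computed directly from its definition."""
    pxz1, pz1z2, pz1 = marg([0, 1]), marg([1, 2]), marg([1])
    return sum(ps * math.log(pz1[(s[1],)] * ps /
                             (pxz1[(s[0], s[1])] * pz1z2[(s[1], s[2])]))
               for s, ps in p.items())

lhs = mi([0], [1, 2])                          # I(x; z1, z2)
rhs = mi([0], [1]) + cond_mi_x_z2_given_z1()   # I(x; z1) + I(x; z2 | z1)
assert abs(lhs - rhs) < 1e-9
```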
Appendix 0.F Graphical modeling considerations
In this section we give more details on the reasoning behind the graphical models presented in Fig. 3.
Wu et al. [43]
Generative model.
A temporally independent prior is used, hence there are no edges between the latent variables $z_t$.
Consecutive reconstructions depend on the previous reconstructions, hence the edge $\hat{x}_{t-1} \to \hat{x}_t$ between them.
Inference model.
During inference, at time step $t$, the current latent $z_t$ is inferred from the previous reconstruction $\hat{x}_{t-1}$ and the current original frame $x_t$. In turn, the previous reconstruction $\hat{x}_{t-1}$ is determined by the previous latent $z_{t-1}$ and the earlier reconstruction $\hat{x}_{t-2}$. Since the procedure of determining $\hat{x}_{t-1}$ from $z_{t-1}$ and $\hat{x}_{t-2}$ is deterministic, the inference model has an edge $z_{t-1} \to z_t$. Extending this point, since the reconstruction is updated at each time step by a deterministic procedure from its previous value and the latest latent, $z_t$ is dependent on all the past latent codes $z_{1:t-1}$. Hence there are edges from all the past latents to the present latent $z_t$. The edge $x_t \to z_t$ arises from the direct dependence of $z_t$ on $x_t$ through the encoder.
Liu et al. [26]
Generative model.
Due to the introduction of the time-autoregressive code prior, there are additional edges between the latents as compared to model (a); these are shown with dashed lines.
Inference model.
The inference model remains unchanged.
Rippel et al. [32] and ours
Generative model.
Model (b) differs from model (a) in the introduction of the recurrent connection and hence the hidden state $h_t$, so the full graphical model could be drawn as per Fig. 14. However, $h_t$ is not a stochastic random variable, but the result of a deterministic function of the previous state and the received latents. For that reason we can drop explicitly drawing the $h_t$ nodes and move the deterministic relationship they imply into the graphical model edges between the latent variables $z_t$. This way we arrive from the models in Fig. 14 at the models in Fig. 3(b).

Inference model. Analogous reasoning applies in the case of the inference model.
Han et al. [13]
Graphical models are derived based on equations by Han et al. [13]: Eq. (4) and Eq. (6) for the generative model, and Eq. (5) for the inference model.
Habibian et al. [12]
Generative model.
Due to the use of a prior which is autoregressive in the temporal dimension (across the blocks of 8 frames that Habibian et al. are using), we draw edges between consecutive latent variables.
The latent codes are generated by a deterministic composition of 3D convolutional layers applied to the original frames. The temporal receptive field is larger than the original input sequence, which means that every latent variable in the block can be influenced by each of the original input frames, hence the fully connected clique between the frames and the latents.
Inference model.
The same reasoning applies for the inference model.
0.f.1 Full flexibility of the marginal
The graphical model corresponding to the approach of Liu et al. [26] is presented in Fig. 3(a), including the dashed lines. In this case, the marginal over the frames is fully flexible in the sense that it does not make any conditional independence assumptions between $x_{1:T}$.
To see this, consider what happens when we marginalize out the latents $z_{1:T}$ from the joint distribution using the variable elimination algorithm [23]. Assume we use an elimination order given by a permutation $\pi$ of $\{1, \dots, T\}$, and think about the process of variable elimination in terms of the induced graph. Eliminating the first variable $z_{\pi(1)}$ induces a connection between its observed variable $x_{\pi(1)}$ and each of the remaining latent variables. Eliminating consecutive latent variables likewise induces connections between the corresponding observed variables and every remaining latent. As a consequence, the final latent variable to be eliminated, $z_{\pi(T)}$, is connected in the induced graph to all the observed variables $x_{1:T}$, so eliminating it results in a factor over all of $x_{1:T}$ (a clique between all the $x$ nodes of the induced graph). The marginal distribution we are looking for, $p(x_{1:T})$, therefore makes no conditional independence assumptions.
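The induced-graph argument can be simulated directly. The sketch below builds the undirected (moralized) graph of a simplified chain-structured stand-in for the models discussed here, with edges $z_t - x_t$ and $z_{t-1} - z_t$, and checks that for every elimination order over the latents the final factor couples all the observed frames:

```python
import itertools

def eliminate(neighbors, order):
    """Eliminate variables in `order`, adding fill-in edges; return the
    scope of the factor created by the last elimination."""
    nbrs = {v: set(ns) for v, ns in neighbors.items()}   # work on a copy
    last_scope = None
    for v in order:
        scope = nbrs.pop(v)
        for a, b in itertools.combinations(scope, 2):
            nbrs[a].add(b)          # fill-in edges of the induced graph
            nbrs[b].add(a)
        for u in scope:
            nbrs[u].discard(v)
        last_scope = scope
    return last_scope

# chain-structured model over T steps: z_t - x_t and z_{t-1} - z_t
T = 4
graph = {}
for t in range(T):
    graph[f"x{t}"] = {f"z{t}"}
    graph[f"z{t}"] = {f"x{t}"}
for t in range(1, T):
    graph[f"z{t}"].add(f"z{t-1}")
    graph[f"z{t-1}"].add(f"z{t}")

xs = {f"x{t}" for t in range(T)}
# whatever order we eliminate the latents in, the last factor couples all x_t
for order in itertools.permutations([f"z{t}" for t in range(T)]):
    assert eliminate(graph, order) == xs
```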
The graphical model corresponding to our approach and that of Rippel et al. is presented in Fig. 3(b). In this case, showing that the marginal makes no conditional independence assumptions between the frames is simpler: no matter which variable elimination order we choose, the last latent to be eliminated is connected to all the observed nodes, so eliminating it always results in a factor over all of $x_{1:T}$.
Appendix 0.G MS-SSIM color issue and corner artifact
0.g.1 MS-SSIM color shift artifact
MS-SSIM, introduced in [40], is known to be to some extent invariant to color shifts [46]. Inspired by the study in Fig. 4 of [39], we designed a small experiment to probe the extent of this issue. We disturb an image $x$ slightly with additive Gaussian noise to obtain an initial image $y_0$ that lies on a certain equal-MS-SSIM hypersphere around $x$ (we choose MS-SSIM $= 0.966$, which corresponds to the average quality of our second-lowest-rate model). Then we optimize $y$ for lower PSNR, under the constraint of equal MS-SSIM, using the following objective:
$$\mathcal{L}(y) = -\,\mathrm{MSE}(x, y) + \lambda_k \left| \mathrm{MS\text{-}SSIM}(x, y) - 0.966 \right| \qquad (4)$$
We use the Adam optimizer with a fixed learning rate (otherwise default parameters) for a fixed number of iterations, together with a schedule $\lambda_k$ for the penalty weight, where $k$ represents the iteration index.
In Fig. 15, (a) corresponds to the original frame $x$, (b) to the initial perturbation $y_0$, and (c) and (d) to the optimized frame $y_k$ at an early and a late iteration, respectively. Frames (b), (c) and (d) all lie on the equal-MS-SSIM hypersphere of 0.966 with respect to the original frame $x$. We observe that it is possible to obtain an image with very low PSNR, mainly due to drastic uniform color shifts, while remaining at an equal distance from the original image in terms of MS-SSIM. This helps explain some of the color artifacts we have seen in the output of our models.
0.g.2 MS-SSIM corner artifact
The standard implementation of MS-SSIM, when used as a training loss, causes corner artifacts, as shown in Fig. 16. Although the artifact is subtle and barely noticeable when looking at full-size video frames, it is worth investigating. We found the root cause in the convolution-based implementation of the local mean calculations in the SSIM function, where a Gaussian kernel (normally of size 11) is convolved with the input image. The convolution is done without padding; as a result, the contribution of the corner pixels to MS-SSIM is minimal, as they only ever fall at the tails of the Gaussian kernel. This is not problematic when MS-SSIM is used for evaluation, but it can negatively affect training. Specifically, in our case, where MS-SSIM was used in conjunction with rate to train the network, the network took advantage of this deficiency and assigned fewer bits to the corner pixels, as they matter less to the distortion term. As a result, the corner pixels appeared blank in the decoded frames. We fixed this issue by padding the image (e.g., with replicate padding) before convolving it with the Gaussian kernel; the artifact was eliminated, as can be seen in Fig. 16. It is worth mentioning that, for consistency with other works, we used the standard MS-SSIM implementation in all the trainings and evaluations in this paper.
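The weight argument can be made concrete. With the conventional 11-tap, $\sigma = 1.5$ SSIM window, the sketch below compares the total weight an interior pixel receives across all "valid" (unpadded) window positions with the weight a corner pixel receives: the corner pixel appears in exactly one valid window, at the extreme tail of the kernel.

```python
import numpy as np

# The conventional SSIM window: 11 taps, sigma = 1.5, normalized to sum to 1.
def gaussian_kernel(size=11, sigma=1.5):
    g = np.exp(-((np.arange(size) - size // 2) ** 2) / (2 * sigma ** 2))
    return g / g.sum()

g = gaussian_kernel()
k = np.outer(g, g)            # 11x11 window, k.sum() ~= 1

# Without padding, pixel (0, 0) appears in exactly one valid window,
# weighted by the extreme corner tap of the kernel ...
corner_weight = k[0, 0]
# ... while an interior pixel appears in all 121 windows, once at each kernel
# position, so its total weight is the full kernel mass.
interior_weight = k.sum()

assert corner_weight < 1e-4
assert abs(interior_weight - 1.0) < 1e-9
```

Replicate padding restores full-mass windows at the borders, which is why it removes the incentive to starve the corners of bits.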