
Slimmable Video Codec

05/13/2022
by   Zhaocheng Liu, et al.

Neural video compression has emerged as a novel paradigm combining trainable multilayer neural networks and machine learning, achieving competitive rate-distortion (RD) performance, but it remains impractical due to heavy neural architectures with large memory and computational demands. In addition, models are usually optimized for a single RD tradeoff. Recent slimmable image codecs can dynamically adjust their model capacity to gracefully reduce the memory and computation requirements, without harming RD performance. In this paper we propose a slimmable video codec (SlimVC), which integrates a slimmable temporal entropy model in a slimmable autoencoder. Despite a significantly more complex architecture, we show that slimming remains a powerful mechanism to control rate, memory footprint, computational cost and latency, all of which are important requirements for practical video compression.



1 Introduction

During the last two decades, video has become the dominant form of communication in the digital society. This has led to explosive growth, with video content now accounting for more than 80% of global data traffic. The basic (lossy) video compression objective consists of transmitting as few bits as possible (i.e. minimizing rate) while representing the input sequence at a certain level of fidelity (i.e. distortion). Video is now consumed on heterogeneous devices ranging from TV sets to smartphones. Furthermore, real-time video conferencing has become a household technology, pervasive in work and educational environments. These practical scenarios impose additional constraints on the design of video codecs, such as dynamically controllable rate, low computational and memory footprint, and low latency. Together with the previous rate and distortion objectives, they constitute the more challenging problem of practical video compression.

In parallel, the deep learning revolution has motivated a new compression paradigm based on parametric encoders and decoders implemented as deep neural networks and optimized with data. This compression approach has been applied successfully first to images [2016arXiv161101704B, 2018arXiv180201436B, 2018arXiv180902736M] and then to videos [DVC, HLVC]. This paradigm contrasts with the traditional hybrid video coding paradigm, based on block-based linear transforms and carefully engineered coding tools (e.g. H.264/AVC, H.265/HEVC). Focusing on improving rate-distortion performance, most neural image and video codecs are impractical, since they require heavy and complex networks. Practical aspects, in contrast, have always been carefully considered in the design of traditional codecs. Unlike previous works, our paper focuses chiefly on those practical constraints, proposing a lightweight and flexible design for practical neural video compression.

Our design is based on a slimmable autoencoder augmented with a slimmable temporal entropy model. This design is motivated by two recent works. Motivated by the empirical observation that lower rates do not require the use of full capacity, Yang et al. [9578334] proposed the slimmable compressive autoencoder (SlimCAE) architecture, where slimming becomes a flexible mechanism to both vary the rate-distortion tradeoff and control the complexity. However, extending SlimCAE to video by including temporal prediction is not trivial, since most designs require additional modules to estimate and compensate motion (e.g. optical flow networks, motion compensation networks). Slimmable designs of such modules are not straightforward, nor is their potential interplay with other elements in the compression framework. Recently, Sun et al. [STEM] proposed the spatiotemporal entropy model (STEM), a motion-free framework where temporal prediction is performed directly in the entropy model, without any motion estimation or compensation. In our framework we adopt part of STEM's entropy model and propose a slimmable version, thus obtaining a fully slimmable codec.

In summary, this work contributes a novel slimmable video codec (SlimVC) designed to address practical challenges in the neural video compression paradigm via a simple slimming mechanism. Experiments show that our slimmable model can effectively exploit temporal redundancy without a significant drop in RD performance compared to independent models.

2 SlimCAE and STEM

2.1 Slimmable compressive autoencoder

Neural image codecs are typically implemented as compressive autoencoders (CAEs) [theis2017lossy, 2016arXiv161101704B], consisting of autoencoders augmented with quantization and entropy coding. The encoder, parametrized by $\theta$, transforms the input image $x$ into a latent representation $y$, which is then quantized as $\hat{y}$, and the entropy encoder maps it to the bitstream $b$. In the decoder, $b$ is mapped back to the reconstructed latent representation $\hat{y}$, and the decoder, parametrized by $\phi$, recovers the reconstructed image $\hat{x}$. During training, quantization is replaced by a differentiable proxy (additive uniform noise, in our case), entropy coding is bypassed, and the rate is approximated by the entropy of the latent representation. This requires a model of the probability distribution of $\hat{y}$, parametrized by $\nu$. This model, usually referred to as the entropy model, has been the source of many improvements in RD performance, by including hyperpriors [2018arXiv180201436B] and autoregressive models [2018arXiv180902736M].
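To make the pipeline above concrete, the following PyTorch-style sketch shows a toy compressive autoencoder step with the additive-uniform-noise quantization proxy and a crude per-channel Gaussian entropy model; the architecture, layer sizes and names are hypothetical simplifications for illustration, not the networks used in this paper.

```python
import torch
import torch.nn as nn

class TinyCAE(nn.Module):
    """Toy compressive autoencoder (hypothetical, for illustration only)."""

    def __init__(self, ch=192):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, ch, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(ch, ch, 5, stride=2, padding=2))
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(ch, ch, 5, stride=2, padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(ch, 3, 5, stride=2, padding=2, output_padding=1))
        # Crude per-channel zero-mean Gaussian entropy model (placeholder).
        self.log_scale = nn.Parameter(torch.zeros(ch))

    def forward(self, x):
        y = self.encoder(x)                              # latent representation y
        if self.training:
            y_hat = y + (torch.rand_like(y) - 0.5)       # additive uniform noise proxy
        else:
            y_hat = torch.round(y)                       # hard quantization at test time
        scale = self.log_scale.exp().view(1, -1, 1, 1)
        dist = torch.distributions.Normal(0.0, scale)
        # Probability mass of each quantization bin -> rate estimate in bits.
        p = (dist.cdf(y_hat + 0.5) - dist.cdf(y_hat - 0.5)).clamp_min(1e-9)
        rate_bits = -torch.log2(p).sum()
        x_hat = self.decoder(y_hat)                      # reconstruction
        return x_hat, rate_bits
```

The returned rate estimate and the reconstruction can then be combined with a distortion term into the RD objective discussed next.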

CAEs are typically trained by minimizing an RD objective

$$\mathcal{L}(\theta,\phi,\nu) = \frac{1}{|\mathcal{X}|}\sum_{x\in\mathcal{X}} \left[ R\!\left(\hat{y}\right) + \lambda\, D\!\left(x,\hat{x}\right) \right], \qquad (1)$$

where $\mathcal{X}$ is the set of training images and $\lambda$ is the tradeoff between the rate $R$ of the latent representations and the distortion $D$ between input and reconstructed images, averaged over $\mathcal{X}$.

A slimmable compressive autoencoder (SlimCAE) [9578334] is a CAE whose layers are slimmable. A slimmable layer can discard part of its parameters while still performing a valid operation, which results in less expressiveness but also a lower memory footprint and less computation. The SlimCAE contains $K$ sub-models, each of which is determined by a set of parameters $\left(\theta^{(k)},\phi^{(k)},\nu^{(k)}\right)$, $k=1,\ldots,K$. The parameters of sub-model $k+1$ are a superset of the parameters of sub-model $k$. Finally, the sub-models are trained jointly using a joint loss

$$\mathcal{L} = \sum_{k=1}^{K} \frac{1}{|\mathcal{X}|}\sum_{x\in\mathcal{X}} \left[ R\!\left(\hat{y}^{(k)}\right) + \lambda^{(k)} D\!\left(x,\hat{x}^{(k)}\right) \right]. \qquad (2)$$

In [9578334], the authors showed that if the values $\lambda^{(k)}$ are determined properly for the specific sub-models, SlimCAE can achieve roughly the same RD performance as independent models optimized for single fixed $\lambda$s.
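As a rough illustration of the slimming mechanism and of the joint objective in Eq. (2), the sketch below slices a convolution's weights according to a width factor and sums per-width RD terms. The slicing scheme and the $\lambda^{(k)}$ values are assumptions for illustration; real switchable layers (e.g. switchable GDN) keep separate per-width parameters not shown here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SlimmableConv2d(nn.Module):
    """Convolution that can run with a fraction of its output channels (sketch)."""

    def __init__(self, in_ch, out_ch, kernel_size, stride=1, padding=0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, kernel_size, kernel_size) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_ch))
        self.stride, self.padding = stride, padding

    def forward(self, x, width=1.0):
        out_ch = max(1, round(self.weight.shape[0] * width))
        in_ch = x.shape[1]                       # only the active input channels are used
        return F.conv2d(x, self.weight[:out_ch, :in_ch], self.bias[:out_ch],
                        stride=self.stride, padding=self.padding)

# Joint loss over the K sub-models (Eq. 2): one lambda per width factor.
widths = [0.25, 0.375, 0.5, 0.75, 1.0]
lambdas = [0.002, 0.004, 0.008, 0.016, 0.032]    # hypothetical tradeoff values

def joint_rd_loss(model, x):
    loss = 0.0
    for w, lam in zip(widths, lambdas):
        x_hat, rate_bits = model(x, width=w)     # sub-model k reuses the weights of k+1
        rate = rate_bits / x.numel()             # normalized rate (illustrative)
        loss = loss + rate + lam * F.mse_loss(x_hat, x)
    return loss
```

Because every sub-model shares the leading channels of the full model, a single set of weights serves all operating points.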

2.2 Spatiotemporal entropy model

Figure 1: Slimmable video codec framework. In the slimmable modules (SlimFE, SlimFD, SlimHE, SlimHD, SlimTPM and SlimEPM), dashed lines represent the full capacity of the module, and solid ones the specific capacity after slimming to a particular operating point.

Sun et al. [STEM] proposed a motion-free video compression method, observing that inter-frame redundancy can be exploited efficiently in the entropy module via a spatiotemporal entropy model (STEM) [STEM], without requiring motion estimation. In this model, the hyperencoder (HE) of the hyperprior receives the latent representations $y_t$ and $y_{t-1}$ of the current and the previous frames, allowing it to exploit temporal redundancy and reducing the rate of the side information received by the hyperdecoder (HD). In addition, only the residual latent representation is transmitted in the bitstream. In order to obtain more accurate distribution models, while further exploiting spatial and temporal redundancy, STEM includes a spatial prior module (SPM) and a temporal prior module (TPM), together with an entropy parameters module (EPM) that fuses their information and predicts the actual distribution parameters at time $t$.

SPM is an autoregressive PixelCNN-like network, and provides a relatively minor gain in RD performance at significantly increased computational cost and, in particular, a two-orders-of-magnitude increase in latency (from tenths of a second to tens of seconds, as reported in [STEM] and verified in our implementation). For these practical reasons, we chose not to include the SPM in our framework.
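A minimal sketch of such a motion-free temporal entropy model is given below, assuming the hyperencoder sees the concatenation of the current and previous latents, the TPM sees only the previous latent, and the EPM fuses both priors into Gaussian parameters. Layer counts, channel widths and module names are illustrative guesses rather than the exact STEM or SlimVC architecture, and the SPM is omitted, as in our framework.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalEntropyModel(nn.Module):
    """Motion-free temporal entropy model sketch (simplified, SPM omitted)."""

    def __init__(self, ch=192, hyper_ch=256):
        super().__init__()
        # Hyperencoder sees both the current and the previous latent.
        self.hyper_enc = nn.Sequential(
            nn.Conv2d(2 * ch, hyper_ch, 3, padding=1), nn.LeakyReLU(),
            nn.Conv2d(hyper_ch, hyper_ch, 5, stride=2, padding=2), nn.LeakyReLU(),
            nn.Conv2d(hyper_ch, hyper_ch, 5, stride=2, padding=2))
        self.hyper_dec = nn.Sequential(
            nn.ConvTranspose2d(hyper_ch, hyper_ch, 5, stride=2, padding=2, output_padding=1),
            nn.LeakyReLU(),
            nn.ConvTranspose2d(hyper_ch, 2 * ch, 5, stride=2, padding=2, output_padding=1))
        # Temporal prior module: predicts a prior from the previous latent only.
        self.tpm = nn.Sequential(
            nn.Conv2d(ch, 2 * ch, 5, padding=2), nn.LeakyReLU(),
            nn.Conv2d(2 * ch, 2 * ch, 5, padding=2))
        # Entropy parameters module: fuses both priors into Gaussian mean / scale.
        self.epm = nn.Sequential(
            nn.Conv2d(4 * ch, 4 * ch, 1), nn.LeakyReLU(),
            nn.Conv2d(4 * ch, 2 * ch, 1))

    def forward(self, y_t, y_prev):
        z = self.hyper_enc(torch.cat([y_t, y_prev], dim=1))       # side information
        hyper_prior = self.hyper_dec(torch.round(z))               # decoded hyperprior
        temporal_prior = self.tpm(y_prev)
        params = self.epm(torch.cat([hyper_prior, temporal_prior], dim=1))
        mean, scale = params.chunk(2, dim=1)
        return mean, F.softplus(scale)                             # distribution of y_t
```

The predicted mean and scale parametrize the conditional distribution used to entropy-code the current latent, which is how temporal redundancy is exploited without motion estimation.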

3 Fully slimmable framework

The proposed framework is shown in Fig. 1, where all trainable modules are designed to be slimmable (we use switchable GDNs), including both the feature autoencoder (i.e. SlimFE, SlimFD) and the entropy model (i.e. SlimHE, SlimHD, SlimTPM, SlimEPM). For simplicity, we assume uniform slimming, that is, the width (i.e. number of channels) of every slimmable layer is slimmed by the same factor (we use the same factors as in SlimCAE [9578334], i.e. [0.25, 0.375, 0.5, 0.75, 1]). Table 1 provides more details about the architecture of the slimmable modules. SlimVC is trained in two stages. First, we train it as an image-based SlimCAE with a hyperprior. Then we discard the hyperprior and add the remaining modules of SlimVC (note that SlimCAE's hyperprior is image-based, while SlimHE and SlimHD have distinct architectures and are designed for pairs of frames). Finally, we fix SlimFE and SlimFD and train the remaining slimmable modules.
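The two-stage schedule described above could be organized roughly as in the following sketch, where the model objects, the loss functions, and the slim_fe/slim_fd attribute names are hypothetical, and the optimizer choice is an assumption (only the learning rate is taken from our settings).

```python
import torch

def train_slimvc_two_stages(slim_cae, slim_vc, image_loader, video_loader,
                            joint_rd_loss, joint_rd_loss_video, lr=5e-5):
    """Two-stage training sketch; models and loss functions are passed in."""
    # Stage 1: train the image-based SlimCAE (SlimFE/SlimFD + image hyperprior).
    opt1 = torch.optim.Adam(slim_cae.parameters(), lr=lr)
    for image in image_loader:
        opt1.zero_grad()
        joint_rd_loss(slim_cae, image).backward()   # joint loss over all widths (Eq. 2)
        opt1.step()

    # Stage 2: discard the image hyperprior, reuse and freeze SlimFE/SlimFD,
    # then train only the slimmable temporal entropy modules of SlimVC.
    for p in slim_vc.slim_fe.parameters():
        p.requires_grad = False
    for p in slim_vc.slim_fd.parameters():
        p.requires_grad = False
    opt2 = torch.optim.Adam([p for p in slim_vc.parameters() if p.requires_grad], lr=lr)
    for frame_t, frame_prev in video_loader:
        opt2.zero_grad()
        joint_rd_loss_video(slim_vc, frame_t, frame_prev).backward()
        opt2.step()
```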

Module Architecture Params (millions)
SlimFE SConv9x9s3c48/72/96/144/192-swGDN-SConv5x5s2c48/72/96/144/192-swGDN-SConv5x5s2c48/72/96/144/192-swGDN 0.1/0.3/0.5/1.1/2
SlimFD swIGDN-SDeconv5x5s2c48/72/96/144/192-swIGDN-SDeconv5x5s2c48/72/96/144/192-swIGDN-SDeconv9x9s3c48/72/96/144/192 0.1/0.3/0.5/1.1/2
SlimHE SConv3x3s1c64/96/128/192/256-LReLU-SConv5x5s2c64/96/128/192/256-LReLU-SConv5x5s2c64/96/128/192/256 0.7/1.2/1.7/2.8/4.2
SlimHD SConv5x5s2c64/96/128/192/256-LReLU-SConv5x5s2c64/96/128/192/256-LReLU-SConv3x3s1c160/240/320/480/640 0.9/1.4/2.0/3.3/4.8
SlimTPM SConv5x5s1c107/160/213/320/426-LReLU-SConv5x5s1c133/200/267/400/533-LReLU-SConv5x5s1c160/240/320/480/640 3.0/4.8/6.7/11.1/16.2
SlimEPM SConv1x1s1c400/600/800/1200/1600-LReLU-SConv1x1s1c320/480/640/960/1280-LReLU-SConv1x1s1c96/144/192/288/384 0.8/1.2/1.8/3.0/4.6
Table 1: Details of the slimmable modules in our implementation of SlimVC. Width factors: 0.25/0.375/0.5/0.75/1. SConv/SDeconv: slimmable convolution/transposed convolution; swGDN/swIGDN: switchable GDN/IGDN; LReLU: Leaky ReLU.
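The per-width channel counts reported in Table 1 are consistent with simply scaling each layer's full width by the slimming factor; a small sketch of this computation is given below (rounding to the nearest integer is our assumption).

```python
# Reproduce per-width channel counts of Table 1 by scaling the full width of a
# layer with the slimming factors.
factors = [0.25, 0.375, 0.5, 0.75, 1.0]

def slimmed_widths(full_channels):
    return [max(1, round(full_channels * f)) for f in factors]

print(slimmed_widths(192))   # [48, 72, 96, 144, 192]  -> SlimFE/SlimFD layers
print(slimmed_widths(256))   # [64, 96, 128, 192, 256] -> SlimHE/SlimHD layers
print(slimmed_widths(640))   # [160, 240, 320, 480, 640]
```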

4 Experiments

4.1 Experimental settings

Datasets and training details

We use Open Images [openimages] and CLIC [CLIC] as training datasets during the first training stage, with random crops and a batch size of 16. For the second stage, we use short sequences from the Vimeo-90k dataset [2017arXiv171109078X], with pixel crops and a batch size of 32. The model has five RD operating points (corresponding to the widths [0.25, 0.375, 0.5, 0.75, 1] mentioned earlier). We use a learning rate of 5e-5 and mean squared error (MSE) as the distortion metric.
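For reference, the settings stated above can be collected into a small configuration sketch; the crop sizes are not specified in the text and are therefore left out, and the structure of the dictionary is purely illustrative.

```python
# Illustrative summary of the reported training settings (hypothetical structure).
train_config = {
    "stage1": {"datasets": ["Open Images", "CLIC"], "batch_size": 16},
    "stage2": {"dataset": "Vimeo-90k", "batch_size": 32},
    "width_factors": [0.25, 0.375, 0.5, 0.75, 1.0],   # five RD operating points
    "learning_rate": 5e-5,
    "distortion_metric": "MSE",
}
```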

Methods

SlimVC (GOP=N): the proposed approach after the second stage of training, with a group of pictures of size N. SlimVC (intra-only): the codec resulting from the first stage, which does not exploit temporal redundancy. Independent VCs (GOP=N): the same architecture as SlimVC but with a single width, so each RD point corresponds to a different model trained independently for that specific RD tradeoff. For comparison we also include H.264, STEM [STEM] and DVC [DVC]. Note that DVC is significantly more complex, and uses motion estimation and compensation with temporal prediction in the pixel domain.

4.2 Rate-distortion

We compressed the first 100 frames of the HEVC Class B sequences [6316136] and the Ultra Video Group test sequences [UVG], with GOP sizes of 10 and 12 pictures, respectively. The RD performances of the different methods are shown in Fig. 2 (we include the RD curve of STEM from [STEM] for reference, but note that the architectures are not comparable: the implementation of STEM in [STEM] uses encoders and decoders with four convolutional layers, while we use three, and their entropy model leverages an autoregressive context model and an SPM, which are not used in our case). The proposed SlimVC has an RD performance very close to that of independently trained VCs, thus showing the benefit of SlimVC in providing variable rate with one single model. Comparing with SlimVC (intra-only), we can see that the slimmable temporal entropy model and the second training stage are effective in consistently reducing the rate at all RD points (the SlimVC curves are shifted towards the left). RD performance is comparable to that of H.264, and remains below that of DVC, which is significantly more complex and lacks the flexibility of SlimVC (see next section). Besides, the design of SlimVC still has considerable room for improvement in RD performance.

Figure 2: Rate-distortion performance on the HEVC Class B dataset (top) and UVG dataset (bottom).

4.2.1 Memory and computational efficiency

We measured the efficiency of SlimVC and the other baselines in terms of computational cost (in floating point operations, FLOPs) and memory footprint (in MB) when processing 1080p input sequences (i.e., 1920×1080×3). Table 2 shows that SlimVC requires significantly fewer computations than the other video baselines, especially at lower rates, where the slimmable design avoids most of the computation, leading to very significant speedups (up to 20× at low rates).

Fig. 3 shows the detailed memory footprint of SlimVC and its different modules for the different widths. It shows that SlimVC is a lightweight method whose memory footprint can be gracefully adjusted depending on the rate needs. In contrast to SlimCAE, where the feature encoder and decoder were the main bottlenecks in terms of memory and computation, in SlimVC the most critical modules in this regard are those related to entropy modeling. In particular, the temporal prior module (SlimTPM) is the heaviest module of the codec.

Methods                        Low rate   →       →       →    High rate
Encoding
  SlimVC   SlimFE/FD              9.6     18.5     34      56       90
           SlimTPM               42.4     68       95.7   160      232.5
           SlimEPM               11       17.5     25.7    43.9     66
           SlimHE/HD             10       15       20.7    33.3     47.6
           Total                 73      119      176     293      436
  Indep. VCs                     73      119      176     293      436
  STEM                          643      643      643     643      643
  STEM w/o SPM                  613      613      613     613      613
  DVC                          3074     3074     3074    3074     3074
Decoding
  SlimVC   SlimFE/FD              9.6     18.5     34      56       90
           SlimTPM               42.4     68       95.7   160      232.5
           SlimEPM               11       17.5     25.7    43.9     66
           SlimHD                 6        9       12.2    19.4     27.4
           Total                 69      113      168     279      416
  Indep. VCs                     69      113      168     279      416
  STEM                         1509     1509     1509    1509     1509
  STEM w/o SPM                 1479     1479     1479    1479     1479
  DVC                          1434     1434     1434    1434     1434
Table 2: Computational cost (GFLOPs) of the different methods for 1080p input sequences. The five columns correspond to the five operating points, from low to high rate.
Figure 3: Memory footprint of SlimVC for different widths (and correspondingly, bit rates).

5 Conclusion

Motivated by some practical limitations of current neural video codecs, we propose the slimmable video codec (SlimVC), a novel adaptive architecture based on slimmable modules that provides significant savings in memory and computational cost at low and medium rates, together with variable rate control with one single video model. While SlimCAE showed that slimmable codecs are a promising approach to practical neural image compression, SlimVC further extends this potential to practical neural video compression.

References