iSeeBetter: Spatio-temporal video super-resolution using recurrent generative back-projection networks

Aman Chadha, et al. · 06/13/2020

Recently, learning-based models have enhanced the performance of single-image super-resolution (SISR). However, applying SISR successively to each video frame leads to a lack of temporal coherency. Convolutional neural networks (CNNs) outperform traditional approaches in terms of image quality metrics such as peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). However, generative adversarial networks (GANs) offer a competitive advantage by being able to mitigate the issue of a lack of finer texture details, usually seen with CNNs when super-resolving at large upscaling factors. We present iSeeBetter, a novel GAN-based spatio-temporal approach to video super-resolution (VSR) that renders temporally consistent super-resolution videos. iSeeBetter extracts spatial and temporal information from the current and neighboring frames using the concept of recurrent back-projection networks as its generator. Furthermore, to improve the "naturality" of the super-resolved image while eliminating artifacts seen with traditional algorithms, we utilize the discriminator from the super-resolution generative adversarial network (SRGAN). Although using mean squared error (MSE) as a primary loss-minimization objective improves PSNR/SSIM, these metrics may not capture fine details in the image, resulting in a misrepresentation of perceptual quality. To address this, we use a four-fold (MSE, perceptual, adversarial, and total-variation (TV)) loss function. Our results demonstrate that iSeeBetter offers superior VSR fidelity and surpasses state-of-the-art performance.


1 Introduction

The goal of super-resolution (SR) is to enhance a low-resolution (LR) image to a higher-resolution (HR) image by filling in missing fine-grained details in the LR image. The domain of SR research can be divided into three main areas: single-image SR (SISR) [dong2015image, haris2018deep, haris2017inception, kim2016accurate], multi-image SR (MISR) [faramarzi2013unified, garcia2012super], and video SR (VSR) [caballero2017real, tao2017detail, sajjadi2018frame, haris2019recurrent, jo2018deep].

Consider an LR video source consisting of a sequence of LR video frames $LR_{t-n}, \ldots, LR_t, \ldots, LR_{t+n}$, where we super-resolve a target frame $LR_t$. The idea behind SISR is to super-resolve $LR_t$ by utilizing the spatial information inherent in the frame, independently of the other frames in the video sequence. However, this technique fails to exploit the temporal details inherent in a video sequence, resulting in temporal incoherence. MISR seeks to address just that: it utilizes the details available in the neighboring frames $LR_{t-n}, \ldots, LR_{t-1}, LR_{t+1}, \ldots, LR_{t+n}$ and fuses them to super-resolve $LR_t$. After spatially aligning the frames, missing details are extracted by separating differences between the aligned frames from details observed only in one or some of the frames. However, in MISR the frames are aligned without any concern for temporal smoothness, in stark contrast to VSR, where the frames are typically aligned in temporally smooth order.

Traditional VSR methods upscale based on a single degradation model (usually bicubic interpolation) followed by reconstruction. This is sub-optimal and adds computational complexity [shi2016real]. Recently, learning-based models that utilize convolutional neural networks (CNNs) have outperformed traditional approaches in terms of widely accepted image reconstruction metrics such as peak signal-to-noise ratio (PSNR) and structural similarity (SSIM).

In some recent VSR methods that utilize CNNs, frames are concatenated [jo2018deep] or fed into recurrent neural networks (RNNs) [huang2015bidirectional] in temporal order, without explicit alignment. In other methods, the frames are aligned explicitly, using motion cues between temporal frames with alignment modules [caballero2017real, liu2017robust, tao2017detail, sajjadi2018frame]. The latter set of methods generally renders temporally smoother results compared to methods with no explicit spatial alignment [liao2015video, huang2015bidirectional]. However, these VSR methods suffer from a number of problems. In the frame-concatenation approach [caballero2017real, liu2017robust, jo2018deep], many frames are processed simultaneously in the network, resulting in significantly higher network training times. With methods that use RNNs [sajjadi2018frame, tao2017detail, huang2015bidirectional], modeling both subtle and significant changes simultaneously (e.g., slow and quick motions of foreground objects) is challenging even when long short-term memory (LSTM) units, which are designed to maintain long-term temporal dependencies, are deployed [gers1999learning]. A crucial aspect of an effective VSR system is the ability to handle motion sequences, which are often integral components of videos [caballero2017real, makansi2017end].

The proposed method, iSeeBetter, is inspired by recurrent back-projection networks (RBPNs) [haris2019recurrent], which use “back-projection” as their underpinning approach, originally introduced in [irani1991improving, irani1993motion] for MISR. The basic concept behind back-projection is to iteratively calculate residual images as the reconstruction error between a target image and a set of neighboring images. The residuals are then back-projected to the target image to improve super-resolution accuracy. The multiple residuals enable representation of subtle and significant differences between the target frame and its adjacent frames, thus exploiting temporal relationships between adjacent frames, as shown in Fig. 1. Deep back-projection networks (DBPNs) [haris2018deep] use back-projection to perform SISR with learning-based methods by estimating the SR output from the corresponding LR frame alone. To this end, DBPN produces a high-resolution feature map that is iteratively refined through multiple up- and down-sampling layers. RBPN offers superior results by combining the benefits of the original MISR back-projection approach with DBPN. Specifically, RBPN adopts the idea of iteratively refining HR feature maps from DBPN, but extracts missing details using neighboring video frames like the original back-projection technique [irani1991improving, irani1993motion]. This results in superior SR accuracy.

To mitigate the lack of finer texture details usually seen with CNNs when super-resolving at large upscaling factors [ledig2017photo], iSeeBetter utilizes GANs with a loss function that weighs adversarial loss, perceptual loss [ledig2017photo], mean squared error (MSE) loss, and total-variation (TV) loss [frsrgan]. Our approach combines the merits of RBPN and SRGAN [ledig2017photo]: it uses RBPN as its generator and is complemented by SRGAN's discriminator architecture, which is trained to differentiate between super-resolved images and original photo-realistic images. Blending these techniques yields iSeeBetter, a state-of-the-art system that is able to recover precise photo-realistic textures and motion-based scenes from heavily down-sampled videos.

Figure 1: Temporal relationships between adjacent frames.

Our contributions include the following key innovations.

Combining the state-of-the-art in SR: We propose a model that leverages two superior SR techniques: (i) RBPN, which is based on the idea of integrating SISR and MISR in a unified VSR framework using back-projection, and (ii) SRGAN, which is a framework capable of inferring photo-realistic natural images. RBPN enables iSeeBetter to extract details from neighboring frames, complemented by the generator-discriminator architecture in GANs, which pushes iSeeBetter to generate more realistic and appealing frames while eliminating artifacts seen with traditional algorithms [wang2020deep]. iSeeBetter thus yields more than the sum of the benefits of RBPN and SRGAN.

“Optimizing” the loss function: Pixel-wise loss functions such as L1 loss, used in RBPN [haris2019recurrent], struggle to handle the uncertainty inherent in recovering lost high-frequency details such as complex textures that commonly exist in many videos. Minimizing MSE encourages finding pixel-wise averages of plausible solutions that are typically overly smooth and thus have poor perceptual quality [mathieu2015deep, johnson2016perceptual, dosovitskiy2016generating, bruna2016super-resolution]. To address this, we adopt a four-fold (MSE, perceptual, adversarial, and TV) loss function for superior results. Similar to SRGAN [ledig2017photo], our loss function optimizes perceptual quality alongside pixel-wise fidelity: the adversarial loss improves the "naturality" of the output image using the discriminator, while the perceptual loss measures similarity in feature space instead of pixel space. Furthermore, we use a de-noising loss term called TV loss [aly2005image]. We carried out experiments comparing L1 loss with our four-fold loss and found significant improvements with the latter (cf. Section 4).

Extended evaluation protocol: To evaluate iSeeBetter, we used standard datasets: Vimeo90K [xue2019video], Vid4 [liu2011bayesian] and SPMCS [tao2017detail]. Since Vid4 and SPMCS lack significant motion sequences, we included Vimeo90K, a dataset containing various types of motion. This enabled us to conduct a more holistic evaluation of the strengths and weaknesses of iSeeBetter. To make iSeeBetter more robust and enable it to handle real-world videos, we expanded the spectrum of data diversity and wrote scripts to collect additional data from YouTube. As a result, we augmented our dataset to about 170,000 frames.

User-friendly infrastructure: We built several useful tools to download and structure datasets, visualize temporal profiles of intermediate blocks and the output, and run predefined benchmark sequences on a trained model to be able to iterate on different models quickly. In addition, we built a video-to-frames tool to directly input videos to iSeeBetter, rather than frames. We also ensured our script infrastructure is flexible (such that it supports a myriad of options) and can be easily leveraged. The code and pre-trained models are available at https://iseebetter.amanchadha.com.

| Dataset | Resolution | # of Clips | # of Frames/Clip | # of Frames |
| --- | --- | --- | --- | --- |
| Vimeo90K | 448 × 256 | 13,100 | 7 | 91,701 |
| SPMCS | 240 × 135 | 30 | 31 | 930 |
| Vid4 | (720 × 576 × 3) × 2, (704 × 576 × 3) × 2 | 4 | 41, 34, 49, 47 | 684 |
| Augmented | 960 × 720 | 7,000 | ~110 | 77,000 |
| Total | - | 46,034 | - | 170,315 |

Table 1: Datasets used for training and evaluation

2 Related work

Since the seminal work by Tsai on image registration [tsai1984multiframe] over three decades ago, many SR techniques based on various underlying principles have been proposed. Initial methods included spatial- or frequency-domain signal processing, statistical models, and interpolation approaches [yang2010image]. In this section, we focus our discussion on learning-based methods, which have emerged as superior VSR techniques compared to traditional statistical methods.

2.1 Deep SISR

Deep SISR was first introduced by SRCNN [dong2015image], which required a predefined up-sampling operator. Further improvements in this field include better up-sampling layers [shi2016real], residual learning [tai2017image], back-projection [haris2018deep], recursive layers [kim2016deeply], and progressive up-sampling [lai2017deep]. A significant milestone in SR research was the introduction of a GAN-powered SR approach [ledig2017photo], which achieved state-of-the-art performance.

2.2 Deep VSR

Deep VSR can be divided into five types based on the approach to preserving temporal information.

(a) Temporal Concatenation. The most popular approach to retain temporal information in VSR is concatenating multiple frames [kappeler2016video, caballero2017real, jo2018deep, liao2015video]. This approach can be seen as an extension of SISR to accept multiple input images. However, this approach fails to represent multiple motion regimes within a single input sequence since the input frames are simply concatenated together.

(b) Temporal Aggregation. To address the dynamic motion problem in VSR, [liu2017robust] proposed multiple SR inference branches that work on different motion regimes. The final layer aggregates the outputs of all branches to construct the SR frame. However, this approach still concatenates many input frames, resulting in lengthy convergence during global optimization.

(c) Recurrent Networks. RNNs deal with temporal inputs and/or outputs and have been deployed in a myriad of applications, ranging from video captioning [johnson2016densecap, mao2014deep, yu2016video] and video summarization [donahue2015long, venugopalan2014translating] to VSR [tao2017detail, huang2015bidirectional, sajjadi2018frame]. Two types of RNNs have been used for VSR. A many-to-one architecture is used in [huang2015bidirectional, tao2017detail], where a sequence of LR frames is mapped to a single target HR frame. A many-to-many RNN has recently been used by [sajjadi2018frame], where an optical flow network accepts the previous and current LR frames and its output is fed to an SR network along with the previously estimated HR frame. This approach was first proposed by [huang2015bidirectional] using bidirectional RNNs. However, that network has a small capacity and no frame-alignment step. A further improvement is proposed by [tao2017detail] using a motion compensation module and a ConvLSTM layer [shiconvolutional].

(d) Optical Flow-Based Methods. The above methods estimate a single HR frame by combining a batch of LR frames and are thus computationally expensive. They often produce unwanted flickering artifacts in the output frames [frsrgan]. To address this, [sajjadi2018frame] proposed a method that utilizes a network trained to estimate the optical flow along with the SR network. Optical flow methods allow estimation of the trajectories of moving objects, thereby assisting in VSR. [kappeler2016video] warp the previous and next frames onto the current frame using the optical flow method of [drulea2011total], concatenate the three frames, and pass them through a CNN that produces the output frame. [caballero2017real] follow the same approach but replace the optical flow model with a trainable motion compensation network.
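For concreteness, the following is a minimal PyTorch sketch of the frame-warping operation that underlies such optical flow-based methods: a neighboring frame is warped onto the target frame using a dense flow field before being concatenated with it and passed to the SR network. The function name and the (dx, dy) flow convention are illustrative assumptions, not taken from any of the cited implementations.

```python
# Minimal sketch of flow-based frame warping (PyTorch); warp_frame and the
# (dx, dy) flow convention are illustrative assumptions.
import torch
import torch.nn.functional as F

def warp_frame(neighbor, flow):
    """Warp `neighbor` (N, C, H, W) onto the target frame using `flow` (N, 2, H, W),
    where flow[:, 0] / flow[:, 1] hold per-pixel horizontal / vertical displacements."""
    n, _, h, w = neighbor.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().to(neighbor.device)   # (2, H, W) pixel grid
    coords = base.unsqueeze(0).expand(n, -1, -1, -1) + flow           # displaced coordinates
    # Normalize to [-1, 1] as required by grid_sample
    gx = 2.0 * coords[:, 0] / (w - 1) - 1.0
    gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)                              # (N, H, W, 2)
    return F.grid_sample(neighbor, grid, align_corners=True)

# Identity check: zero flow returns the neighbor frame (up to interpolation error)
prev = torch.rand(1, 3, 64, 64)
warped = warp_frame(prev, torch.zeros(1, 2, 64, 64))
```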

(e) Pre-Training then Fine-Tuning vs. End-to-End Training. While most of the above-mentioned methods are end-to-end trainable, certain approaches first pre-train each component before fine-tuning the system as a whole in a final step [caballero2017real, tao2017detail, liu2017robust].

Our approach is a combination of (i) an RNN-based optical flow method that preserves spatio-temporal information in the current and adjacent frames as the generator, and (ii) a discriminator that is adept at ensuring the generated SR frame offers superior fidelity.

3 Methods

3.1 Datasets

To train iSeeBetter, we amalgamated diverse datasets with differing video lengths, resolutions, motion sequences, and number of clips. Tab. 1 presents a summary of the datasets used. When training our model, we generated the corresponding LR frame for each HR input frame by performing 4× down-sampling using bicubic interpolation. We thus perform self-supervised learning by automatically generating the input-output pairs for training without any human intervention. To further extend our dataset, we wrote scripts to collect additional data from YouTube. The dataset was shuffled for training and testing. Our training/validation/test split was 80%/10%/10%.
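As an illustration of the pair-generation step described above, the snippet below bicubically down-samples an HR frame by 4× to obtain its LR counterpart; the function name and file path are hypothetical.

```python
# Sketch of self-supervised LR/HR pair generation via 4x bicubic down-sampling.
from PIL import Image

def make_lr_hr_pair(hr_path, scale=4):
    hr = Image.open(hr_path).convert("RGB")
    w, h = hr.size
    hr = hr.crop((0, 0, w - w % scale, h - h % scale))        # make dimensions divisible by scale
    lr = hr.resize((hr.width // scale, hr.height // scale), Image.BICUBIC)
    return lr, hr

# Example (hypothetical path): lr, hr = make_lr_hr_pair("frames/clip_0001/frame_000.png")
```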

3.2 Network architecture

Fig. 2 shows the iSeeBetter architecture, which consists of RBPN [haris2019recurrent] and SRGAN [ledig2017photo] as its generator and discriminator, respectively. Tab. 2 shows our notational convention. RBPN has two approaches that extract missing details from different sources: SISR and MISR. Fig. 3 shows the horizontal flow (represented by blue arrows in Fig. 2), which enlarges $LR_t$ using SISR. Fig. 4 shows the vertical flow (represented by red arrows in Fig. 2), which is based on MISR and computes residual features from (i) pairs of $LR_t$ and its neighboring frames ($LR_{t-1}, \ldots, LR_{t-n}$) coupled with (ii) the pre-computed dense motion flow maps ($F_{t-1}, \ldots, F_{t-n}$).

| Symbol | Description |
| --- | --- |
| $I_t$ | input high-resolution frame |
| $LR_t$ | low-resolution frame (derived from $I_t$) |
| $F_{t-k}$ | dense optical flow map between $LR_{t-k}$ and $LR_t$ |
| $e_{t-k}$ | residual features extracted from ($LR_t$, $LR_{t-k}$, $F_{t-k}$) |
| $SR_t$ | estimated HR output |

Table 2: Adopted notation

At each projection step, RBPN observes the missing details in $LR_t$ and extracts residual features from the neighboring frames to recover those details. The convolutional layers that feed the projection modules in Fig. 2 thus serve as initial feature extractors. Within the projection modules, RBPN utilizes a recurrent encoder-decoder mechanism to fuse the details extracted from adjacent frames in SISR and MISR and incorporates them into the estimated frame through back-projection. The convolutional layer that operates on the concatenated output from all the projection modules is responsible for generating $SR_t$. Once $SR_t$ is synthesized, it is sent to the discriminator (shown in Fig. 5) to validate its "authenticity".

Figure 2: Overview of iSeeBetter
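The data flow through the generator can be summarized with the schematic PyTorch sketch below: initial convolutional feature extraction, one projection step per neighboring frame that fuses the SISR features of $LR_t$ with MISR residual features from ($LR_t$, $LR_{t-k}$, $F_{t-k}$), and a final reconstruction convolution over the concatenated projection outputs. The module internals and layer sizes are simplified placeholders, not the actual RBPN configuration.

```python
# Schematic sketch of the generator's data flow; layer sizes and module
# internals are placeholders rather than the actual RBPN configuration.
import torch
import torch.nn as nn

class ProjectionModule(nn.Module):
    """Stand-in for one recurrent back-projection step."""
    def __init__(self, feat=64):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * feat, feat, 3, padding=1), nn.PReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.PReLU())

    def forward(self, sisr_feat, misr_feat):
        # Fuse SISR features of LR_t with residual MISR features from one neighbor
        return self.fuse(torch.cat([sisr_feat, misr_feat], dim=1))

class GeneratorSketch(nn.Module):
    def __init__(self, n_neighbors=6, feat=64, scale=4):
        super().__init__()
        self.feat_target = nn.Conv2d(3, feat, 3, padding=1)             # features of LR_t
        self.feat_neighbor = nn.Conv2d(3 + 3 + 2, feat, 3, padding=1)   # (LR_t, LR_{t-k}, F_{t-k})
        self.projections = nn.ModuleList([ProjectionModule(feat) for _ in range(n_neighbors)])
        self.reconstruct = nn.Sequential(                               # concat -> upscale to SR_t
            nn.Conv2d(n_neighbors * feat, 3 * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale))

    def forward(self, lr_t, neighbors, flows):
        state = self.feat_target(lr_t)
        outputs = []
        for proj, lr_k, f_k in zip(self.projections, neighbors, flows):
            misr_feat = self.feat_neighbor(torch.cat([lr_t, lr_k, f_k], dim=1))
            state = proj(state, misr_feat)          # recurrently refined per neighbor
            outputs.append(state)
        return self.reconstruct(torch.cat(outputs, dim=1))

# Example with random tensors: six neighboring frames and their flow maps
lr_t = torch.rand(1, 3, 32, 32)
neighbors = [torch.rand(1, 3, 32, 32) for _ in range(6)]
flows = [torch.rand(1, 2, 32, 32) for _ in range(6)]
sr_t = GeneratorSketch()(lr_t, neighbors, flows)    # -> (1, 3, 128, 128)
```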

3.3 Loss functions

The perceptual quality of the resulting SR image depends on the choice of the loss function. MSE is the most commonly used loss function in a wide variety of state-of-the-art SR approaches, as minimizing it directly improves the PSNR of an image [hore2010image]. While optimizing MSE during training improves PSNR and SSIM, these metrics may not capture fine details in the image, leading to a misrepresentation of perceptual quality [ledig2017photo]. The ability of MSE to capture intricate texture details based on pixel-wise frame differences is very limited, and it can cause the resulting video frames to be overly smooth [cheng2012fast]. In a series of experiments, it was found that even manually distorted images had MSE scores comparable to the original image [wang2002universal]. To address this, iSeeBetter uses a four-fold (MSE, perceptual, adversarial, and TV) loss instead of relying solely on pixel-wise MSE loss. We weigh these losses together as the final training objective for iSeeBetter, thus taking into account both pixel-wise similarity and high-level features. Fig. 6 shows the individual components of the iSeeBetter loss function.

Figure 3: DBPN [haris2018deep] architecture for SISR, where we perform up-down-up sampling using 8 × 8 kernels with a stride of 4 and padding of 2. Like the ResNet architecture used for MISR (Fig. 4), the DBPN network uses Parametric ReLUs [he2015delving] as its activation functions.

Figure 4: ResNet architecture for MISR, composed of three tiles of five blocks, where each block consists of two convolutional layers with 3 × 3 kernels, a stride of 1, and padding of 1. The network uses Parametric ReLUs [he2015delving] for its activations.
Figure 5: Discriminator Architecture from SRGAN [ledig2017photo]. The discriminator uses Leaky ReLUs for computing its activations.
Figure 6: The MSE, perceptual, adversarial, and TV loss components of the iSeeBetter loss function.
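A compact sketch of an SRGAN-style discriminator (Fig. 5) is given below: a stack of strided Conv/BatchNorm/LeakyReLU blocks followed by a classifier head ending in a sigmoid. The channel progression follows the SRGAN paper; the pooling-based head and other details are assumptions and may differ from the exact configuration used in iSeeBetter.

```python
# Sketch of an SRGAN-style discriminator; channel progression follows SRGAN,
# while the adaptive-pooling head is a size-agnostic simplification.
import torch
import torch.nn as nn

def disc_block(in_ch, out_ch, stride):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.2, inplace=True))

class DiscriminatorSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 3, 1, 1), nn.LeakyReLU(0.2, inplace=True),   # first block: no BatchNorm
            disc_block(64, 64, 2),
            disc_block(64, 128, 1), disc_block(128, 128, 2),
            disc_block(128, 256, 1), disc_block(256, 256, 2),
            disc_block(256, 512, 1), disc_block(512, 512, 2))
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(512, 1024), nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(1024, 1), nn.Sigmoid())

    def forward(self, x):
        return self.classifier(self.features(x))    # probability that x is a real HR frame

# Example: p_real = DiscriminatorSketch()(torch.rand(1, 3, 128, 128))
```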

3.3.1 MSE loss

We use pixel-wise MSE loss (also called content loss [ledig2017photo]) between the estimated frame $SR_t$ and the ground truth $I_t$:

$$\mathcal{L}_{MSE}(t) = \frac{1}{WH}\sum_{x=1}^{W}\sum_{y=1}^{H}\big(I_t(x,y) - SR_t(x,y)\big)^2 \qquad (1)$$

where $SR_t$ is the frame estimated by the generator from $LR_t$, and $W$ and $H$ represent the width and height of the frames, respectively.

3.3.2 Perceptual loss

[gatys2015texture, bruna2016super-resolution] introduced a loss function called perceptual loss, also used in [johnson2016perceptual, ledig2017photo], which focuses on perceptual similarity instead of similarity in pixel space. Perceptual loss relies on features extracted from the activation layers of the pre-trained VGG-19 network [simonyan2014very], instead of low-level pixel-wise error measures. We define perceptual loss as the Euclidean distance between the feature representations of the estimated SR image $SR_t$ and the ground truth $I_t$:

$$\mathcal{L}_{P}(t) = \frac{1}{W_{i,j}H_{i,j}}\sum_{x=1}^{W_{i,j}}\sum_{y=1}^{H_{i,j}}\big(\phi_{i,j}(I_t)(x,y) - \phi_{i,j}(SR_t)(x,y)\big)^2 \qquad (2)$$

where $\phi_{i,j}$ denotes the feature map obtained from the $j$-th convolution (after activation) before the $i$-th max-pooling layer of the VGG-19 network, and $W_{i,j}$ and $H_{i,j}$ are the dimensions of the respective feature maps.
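A minimal sketch of this loss is shown below: the MSE between VGG-19 feature maps of $SR_t$ and $I_t$, using torchvision's pre-trained VGG-19 truncated just before the final max-pooling layer. The exact layer tapped (and the torchvision API used) are assumptions and may differ from the configuration used in iSeeBetter.

```python
# Sketch of the perceptual loss of Eq. (2); the tapped VGG-19 layer is an assumption.
import torch.nn as nn
from torchvision.models import vgg19, VGG19_Weights

class PerceptualLoss(nn.Module):
    def __init__(self, layer_index=36):
        super().__init__()
        # Layers [0:36] end at the activation of the last conv before the final max-pool
        # (older torchvision versions: vgg19(pretrained=True)).
        vgg = vgg19(weights=VGG19_Weights.IMAGENET1K_V1).features[:layer_index].eval()
        for p in vgg.parameters():
            p.requires_grad = False                   # fixed feature extractor
        self.vgg = vgg
        self.mse = nn.MSELoss()

    def forward(self, sr, hr):
        return self.mse(self.vgg(sr), self.vgg(hr))

# Usage: loss_p = PerceptualLoss()(sr_batch, hr_batch)
```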

| Dataset | Clip Name | Flow | Bicubic | DBPN [haris2018deep] | B + T [liu2017robust] | DRDVSR [tao2017detail] | FRVSR [sajjadi2018frame] | RBPN/6-PF [haris2019recurrent] | VSR-DUF [jo2018deep] | iSeeBetter |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Vid4 | Calendar | 1.14 | 19.82/0.554 | 22.19/0.714 | 21.66/0.704 | 22.18/0.746 | - | 23.99/0.807 | 24.09/0.813 | 24.13/0.817 |
| Vid4 | City | 1.63 | 24.93/0.586 | 26.01/0.684 | 26.45/0.720 | 26.98/0.755 | - | 27.73/0.803 | 28.26/0.833 | 28.34/0.841 |
| Vid4 | Foliage | 1.48 | 23.42/0.575 | 24.67/0.662 | 24.98/0.698 | 25.42/0.720 | - | 26.22/0.757 | 26.38/0.771 | 26.57/0.773 |
| Vid4 | Walk | 1.44 | 26.03/0.802 | 28.61/0.870 | 28.26/0.859 | 28.92/0.875 | - | 30.70/0.909 | 30.50/0.912 | 30.68/0.908 |
| Vid4 | Average | 1.42 | 23.53/0.629 | 25.37/0.737 | 25.34/0.745 | 25.88/0.774 | 26.69/0.822 | 27.12/0.818 | 27.31/0.832 | 27.43/0.835 |
| Vimeo90K | Fast Motion | 8.30 | 34.05/0.902 | 37.46/0.944 | - | - | - | 40.03/0.960 | 37.49/0.949 | 40.17/0.971 |

Table 3: PSNR/SSIM evaluation of state-of-the-art VSR algorithms on Vid4 and Vimeo90K for 4× upscaling.

3.3.3 Adversarial loss

We use the generative component of the GAN as the adversarial loss to limit model "fantasy", thus improving the "naturality" of the super-resolved image. Adversarial loss is defined as:

$$\mathcal{L}_{Adv}(t) = -\log D(SR_t) \qquad (3)$$

where $D(SR_t)$ is the discriminator's output probability that the reconstructed image $SR_t$ is a real HR image. We minimize $-\log D(SR_t)$ instead of $\log\big(1 - D(SR_t)\big)$ for better gradient behavior [goodfellow2014generative].

3.3.4 Total-Variation loss

TV loss was introduced as a loss function in the domain of SR by [aly2005image]. It is defined as the sum of the absolute differences between neighboring pixels in the horizontal and vertical directions [wang2020deep]. Since TV loss measures noise in the input, minimizing it as part of our overall loss objective helps de-noise the output SR image and thus encourages spatial smoothness. TV loss is defined as follows:

$$\mathcal{L}_{TV}(t) = \sum_{x=1}^{W-1}\sum_{y=1}^{H-1}\Big(\big|SR_t(x+1,y) - SR_t(x,y)\big| + \big|SR_t(x,y+1) - SR_t(x,y)\big|\Big) \qquad (4)$$
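A minimal implementation of this anisotropic TV loss is sketched below; averaging over the batch dimension is an assumption about the reduction.

```python
# Sketch of the total-variation loss of Eq. (4).
import torch

def tv_loss(sr):
    """sr: tensor of shape (N, C, H, W)."""
    diff_h = (sr[:, :, :, 1:] - sr[:, :, :, :-1]).abs().sum()   # horizontal neighbors
    diff_v = (sr[:, :, 1:, :] - sr[:, :, :-1, :]).abs().sum()   # vertical neighbors
    return (diff_h + diff_v) / sr.size(0)

# Usage: loss_tv = tv_loss(sr_batch)
```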

3.3.5 Loss formulation

We define our overall generator loss for each frame as the weighted sum of the MSE, adversarial, perceptual, and TV loss components:

$$\mathcal{L}_{G}(t) = \alpha\,\mathcal{L}_{MSE}(t) + \beta\,\mathcal{L}_{Adv}(t) + \gamma\,\mathcal{L}_{P}(t) + \delta\,\mathcal{L}_{TV}(t) \qquad (5)$$

where $\alpha$, $\beta$, $\gamma$, and $\delta$ are the weights of the respective loss terms, with the MSE term weighted at 1 and the remaining terms at small fractional weights following [hany2019hands].
The discriminator loss for each frame is as follows:

$$\mathcal{L}_{D}(t) = -\log D(I_t) - \log\big(1 - D(SR_t)\big) \qquad (6)$$

The total loss of an input sample is the average of the per-frame losses:

$$\mathcal{L}_{G} = \frac{1}{N}\sum_{t=1}^{N}\mathcal{L}_{G}(t), \qquad \mathcal{L}_{D} = \frac{1}{N}\sum_{t=1}^{N}\mathcal{L}_{D}(t) \qquad (7)$$

where $N$ is the number of frames in the input sample.
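The sketch below ties Eqs. (1)-(7) together as a single training step: the discriminator is updated with the binary cross-entropy objective of Eq. (6), and the generator with the weighted four-fold loss of Eq. (5), with losses averaged over the frames in the batch as in Eq. (7). The weight values shown are illustrative placeholders in the style of common SRGAN implementations, not confirmed values from the paper; `generator`, `discriminator`, and `perceptual_loss` refer to the sketches above.

```python
# Sketch of one iSeeBetter-style training step; loss weights are illustrative
# placeholders, not the paper's confirmed values.
import torch
import torch.nn.functional as F

ALPHA, BETA, GAMMA, DELTA = 1.0, 1e-3, 6e-3, 2e-8    # MSE, adversarial, perceptual, TV (placeholders)

def train_step(generator, discriminator, g_opt, d_opt, perceptual_loss,
               lr_t, neighbors, flows, hr_t, eps=1e-8):
    # ---- Discriminator update: Eq. (6), averaged over frames in the batch (Eq. (7)) ----
    with torch.no_grad():
        sr_t = generator(lr_t, neighbors, flows)
    d_loss = -(torch.log(discriminator(hr_t) + eps)
               + torch.log(1 - discriminator(sr_t) + eps)).mean()
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # ---- Generator update: Eqs. (1)-(5), averaged over frames in the batch (Eq. (7)) ----
    sr_t = generator(lr_t, neighbors, flows)
    mse = F.mse_loss(sr_t, hr_t)                                              # Eq. (1)
    adv = -torch.log(discriminator(sr_t) + eps).mean()                        # Eq. (3)
    percep = perceptual_loss(sr_t, hr_t)                                      # Eq. (2)
    tv = ((sr_t[:, :, :, 1:] - sr_t[:, :, :, :-1]).abs().sum()
          + (sr_t[:, :, 1:, :] - sr_t[:, :, :-1, :]).abs().sum()) / sr_t.size(0)  # Eq. (4)
    g_loss = ALPHA * mse + BETA * adv + GAMMA * percep + DELTA * tv           # Eq. (5)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return g_loss.item(), d_loss.item()
```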

4 Experimental evaluation

To train the model, we used an Amazon EC2 P3.2xLarge instance with an NVIDIA Tesla V100 GPU with 16GB VRAM, 8 vCPUs and 64GB of host memory. We used the hyperparameters from RBPN and SRGAN. Tab. 3 compares iSeeBetter with six state-of-the-art VSR algorithms: DBPN [haris2018deep], B + T [liu2017robust], DRDVSR [tao2017detail], FRVSR [sajjadi2018frame], VSR-DUF [jo2018deep] and RBPN/6-PF [haris2019recurrent]. Tab. 4 offers a visual analysis of VSR-DUF and iSeeBetter. Tab. 5 shows ablation studies to assess the impact of using a generator-discriminator architecture and the four-fold loss as design decisions.

Table 4 layout: rows are Vid4 "Calendar", SPMCS "Pagoda", and Vimeo90K "Motion"; columns are VSR-DUF [jo2018deep], iSeeBetter, and Ground Truth (image crops).

Table 4: Visually inspecting examples from Vid4, SPMCS, and Vimeo90K comparing VSR-DUF and iSeeBetter. We chose VSR-DUF for comparison because it was the state-of-the-art at the time of publication. Top row: fine-grained textual features that help with readability; middle row: intricate high-frequency image details; bottom row: camera panning motion.

| iSeeBetter Configuration | PSNR |
| --- | --- |
| RBPN baseline with L1 loss | 27.73 |
| RBPN baseline with MSE loss | 27.77 |
| RBPN generator + SRGAN discriminator with adversarial loss | 28.08 |
| RBPN generator + SRGAN discriminator with adversarial + MSE loss | 28.12 |
| RBPN generator + SRGAN discriminator with adversarial + MSE + perceptual loss | 28.27 |
| RBPN generator + SRGAN discriminator with adversarial + MSE + perceptual + TV loss | 28.34 |

Table 5: Ablation analysis for iSeeBetter using the “City” clip from Vid4.

5 Conclusions and future work

We proposed iSeeBetter, a novel spatio-temporal approach to VSR that uses recurrent-generative back-projection networks. iSeeBetter couples the virtues of RBPN and SRGAN. RBPN enables iSeeBetter to generate superior SR images by combining spatial and temporal information from the input and neighboring frames. In addition, SRGAN’s discriminator architecture fosters generation of photo-realistic frames. We used a four-fold loss function that emphasizes perceptual quality. Furthermore, we proposed a new evaluation protocol for video SR by collating diverse datasets. With extensive experiments, we assessed the role played by various design choices in the ultimate performance of iSeeBetter, and demonstrated that on a vast majority of test video sequences, iSeeBetter advances the state-of-the-art.

To improve iSeeBetter, a couple of ideas could be explored. In visual imagery, the foreground receives much more attention than the background since it typically includes subjects such as humans. To improve perceptual quality, we could segment the foreground and background and make iSeeBetter perform "adaptive VSR" by applying different policies to the foreground and the background. For instance, we could extract details from a wider span of frames for the foreground than for the background. Another idea is to decompose a video sequence into scenes on the basis of frame similarity and make iSeeBetter weight adjacent frames based on the scene they belong to. Adjacent frames from a different scene can be weighted lower than frames from the same scene, thereby making iSeeBetter focus on extracting details from frames within the same scene, akin to the concept of attention applied to VSR.

The authors would like to thank Andrew Ng's lab at Stanford University for their guidance on this project. In particular, the authors express their gratitude to Mohamed El-Geish for the idea-inducing brainstorming sessions throughout the project.

References