Learning Spatio-Temporal Downsampling for Effective Video Upscaling

03/15/2022
by Xiaoyu Xiang, et al.

Downsampling is one of the most basic image processing operations. Improper spatio-temporal downsampling applied to videos can cause aliasing issues such as moiré patterns in space and the wagon-wheel effect in time. Consequently, the inverse task of upscaling a low-resolution, low-frame-rate video in space and time becomes a challenging ill-posed problem due to information loss and aliasing artifacts. In this paper, we aim to solve the space-time aliasing problem by learning a spatio-temporal downsampler. Toward this goal, we propose a neural network framework that jointly learns spatio-temporal downsampling and upsampling, enabling the downsampler to retain the key patterns of the original video while maximizing the reconstruction performance of the upsampler. To make the downsampling results compatible with popular image and video storage formats, they are encoded to uint8 with a differentiable quantization layer. To fully exploit space-time correspondences, we propose two novel modules for explicit temporal propagation and space-time feature rearrangement. Experimental results show that our method significantly boosts space-time reconstruction quality by preserving spatial textures and motion patterns in both downsampling and upscaling. Moreover, our framework enables a variety of applications, including arbitrary video resampling, blurry frame reconstruction, and efficient video storage.
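The wagon-wheel effect mentioned in the abstract is ordinary temporal aliasing: sampling a rotation at a finite frame rate folds its true frequency into the Nyquist band, so fast motion can appear slow or even reversed. A minimal sketch of that folding (the function name is illustrative, not from the paper):

```python
def apparent_frequency(true_hz: float, frame_rate_hz: float) -> float:
    """Fold a true rotation frequency into the Nyquist band
    [-fs/2, fs/2) imposed by sampling at fs frames per second.
    A negative result means the motion appears to run backwards."""
    fs = frame_rate_hz
    return ((true_hz + fs / 2.0) % fs) - fs / 2.0

# A wheel spinning at 22 Hz filmed at 24 fps appears to
# rotate backwards at 2 Hz:
print(apparent_frequency(22.0, 24.0))  # -> -2.0
```

This is exactly the artifact a learned temporal downsampler can mitigate: rather than naively dropping frames, it can encode motion patterns that the upsampler can later disambiguate.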
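The differentiable quantization layer described in the abstract is commonly realized with a straight-through estimator: round to uint8 in the forward pass, and pass gradients through as if rounding were the identity. A framework-free sketch under that assumption (the function names are illustrative; the paper's actual layer may differ):

```python
def quantize_uint8(x: float) -> int:
    """Forward pass: clamp a normalized sample to [0, 1] and
    round it to an 8-bit code in {0, ..., 255}."""
    clamped = min(max(x, 0.0), 1.0)
    return int(round(clamped * 255.0))

def ste_backward(grad_out: float, x: float) -> float:
    """Backward pass (straight-through estimator): treat rounding
    as the identity, so the gradient flows through unchanged
    wherever the input was not clamped, and is zero outside [0, 1]."""
    return grad_out if 0.0 <= x <= 1.0 else 0.0

print(quantize_uint8(1.0))     # -> 255
print(ste_backward(0.5, 1.5))  # -> 0.0
```

The design choice matters because hard rounding has zero gradient almost everywhere; the straight-through approximation is what lets quantization sit inside an end-to-end trainable downsampler-upsampler pipeline.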


Related research:

- Beyond Spatial Pyramid Matching: Space-time Extended Descriptor for Action Recognition (10/15/2015)
- Learning Space-Time Semantic Correspondences (06/16/2023)
- A Bayesian framework for the analog reconstruction of kymographs from fluorescence microscopy data (09/05/2018)
- Evolving Space-Time Neural Architectures for Videos (11/26/2018)
- Recurrent Space-time Graphs for Video Understanding (04/11/2019)
- Dynamic Scene Video Deblurring using Non-Local Attention (01/01/2022)
