Video Super-Resolution Transformer

06/12/2021
by Jiezhang Cao, et al.

Video super-resolution (VSR), which aims to restore a high-resolution video from its corresponding low-resolution version, is a spatial-temporal sequence prediction problem. Recently, the Transformer has gained popularity due to its parallel computing ability for sequence-to-sequence modeling, so it seems straightforward to apply the vision Transformer to VSR. However, the typical Transformer block design, with a fully connected self-attention layer and a token-wise feed-forward layer, is not well suited to VSR for two reasons. First, the fully connected self-attention layer fails to exploit data locality because it relies on linear layers to compute attention maps. Second, the token-wise feed-forward layer lacks the feature alignment that is important for VSR, since it processes each input token embedding independently, without any interaction among them. In this paper, we make the first attempt to adapt the Transformer for VSR. Specifically, to tackle the first issue, we present a spatial-temporal convolutional self-attention layer, with a theoretical analysis, to exploit locality information. For the second issue, we design a bidirectional optical flow-based feed-forward layer to discover correlations across different video frames and align features. Extensive experiments on several benchmark datasets demonstrate the effectiveness of our proposed method. The code will be available at https://github.com/caojiezhang/VSR-Transformer.
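The core idea of the first contribution can be sketched in a few lines: replace the linear query/key/value projections of standard self-attention with convolutions, so the attention maps computed over spatial-temporal tokens see local context. The following NumPy sketch is an illustrative simplification under assumed shapes (depthwise 3x3 kernels, full attention over all positions of a short clip); it is not the paper's implementation.

```python
import numpy as np

def conv2d_depthwise(x, w):
    """Naive 'same'-padded depthwise 3x3 convolution. x: (C,H,W), w: (C,3,3)."""
    C, H, W = x.shape
    padded = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            out += w[:, i, j][:, None, None] * padded[:, i:i + H, j:j + W]
    return out

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def conv_self_attention(frames, wq, wk, wv):
    """frames: (T,C,H,W) video clip; returns attended features of the same shape."""
    T, C, H, W = frames.shape
    # Convolutional (locality-aware) projections instead of linear layers.
    q = np.stack([conv2d_depthwise(f, wq) for f in frames])
    k = np.stack([conv2d_depthwise(f, wk) for f in frames])
    v = np.stack([conv2d_depthwise(f, wv) for f in frames])
    # Flatten all spatial-temporal positions into one token sequence.
    to_tokens = lambda t: t.transpose(0, 2, 3, 1).reshape(-1, C)
    qt, kt, vt = to_tokens(q), to_tokens(k), to_tokens(v)
    attn = softmax(qt @ kt.T / np.sqrt(C))  # (T*H*W, T*H*W) attention map
    out = attn @ vt                          # (T*H*W, C) attended tokens
    return out.reshape(T, H, W, C).transpose(0, 3, 1, 2)

rng = np.random.default_rng(0)
frames = rng.standard_normal((2, 3, 4, 4))          # T=2 frames, C=3, 4x4 spatial
wq, wk, wv = (rng.standard_normal((3, 3, 3)) * 0.1 for _ in range(3))
y = conv_self_attention(frames, wq, wk, wv)
print(y.shape)  # (2, 3, 4, 4)
```

Because each query/key/value vector is now a function of a local neighborhood rather than a single position, the attention weights reflect local structure across frames, which is the locality property the fully connected design lacks.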


