Recursive Generalization Transformer for Image Super-Resolution

03/11/2023
by Zheng Chen, et al.

Transformer architectures have exhibited remarkable performance in image super-resolution (SR). Due to the quadratic computational complexity of self-attention (SA) in Transformers, existing methods tend to restrict SA to a local region to reduce overhead. However, this local design limits the exploitation of global context, which is critical for accurate image reconstruction. In this work, we propose the Recursive Generalization Transformer (RGT) for image SR, which can capture global spatial information and is suitable for high-resolution images. Specifically, we propose recursive-generalization self-attention (RG-SA). It recursively aggregates input features into representative feature maps, and then utilizes cross-attention to extract global information. Meanwhile, the channel dimensions of the attention matrices (query, key, and value) are further scaled for a better trade-off between computational overhead and performance. Furthermore, we combine RG-SA with local self-attention to enhance the exploitation of global context, and propose hybrid adaptive integration (HAI) for module integration. HAI allows direct and effective fusion between features at different levels (local or global). Extensive experiments demonstrate that our RGT outperforms recent state-of-the-art methods.
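The core idea of RG-SA, recursively aggregating the feature map into a small set of representative tokens and then cross-attending from the full-resolution map to those tokens, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function names, the choice of average pooling for aggregation, and the fixed recursion depth are all assumptions made for illustration; the reduced projection width `d` stands in for the channel scaling of the query/key/value matrices mentioned in the abstract.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def recursive_aggregate(x, factor=2, steps=2):
    """Recursively average-pool an (H, W, C) feature map into a small
    representative map (a stand-in for the paper's recursive aggregation)."""
    for _ in range(steps):
        h, w, c = x.shape
        x = x[:h - h % factor, :w - w % factor].reshape(
            h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))
    return x

def rg_sa(x, w_q, w_k, w_v, steps=2):
    """Cross-attention: queries come from the full-resolution map, while keys
    and values come from the aggregated map, so attention cost is linear in
    H*W instead of quadratic."""
    h, w, c = x.shape
    rep = recursive_aggregate(x, steps=steps)      # (h', w', C), h'*w' << H*W
    q = x.reshape(-1, c) @ w_q                     # (H*W, d)
    k = rep.reshape(-1, c) @ w_k                   # (n, d)
    v = rep.reshape(-1, c) @ w_v                   # (n, d)
    d = w_q.shape[1]
    attn = softmax(q @ k.T / np.sqrt(d), axis=-1)  # (H*W, n)
    return (attn @ v).reshape(h, w, -1)            # (H, W, d)
```

Because the key/value set shrinks to a handful of representative tokens, every output position still attends to a summary of the whole image, which is the global-context property the local-window designs lack.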


Related research

- Dual Aggregation Transformer for Image Super-Resolution (08/07/2023)
- Cross-Spatial Pixel Integration and Cross-Stage Feature Fusion Based Transformer Network for Remote Sensing Image Super-Resolution (07/06/2023)
- MaxSR: Image Super-Resolution Using Improved MaxViT (07/14/2023)
- Efficient Long-Range Attention Network for Image Super-resolution (03/13/2022)
- Points to Patches: Enabling the Use of Self-Attention for 3D Shape Recognition (04/08/2022)
- N-Gram in Swin Transformers for Efficient Lightweight Image Super-Resolution (11/21/2022)
- ESTISR: Adapting Efficient Scene Text Image Super-resolution for Real-Scenes (06/04/2023)
