Unfolding Once is Enough: A Deployment-Friendly Transformer Unit for Super-Resolution

08/05/2023
by Yong Liu et al.

Recent years have witnessed several attempts to apply vision transformers to single image super-resolution (SISR). Since the high resolution of intermediate features in SISR models increases memory and computational requirements, efficient SISR transformers are particularly desirable. Building on popular transformer backbones, many methods have explored reasonable schemes to reduce the computational complexity of the self-attention module while achieving impressive performance. However, these methods only focus on performance on the training platform (e.g., PyTorch/TensorFlow) without further optimization for the deployment platform (e.g., TensorRT). Therefore, they inevitably contain redundant operators, posing challenges for subsequent deployment in real-world applications. In this paper, we propose a deployment-friendly transformer unit, namely UFONE (i.e., UnFolding ONce is Enough), to alleviate these problems. In each UFONE, we introduce an Inner-patch Transformer Layer (ITL) to efficiently reconstruct local structural information from patches and a Spatial-Aware Layer (SAL) to exploit long-range dependencies between patches. Based on UFONE, we propose a Deployment-friendly Inner-patch Transformer Network (DITN) for the SISR task, which achieves favorable performance with low latency and memory usage on both training and deployment platforms. Furthermore, to boost the deployment efficiency of DITN on TensorRT, we provide an efficient substitution for layer normalization and propose a fusion optimization strategy for specific operators. Extensive experiments show that our models achieve competitive qualitative and quantitative results with high deployment efficiency. Code is available at <https://github.com/yongliuy/DITN>.
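The "unfold once" idea can be illustrated with a minimal NumPy sketch: the feature map is split into patches a single time, an inner-patch operation (the role the abstract assigns to ITL) mixes pixels inside each patch, and a spatial operation (the role of SAL) mixes information across the patch grid before folding back. The function names and the placeholder ops below are hypothetical illustrations, not the paper's actual layers, which are attention-based.

```python
import numpy as np

def unfold(x, p):
    """Split an (H, W, C) feature map into an (nH, nW, p, p, C) patch grid.
    Assumes H and W are divisible by the patch size p."""
    H, W, C = x.shape
    return x.reshape(H // p, p, W // p, p, C).transpose(0, 2, 1, 3, 4)

def fold(patches):
    """Inverse of unfold: merge the patch grid back into (H, W, C)."""
    nH, nW, p, _, C = patches.shape
    return patches.transpose(0, 2, 1, 3, 4).reshape(nH * p, nW * p, C)

def ufone_like(x, p, inner_op, spatial_op):
    """Unfold once, run an inner-patch op on each patch and a spatial op
    across the patch grid, then fold back. inner_op/spatial_op are
    placeholders standing in for ITL/SAL."""
    patches = unfold(x, p)
    nH, nW, pp, _, C = patches.shape
    # Inner-patch stage: tokens are the p*p pixels inside each patch.
    inner = inner_op(patches.reshape(nH * nW, pp * pp, C)).reshape(patches.shape)
    # Spatial stage: tokens are the patches themselves (long-range mixing).
    pooled = inner.mean(axis=(2, 3))             # (nH, nW, C) patch descriptors
    mixed = spatial_op(pooled)                   # cross-patch interaction
    out = inner + mixed[:, :, None, None, :]     # broadcast back to pixels
    return fold(out)
```

With identity inner op and a zero spatial op, the round trip recovers the input exactly, confirming that a single unfold/fold pair is lossless; only the mixing ops change the features.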


