Accurate Image Restoration with Attention Retractable Transformer

10/04/2022
by Jiale Zhang, et al.

Recently, Transformer-based image restoration networks have achieved promising improvements over convolutional neural networks thanks to parameter-independent global interactions. To lower computational cost, existing works generally limit self-attention computation to non-overlapping windows. However, each group of tokens is then always drawn from a dense area of the image. We regard this as a dense attention strategy, since token interactions are confined to dense regions, and it clearly results in restricted receptive fields. To address this issue, we propose the Attention Retractable Transformer (ART) for image restoration, which employs both dense and sparse attention modules in the network. The sparse attention module allows tokens sampled from sparse areas of the image to interact and thus provides a wider receptive field. Furthermore, alternating dense and sparse attention modules greatly enhances the representation ability of the Transformer while providing retractable attention over the input image. We conduct extensive experiments on image super-resolution, denoising, and JPEG compression artifact reduction tasks. Experimental results validate that our proposed ART outperforms state-of-the-art methods on various benchmark datasets, both quantitatively and visually. We also provide code and models at https://github.com/gladzhang/ART.
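The abstract's distinction between dense and sparse token grouping can be made concrete. Below is a minimal PyTorch-style sketch, not the authors' implementation, contrasting the two schemes: a dense group gathers tokens from one contiguous window, while a sparse group samples tokens at a fixed interval so that a single group spans the whole feature map. The window size and interval values are hypothetical placeholders.

```python
# Minimal sketch (not the ART authors' code) of dense vs. sparse token grouping.
import torch

def dense_groups(x, window=4):
    # x: (B, H, W, C) feature map. Each group holds the tokens of one
    # contiguous (window x window) patch -> dense attention, local receptive field.
    B, H, W, C = x.shape
    x = x.view(B, H // window, window, W // window, window, C)
    # (B, num_groups, window*window, C): every group is a local patch
    return x.permute(0, 1, 3, 2, 4, 5).reshape(B, -1, window * window, C)

def sparse_groups(x, interval=4):
    # Each group holds tokens sampled every `interval` pixels, so its members
    # are spread across the entire image -> sparse attention, wide receptive field.
    B, H, W, C = x.shape
    x = x.view(B, H // interval, interval, W // interval, interval, C)
    # group index = (row mod interval, col mod interval); members cover the image
    return x.permute(0, 2, 4, 1, 3, 5).reshape(B, interval * interval, -1, C)

x = torch.randn(1, 16, 16, 32)
print(dense_groups(x).shape)   # torch.Size([1, 16, 16, 32]): 16 local groups of 16 tokens
print(sparse_groups(x).shape)  # torch.Size([1, 16, 16, 32]): 16 image-spanning groups
```

In both cases self-attention would then be computed independently within each group; alternating the two layouts across blocks is what gives the network its retractable receptive field.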


