Efficient Frequency Domain-based Transformers for High-Quality Image Deblurring

11/22/2022, by Lingshun Kong, et al.

We present an effective and efficient method that explores the properties of Transformers in the frequency domain for high-quality image deblurring. Our method is motivated by the convolution theorem, which states that the correlation or convolution of two signals in the spatial domain is equivalent to their element-wise product in the frequency domain. This inspires us to develop an efficient frequency domain-based self-attention solver (FSAS) that estimates the scaled dot-product attention by an element-wise product operation instead of matrix multiplication in the spatial domain. In addition, we note that simply using the naive feed-forward network (FFN) in Transformers does not generate good deblurred results. To overcome this problem, we propose a simple yet effective discriminative frequency domain-based FFN (DFFN). Inspired by the quantization step of the Joint Photographic Experts Group (JPEG) compression algorithm, we introduce a gated mechanism into the FFN that discriminatively determines which low- and high-frequency information of the features should be preserved for latent clear image restoration. We formulate the proposed FSAS and DFFN into an asymmetric encoder-decoder network, where the FSAS is only used in the decoder module for better image deblurring. Experimental results show that the proposed method performs favorably against state-of-the-art approaches. Code will be available at <https://github.com/kkkls/FFTformer>.
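The convolution theorem underpinning FSAS can be verified numerically. The sketch below (a minimal illustration, not the paper's implementation; the names `q` and `k` are only illustrative stand-ins for query and key feature maps) shows that a circular cross-correlation computed directly in the spatial domain matches the inverse FFT of the element-wise product conj(FFT(q)) * FFT(k), which is what lets an O(N^2) correlation be replaced by an O(N log N) frequency-domain product:

```python
import numpy as np

rng = np.random.default_rng(0)
q = rng.standard_normal((8, 8))  # stand-in "query" feature map
k = rng.standard_normal((8, 8))  # stand-in "key" feature map

def circular_correlation(a, b):
    """Direct circular cross-correlation: out[d] = sum_n a[n] * b[n + d]."""
    h, w = a.shape
    out = np.zeros_like(a)
    for dy in range(h):
        for dx in range(w):
            shifted = np.roll(np.roll(b, -dy, axis=0), -dx, axis=1)
            out[dy, dx] = np.sum(a * shifted)
    return out

direct = circular_correlation(q, k)

# Frequency-domain version: one element-wise product instead of
# a quadratic number of spatial-domain multiply-accumulates.
fft_based = np.fft.ifft2(np.conj(np.fft.fft2(q)) * np.fft.fft2(k)).real

assert np.allclose(direct, fft_based)
```

This equivalence is the motivation for estimating attention-like correlations with an element-wise product in the frequency domain rather than spatial-domain matrix multiplication.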


Related research

- 03/21/2023 · Learning A Sparse Transformer Network for Effective Image Deraining
- 03/13/2023 · SelfPromer: Self-Prompt Dehazing Transformers with Depth-Consistency
- 06/06/2021 · Uformer: A General U-Shaped Transformer for Image Restoration
- 07/28/2022 · HorNet: Efficient High-Order Spatial Interactions with Recursive Gated Convolutions
- 07/01/2021 · Global Filter Networks for Image Classification
- 05/26/2022 · Fast Vision Transformers with HiLo Attention
- 03/20/2023 · Polynomial Implicit Neural Representations For Large Diverse Datasets
