Dual-former: Hybrid Self-attention Transformer for Efficient Image Restoration

10/03/2022
by   Sixiang Chen, et al.

Recently, image restoration transformers have achieved performance comparable to previous state-of-the-art CNNs. However, how to efficiently leverage such architectures remains an open problem. In this work, we present Dual-former, whose critical insight is to combine the powerful global modeling ability of self-attention modules with the local modeling ability of convolutions in one overall architecture. With convolution-based Local Feature Extraction modules equipped in the encoder and the decoder, we adopt a novel Hybrid Transformer Block only in the latent layer to model long-distance dependencies in the spatial dimension and handle the uneven distribution between channels. Such a design eliminates the substantial computational complexity of previous image restoration transformers and achieves superior performance on multiple image restoration tasks. Experiments demonstrate that Dual-former achieves a 1.91dB gain over the state-of-the-art MAXIM method on the Indoor dataset for single image dehazing while consuming only 4.2% of its GFLOPs. For single image deraining, it exceeds the SOTA method by 0.1dB PSNR on the average results of five datasets with only 21.5% of the GFLOPs. Dual-former also substantially surpasses the latest desnowing method on various datasets, with fewer parameters.
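The efficiency argument above rests on where the self-attention is placed: convolution is cheap and local, while self-attention is global but quadratic in the number of tokens, so applying it only to the small latent feature map keeps the cost low. The following is a minimal NumPy sketch of that high-level design, not the authors' implementation; all function names, shapes, and the single-channel simplification are illustrative assumptions.

```python
import numpy as np

def local_feature_extraction(x, kernel):
    """Convolution-based local modeling: a 3x3 convolution over a
    (H, W) feature map with zero padding (encoder/decoder role)."""
    H, W = x.shape
    padded = np.pad(x, 1)
    out = np.zeros_like(x)
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)
    return out

def self_attention(x):
    """Single-head self-attention over N flattened latent tokens.
    x: (N, C) token matrix; cost is O(N^2), hence latent-only use."""
    N, C = x.shape
    scores = x @ x.T / np.sqrt(C)               # global token affinities
    scores -= scores.max(axis=1, keepdims=True) # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)
    return attn @ x

rng = np.random.default_rng(0)
img = rng.standard_normal((16, 16))
kernel = rng.standard_normal((3, 3)) * 0.1

# Encoder: convolutional local modeling, then 2x downsampling.
feat = local_feature_extraction(img, kernel)
latent = feat[::2, ::2]                         # (8, 8) latent map

# Latent layer: global modeling via self-attention on the few tokens.
tokens = self_attention(latent.reshape(-1, 1))  # 64 tokens
latent = tokens.reshape(8, 8)

# Decoder: upsample and another convolutional local refinement.
up = np.kron(latent, np.ones((2, 2)))           # nearest-neighbour 2x
restored = local_feature_extraction(up, kernel)
print(restored.shape)                           # (16, 16)
```

Note the quadratic savings: attending over the 8x8 latent means 64 tokens (a 64x64 affinity matrix), whereas attention at the full 16x16 resolution would need 256 tokens and a 16x larger affinity matrix, which is the kind of cost that convolutional encoder/decoder stages avoid.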
