Learning A Sparse Transformer Network for Effective Image Deraining

03/21/2023
by Xiang Chen, et al.

Transformer-based methods have achieved significant performance in image deraining, as they can model the non-local information that is vital for high-quality image reconstruction. In this paper, we find that most existing Transformers use all similarities of the tokens from the query-key pairs for feature aggregation. However, if the tokens from the query differ from those of the key, the self-attention values estimated from these tokens are also involved in feature aggregation, which interferes with clear image restoration. To overcome this problem, we propose an effective DeRaining network, Sparse Transformer (DRSformer), that can adaptively keep the most useful self-attention values for feature aggregation, so that the aggregated features better facilitate high-quality image reconstruction. Specifically, we develop a learnable top-k selection operator to adaptively retain the most crucial attention scores from the keys for each query for better feature aggregation. Simultaneously, as the naive feed-forward network in Transformers does not model the multi-scale information that is important for latent clear image restoration, we develop an effective mixed-scale feed-forward network to generate better features for image deraining. To learn an enriched set of hybrid features that combines local context from CNN operators, we equip our model with a mixture-of-experts feature compensator to form a cooperative refinement deraining scheme. Extensive experimental results on commonly used benchmarks demonstrate that the proposed method achieves favorable performance against state-of-the-art approaches. The source code and trained models are available at https://github.com/cschenxiang/DRSformer.
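
The learnable top-k selection operator can be illustrated with a short sketch. Below is a minimal PyTorch example of top-k sparse attention, where only the k largest similarity scores per query are kept before the softmax; the function name and the standard spatial-attention layout are illustrative assumptions, and the authors' implementation may differ in attention layout and in how the selection is learned.

```python
import torch

def topk_sparse_attention(q, k, v, top_k):
    """Scaled dot-product attention that keeps only the top-k
    similarity scores per query before the softmax.

    q, k, v: (batch, heads, tokens, dim) tensors.
    top_k:   number of key scores retained per query (fixed here for
             illustration; DRSformer makes this selection adaptive).
    """
    scale = q.shape[-1] ** -0.5
    scores = (q @ k.transpose(-2, -1)) * scale      # (b, h, n, n) similarities

    # Keep the top_k largest scores per query row; mask the rest to -inf
    # so they receive zero weight after the softmax.
    kth_largest = scores.topk(top_k, dim=-1).values[..., -1:]
    scores = scores.masked_fill(scores < kth_largest, float('-inf'))

    attn = scores.softmax(dim=-1)                   # sparse attention weights
    return attn @ v                                 # aggregate value tokens
```

For example, with 64 tokens and top_k=16, each query aggregates features from only a quarter of the keys, suppressing the low-similarity query-key pairs described above.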
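
The mixed-scale feed-forward idea can likewise be sketched as a feed-forward block whose hidden layer runs parallel depthwise convolutions at different kernel sizes; the branch structure and names below are assumptions for illustration rather than the exact DRSformer design.

```python
import torch
import torch.nn as nn

class MixedScaleFFN(nn.Module):
    """Illustrative feed-forward block that injects multi-scale local
    context via parallel depthwise convolutions (a sketch, not the
    authors' exact module).
    """
    def __init__(self, dim, expansion=2):
        super().__init__()
        hidden = dim * expansion
        self.proj_in = nn.Conv2d(dim, hidden, kernel_size=1)
        # Two depthwise branches capture context at different scales.
        self.dw3 = nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden)
        self.dw5 = nn.Conv2d(hidden, hidden, 5, padding=2, groups=hidden)
        self.act = nn.GELU()
        self.proj_out = nn.Conv2d(hidden * 2, dim, kernel_size=1)

    def forward(self, x):                           # x: (batch, dim, H, W)
        x = self.proj_in(x)
        x = torch.cat([self.act(self.dw3(x)), self.act(self.dw5(x))], dim=1)
        return self.proj_out(x)
```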
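
Finally, the mixture-of-experts feature compensator can be pictured as a set of CNN experts whose outputs are mixed by a learned gate; again, this is a hedged sketch under assumed expert and gating choices, not the published module.

```python
import torch
import torch.nn as nn

class MoEFeatureCompensator(nn.Module):
    """Illustrative mixture-of-experts compensator: CNN experts with
    different receptive fields supply local context, and a gate learned
    from global statistics mixes them (kernel sizes and gating are
    assumptions for this sketch).
    """
    def __init__(self, dim, kernel_sizes=(1, 3, 5, 7)):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Conv2d(dim, dim, k, padding=k // 2) for k in kernel_sizes
        )
        self.gate = nn.Sequential(                  # global context -> expert weights
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(dim, len(kernel_sizes)),
            nn.Softmax(dim=-1),
        )

    def forward(self, x):                           # x: (batch, dim, H, W)
        w = self.gate(x)                            # (batch, num_experts)
        out = torch.stack([e(x) for e in self.experts], dim=1)  # (b, E, dim, H, W)
        w = w.view(*w.shape, 1, 1, 1)               # broadcast over (dim, H, W)
        return (w * out).sum(dim=1) + x             # gated sum plus residual
```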

Related research

08/15/2023 · Learning Image Deraining Transformer Network with Dynamic Dual Self-Attention
Recently, Transformer-based architecture has been introduced into single...

11/22/2022 · Efficient Frequency Domain-based Transformers for High-Quality Image Deblurring
We present an effective and efficient method that explores the propertie...

09/06/2023 · Prompt-based All-in-One Image Restoration using CNNs and Transformer
Image restoration aims to recover the high-quality images from their deg...

03/13/2023 · SelfPromer: Self-Prompt Dehazing Transformers with Depth-Consistency
This work presents an effective depth-consistency self-prompt Transforme...

12/17/2021 · Towards End-to-End Image Compression and Analysis with Transformers
We propose an end-to-end image compression and analysis model with Trans...

01/12/2020 · Concurrently Extrapolating and Interpolating Networks for Continuous Model Generation
Most deep image smoothing operators are always trained repetitively when...

09/28/2022 · Motion Transformer for Unsupervised Image Animation
Image animation aims to animate a source image by using motion learned f...
