MB-TaylorFormer: Multi-branch Efficient Transformer Expanded by Taylor Formula for Image Dehazing

08/27/2023
by Yuwei Qiu et al.

In recent years, Transformer networks have begun to replace pure convolutional neural networks (CNNs) in the field of computer vision due to their global receptive field and adaptability to input. However, the quadratic computational complexity of softmax-attention limits their wide application in the image dehazing task, especially for high-resolution images. To address this issue, we propose a new Transformer variant, which applies a Taylor expansion to approximate the softmax-attention and achieves linear computational complexity. A multi-scale attention refinement module is proposed as a complement to correct the error of the Taylor expansion. Furthermore, we introduce a multi-branch architecture with multi-scale patch embedding to the proposed Transformer, which embeds features via overlapping deformable convolutions of different scales. The design of the multi-scale patch embedding is based on three key ideas: 1) various sizes of the receptive field; 2) multi-level semantic information; 3) flexible shapes of the receptive field. Our model, named Multi-branch Transformer expanded by Taylor formula (MB-TaylorFormer), can embed coarse-to-fine features more flexibly at the patch-embedding stage and capture long-distance pixel interactions at limited computational cost. Experimental results on several dehazing benchmarks show that MB-TaylorFormer achieves state-of-the-art (SOTA) performance with a light computational burden. The source code and pre-trained models are available at https://github.com/FVL2020/ICCV-2023-MB-TaylorFormer.
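To make the linear-complexity idea concrete, below is a minimal PyTorch sketch of attention linearized by a first-order Taylor expansion, exp(q·k) ≈ 1 + q·k. It illustrates the general technique only, not the paper's exact implementation: the function name, the L2 normalization of q and k, and the tensor layout are our assumptions, and the sketch omits the multi-scale attention refinement module that the paper uses to correct the truncation error.

```python
import torch
import torch.nn.functional as F

def taylor_linear_attention(q, k, v, eps=1e-6):
    """Softmax attention approximated by a first-order Taylor expansion.

    exp(q_i . k_j) is replaced by 1 + q_i . k_j (with q and k L2-normalized
    so the dot products stay in [-1, 1] and the weights stay non-negative),
    which lets the key/value summations be computed once and shared by all
    queries, reducing the cost from O(N^2) to O(N) in the token count N.

    Shapes (an assumed layout): q, k, v are (batch, heads, N, d).
    """
    q = F.normalize(q, dim=-1)
    k = F.normalize(k, dim=-1)
    n = k.shape[-2]

    # Statistics over all keys, computed once and reused by every query.
    kv = torch.einsum('bhnd,bhne->bhde', k, v)   # sum_j k_j v_j^T
    k_sum = k.sum(dim=-2)                        # sum_j k_j
    v_sum = v.sum(dim=-2)                        # sum_j v_j

    # Numerator:   sum_j (1 + q_i.k_j) v_j = v_sum + q_i^T (sum_j k_j v_j^T)
    # Denominator: sum_j (1 + q_i.k_j)     = N + q_i . k_sum
    num = v_sum.unsqueeze(-2) + torch.einsum('bhnd,bhde->bhne', q, kv)
    den = n + torch.einsum('bhnd,bhd->bhn', q, k_sum)
    return num / (den.unsqueeze(-1) + eps)
```

Because the summations over keys and values are shared across all queries, the cost grows linearly rather than quadratically with the number of pixels, which is what makes this style of attention practical for high-resolution dehazing.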
