Image Fusion Transformer

by Vibashan VS, et al.

In image fusion, images obtained from different sensors are fused to generate a single image with enhanced information. In recent years, state-of-the-art methods have adopted Convolutional Neural Networks (CNNs) to encode meaningful features for image fusion. Specifically, CNN-based methods perform image fusion by fusing local features. However, they do not consider the long-range dependencies that are present in the image. Transformer-based models are designed to overcome this by modeling long-range dependencies with a self-attention mechanism. This motivates us to propose a novel Image Fusion Transformer (IFT), in which we develop a transformer-based multi-scale fusion strategy that attends to both local and long-range information (or global context). The proposed method follows a two-stage training approach. In the first stage, we train an auto-encoder to extract deep features at multiple scales. In the second stage, multi-scale features are fused using a Spatio-Transformer (ST) fusion strategy. Each ST fusion block consists of a CNN branch and a transformer branch, which capture local and long-range features, respectively. Extensive experiments on multiple benchmark datasets show that the proposed method performs better than many competitive fusion algorithms. Furthermore, we show the effectiveness of the proposed ST fusion strategy with an ablation analysis. The source code is available at:
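The ST fusion idea described above pairs a convolutional branch (local features) with a self-attention branch (long-range dependencies) over the merged source features. The following is a minimal single-channel NumPy sketch of that structure; the weights, the additive merge of the two inputs, and the additive combination of the two branches are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def conv3x3(x, w):
    """Local branch: one 3x3 'same'-padded convolution over a 2D feature map."""
    H, W = x.shape
    p = np.pad(x, 1)
    out = np.zeros_like(x)
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(p[i:i + 3, j:j + 3] * w)
    return out

def self_attention(x, wq, wk, wv):
    """Global branch: single-head self-attention over flattened spatial tokens."""
    H, W = x.shape
    tokens = x.reshape(H * W, 1)                  # each pixel acts as a 1-dim token
    q, k, v = tokens @ wq, tokens @ wk, tokens @ wv
    scores = q @ k.T / np.sqrt(q.shape[1])        # scaled dot-product attention
    scores -= scores.max(axis=1, keepdims=True)   # numerically stable softmax
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)
    return (attn @ v).reshape(H, W)

def st_fuse(feat_a, feat_b, w_conv, wq, wk, wv):
    """Hypothetical ST-style fusion: merge the two source feature maps, then
    combine the local (conv) and global (attention) responses."""
    merged = feat_a + feat_b
    return conv3x3(merged, w_conv) + self_attention(merged, wq, wk, wv)
```

In the actual method this fusion would be applied at each scale of the auto-encoder's feature pyramid, with multi-channel features and learned weights; the sketch only shows how the two branches see the same merged input and contribute complementary local and global responses.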
