SwinFuse: A Residual Swin Transformer Fusion Network for Infrared and Visible Images

04/25/2022
by   Zhishe Wang, et al.
NetEase, Inc

The existing deep learning fusion methods mainly concentrate on convolutional neural networks, and few attempts have been made with transformers. Meanwhile, the convolutional operation is a content-independent interaction between the image and the convolution kernel, which may lose important contexts and further limit fusion performance. Towards this end, we present a simple and strong fusion baseline for infrared and visible images, namely the Residual Swin Transformer Fusion Network, termed SwinFuse. Our SwinFuse includes three parts: global feature extraction, a fusion layer and feature reconstruction. In particular, we build a fully attentional feature encoding backbone to model long-range dependencies; it is a pure transformer network and has a stronger representation ability than convolutional neural networks. Moreover, we design a novel feature fusion strategy based on the L_1-norm for sequence matrices, measuring the corresponding activity levels along the row and column vector dimensions, which retains competitive infrared brightness and distinct visible details. Finally, we compare our SwinFuse with nine state-of-the-art traditional and deep learning methods on three different datasets through subjective observations and objective comparisons, and the experimental results show that the proposed SwinFuse obtains surprising fusion performance with strong generalization ability and competitive computational efficiency. The code will be available at https://github.com/Zhishe-Wang/SwinFuse.



I Introduction

Infrared sensors can detect hidden or camouflaged targets by capturing thermal radiation energy, and have strong anti-interference ability under all-day and all-weather conditions, but they cannot acquire typical background details and structural texture. On the contrary, visible sensors can perceive scene details, color information and texture characteristics by receiving reflected light, but they fail to distinguish prominent targets and are easily affected by weather and light variations. Considering the complementarity of their imaging mechanisms and working conditions, image fusion technology aims to combine their complementary features into a synthesized image through a specific algorithm. The obtained fusion image is more in line with human visual perception, and subsequent computer vision tasks, such as object detection and object recognition, can achieve more accurate decisions with it than with a single sensor. Therefore, infrared and visible image fusion cooperates these two sensors to generate a higher-quality result, and has important applications in many fields, such as person re-identification [1], object fusion tracking [2] and salient object detection [3].

Generally, the core problems of infrared and visible image fusion are how to effectively extract and combine complementary features. The traditional methods, such as multi-scale transform [4], sparse representation [5], hybrid methods [6], saliency-based methods [7, 8] and others [9, 10], usually design a fixed representation model to extract features, adopt a specific fusion strategy for their combination, and then reconstruct a final result by the inverse operations. For example, Li et al. introduced MDLatLRR [5], where latent low-rank representation was designed to model base and detail features, and weighted averaging and the nuclear norm were proposed as fusion strategies. However, different imaging mechanisms produce contrasting modal characteristics: an infrared image perceives prominent targets through pixel brightness, while a visible image describes structural texture details through gradients and edges. The traditional methods do not fully take these modal discrepancies into account, and adopt the same representation model to extract features indiscriminately, which may fail to acquire the most effective inherent features and tends to produce weak fusion performance. Besides, the designed fusion rules are usually hand-crafted and tend to become more and more complex, which inevitably limits practical applications.

Fig. 1: A contrastive example of Sandpath selected from the TNO dataset. These images are, in sequence, the infrared and visible images, the results of MDLatLRR [5], DenseFuse [12], FusionGAN [13], and our SwinFuse. Our result has more prominent target perception and a clearer detail description.

Different from traditional methods, deep learning can automatically extract different modal features by using a series of learnable filter banks, and has a strong nonlinear fitting ability to establish the complicated relationship between input and output [11]. Typically, Li et al. presented DenseFuse [12], which proposed a convolutional neural network (CNN) to replace the traditional representation model for better feature extraction and reconstruction, and manually designed the corresponding fusion strategies. Ma et al. introduced FusionGAN [13], in which the generator was continually optimized by the discriminator with adversarial learning to generate the desired fusion image, avoiding the manual design of activity levels and fusion rules. Although these methods have achieved impressive performance, some issues need to be further addressed. Notably, the quality of the obtained fusion image is related not only to pixels within a small receptive field, but also to the pixel intensity and texture details of the overall image. On the one hand, these methods are constrained by the basic convolutional principle, and the interaction between the image and the convolution kernel is content-independent. Utilizing a uniform convolution kernel to extract features from disparate modal images may not be the most effective way. On the other hand, these methods follow the guideline of local processing and establish local deep features through a limited receptive field, but cannot model long-range dependencies, which may lose some important contexts.

To address the above issues, we present a simple and strong baseline based on Swin Transformer [14] for infrared and visible image fusion, namely SwinFuse. Swin Transformer first partitions non-overlapping windows to model local attention, and then periodically shifts these windows to bridge global attention. More particularly, we propose residual Swin Transformer blocks, composed of several Swin Transformer layers, as the backbone to extract global features. Meanwhile, we adopt a residual connection as a shortcut to implement low-level feature aggregation and information retention. Our SwinFuse makes use of the fully attentional model to interact image content with attention weights, and has a powerful ability to model long-range dependencies, which overcomes the limitations of existing deep learning based models and significantly promotes infrared and visible image fusion performance to a new level.

To demonstrate the visual performance of our SwinFuse, Fig. 1 gives a contrastive example of Sandpath selected from the TNO dataset [15]. From intuitive observation, MDLatLRR [5] and DenseFuse [12] preserve abundant texture details from the visible image, but lose the typical target information from the infrared image. On the contrary, FusionGAN [13] retains the high-brightness target of the infrared image, while the corresponding target edges are fairly obscure and texture details are seriously lost. However, our SwinFuse achieves satisfactory results in preserving prominent infrared targets and rich visible details, and has better image contrast.

Our SwinFuse includes three main contributions:

We build a fully attentional feature encoding backbone to model the long-range dependency, which only applies the pure transformer without convolutional neural networks. The obtained global attention features have the stronger representation ability in focusing on infrared target perception and visible detail description.

We design a novel feature fusion strategy based on the L_1-norm for sequence matrices. The activity levels of the source images are measured along the row and column vector dimensions, respectively. With this strategy, the obtained results can well retain the competitive brightness of infrared targets and the distinct details of the visible background.

We propose an infrared and visible image fusion transformer framework, and conduct extensive experiments on different testing datasets. Our SwinFuse achieves surprising results and generalization ability, surpassing other state-of-the-art deep learning based methods in terms of subjective observation and objective comparison.

The rest of this paper is arranged as follows. Section II mainly introduces transformer in vision tasks and deep learning based methods. Section III illustrates the proposed network architecture and design philosophy. The experiments and discussions are presented in Section IV, and Section V draws the relevant conclusions.

II Related Work

In this section, we firstly introduce the application of the transformer in some vision tasks, and then emphatically illustrate the development of deep learning based methods.

II-A Transformer in Vision Tasks

Transformer [16] was originally designed for machine translation, and has achieved great success in natural language processing (NLP). In 2020, Dosovitskiy et al. presented the Vision Transformer (ViT) [17], which divided an image into 16×16 patches and directly fed these patches into the standard transformer encoder. ViT splits an image into a linearly embedded sequence and models long-range dependencies with the self-attention mechanism, which generates promising results on certain tasks, such as image classification [18] and image retrieval [19]. However, high-resolution images and visual elements varying in scale are very different from the word tokens of NLP, which brings significant challenges for adapting the transformer from the language domain to the vision domain, especially in terms of performance and computational efficiency.
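The patch tokenization described above can be sketched in a few lines of NumPy. This is a hedged illustration, not ViT's actual implementation: the projection matrix here is random, standing in for the learned linear embedding, and a single-channel image is assumed for brevity.

```python
import numpy as np

def patch_embed(img, patch=16, dim=96, rng=np.random.default_rng(0)):
    """Split an H x W image into non-overlapping patch tokens and
    linearly project each flattened patch (random weights stand in
    for the learned embedding)."""
    h, w = img.shape
    assert h % patch == 0 and w % patch == 0
    # (H/p, p, W/p, p) -> (H/p, W/p, p, p) -> (num_patches, p*p)
    tokens = (img.reshape(h // patch, patch, w // patch, patch)
                 .transpose(0, 2, 1, 3)
                 .reshape(-1, patch * patch))
    proj = rng.standard_normal((patch * patch, dim))  # learnable in practice
    return tokens @ proj  # (num_patches, dim)

emb = patch_embed(np.zeros((224, 224)))
print(emb.shape)  # (196, 96): 14 x 14 patches of 16 x 16 pixels
```

A 224×224 image thus becomes a sequence of 196 tokens, which is exactly the "linear embedded sequence" the transformer encoder consumes.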

To overcome the above limitations, Liu et al. developed Swin Transformer [14], in which images are partitioned into local windows that are cross-linked through a shifted-window operation, and attention computation is limited to the corresponding window. Thus, its hierarchical architecture introduces the locality of the convolutional operation, and obtains a lower computational complexity that is linear with the image size. Inspired by this work, researchers have investigated its superiority for other computer vision tasks. For example, Liang et al. presented SwinIR [20] for image restoration, which first used a convolutional layer to extract shallow features, and then adopted Swin Transformer for deep feature extraction. Lin et al. introduced SwinTrack [21] to interact the target object with the search region for tracking. However, few studies have applied the transformer to the image fusion field.

II-B Deep Learning-based Fusion Methods

Recently, deep learning models [22-31] have shown strong capacities in feature extraction and nonlinear data fitting, and have become the mainstream direction of image fusion tasks. Typically, Li et al. introduced DenseFuse [12], where an encoder with a convolution layer and a dense block was proposed for feature extraction, and a decoder including four convolution layers was used for feature reconstruction. Zhang et al. presented IFCNN [24], where the encoder and decoder each included two convolution layers, and element-wise maximum, minimum and mean were applied as fusion rules. These methods design simple networks and propose appropriate fusion rules, but fail to consider the long-range dependency. To achieve better fusion performance, Jian et al. proposed a symmetric feature encoding and decoding network, namely SEDRFuse [25], which applied feature compensation and attention fusion to improve fusion performance. Wang et al. presented Res2Fusion [26], where a multi-field aggregated feature encoding backbone was constructed, and double nonlocal attention models were used as fusion strategies. Meanwhile, they also introduced UNFusion [27], in which a densely connected feature encoding and decoding network was exploited, and normalized attention models were designed to model the global dependency. The above methods need to manually design the corresponding fusion rules, and are non-end-to-end fusion models. To address this issue, Li et al. developed a two-stage learnable network, termed RFN-Nest [28], an improved version of NestFuse [29], and proposed a learnable residual fusion network to replace hand-crafted fusion strategies. Furthermore, PMGI [30] and U2Fusion [31] proposed unified end-to-end networks to satisfy several different fusion tasks simultaneously.

In addition, some researchers have exploited the generative adversarial network (GAN) [32-36] for image fusion, and achieved satisfactory results to some extent. Typically, Ma et al. first presented FusionGAN [13], in which an adversarial learning network including a generator and a discriminator was proposed. Since only one discriminator is used, its results are biased towards infrared images and lack visible texture details. Subsequently, they exploited a dual-discriminator architecture, namely DDcGAN [32], to overcome the shortcoming of a single discriminator and applied it to multi-resolution image fusion. Meanwhile, Ma et al. proposed GANMcC [33], where a multi-classification constrained adversarial network with main and auxiliary loss functions was designed to balance the gradient and intensity information. Although these GAN-based methods have achieved good performance, they still have limited ability in highlighting thermal targets and rendering unambiguous visible details. Further, Yang et al. [34] constructed a texture conditional generative adversarial network to capture the texture map, and proposed a squeeze-and-excitation module to highlight texture information. Li et al. presented a multi-grained attentional network, namely MgAN-Fuse [35], which integrated attention modules into the encoder-decoder network to capture the context information in the generator. Meanwhile, they also introduced AttentionFGAN [36], where a multi-scale attention module was integrated into both the generator and the discriminator.

The above-mentioned methods mainly depend on the convolutional layer to accomplish local feature extraction and reconstruction, and emphasize the elaborate design of network architectures, such as dense blocks [12, 31], residual blocks [26, 28] and multi-scale characteristics [25, 27, 29]. Furthermore, some of them introduce the attention mechanism into the convolutional neural network to improve feature representation ability [25-27]. In particular, Qu et al. developed TransMEF [37] for multi-exposure image fusion, which integrated a CNN module and a transformer module to extract both local and global features. However, their method uses the transformer as a supplement to the CNN. Different from the existing methods, we introduce a pure transformer encoding backbone without the help of convolutional neural networks. With the stronger representation ability of the self-attention mechanism, our SwinFuse can bring a significant breakthrough in image fusion performance.

III Method

In this section, we first introduce the network architecture, and then detail the design of the residual Swin Transformer block, the fusion strategy and the loss function.

Fig. 2: The network architecture of our SwinFuse, which consists of global feature extraction, a fusion layer and feature reconstruction. Notably, in the training phase, the fusion layer is removed.

III-A Network Overview

As illustrated in Fig. 2, our SwinFuse consists of three main parts: global feature extraction, fusion layer and feature reconstruction. Given the testing infrared and visible images I_Φ ∈ R^{H×W×C_in} (H, W and C_in respectively represent the height, width and input channel number; Φ = ir for the infrared image and Φ = vis for the visible image), we first use a convolutional layer with a 1×1 kernel to implement positional encoding, and transform the channel number from C_in to C. The initial features are defined by Eq. 1:

F⁰_Φ = PE(I_Φ), Φ ∈ {ir, vis},  (1)

where PE(·) denotes the positional encoding, and C is the output channel number, set to 96. Notably, the convolutional layer is an effective way to implement positional encoding, transforming the image space into a high-dimensional feature space. Subsequently, we reshape the initial features into sequence vectors F⁰_Φ ∈ R^{HW×C}, and apply M residual Swin Transformer blocks (RSTBs) to extract the global features F^M_Φ, expressed by Eq. 2:

F^m_Φ = H^m_RSTB(F^{m−1}_Φ), m = 1, 2, …, M,  (2)

where H^m_RSTB represents the m-th RSTB. With these operations, the global features of the infrared and visible images are extracted. Then, we adopt a fusion layer based on the L_1-norm over the row and column vector dimensions to obtain the fused global features F_f, formulated by Eq. 3:

F_f = Fuse(F^M_ir, F^M_vis),  (3)

where Fuse(·) denotes the fusion operation. Finally, we again reshape the fused global features from R^{HW×C} to R^{H×W×C}, and use a convolutional layer to reconstruct the fusion image I_f, defined by Eq. 4:

I_f = R(F_f),  (4)

where R(·) represents the feature reconstruction. This convolutional layer has a 1×1 kernel, a padding of 0, and is followed by a Tanh activation layer.
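The 1×1 positional-encoding convolution and the reshape to sequence vectors can be sketched in NumPy. A 1×1 convolution is just a per-pixel linear map, so the sketch below uses a plain matrix product; the random kernel weights are illustrative stand-ins for the learned parameters, with C = 96 as in the text.

```python
import numpy as np

H, W, C = 224, 224, 96
img = np.random.default_rng(1).standard_normal((H, W, 1))  # single-channel input

# A 1x1 convolution is a per-pixel linear map: lift 1 input channel to C.
w = np.random.default_rng(2).standard_normal((1, C))  # kernel weights (illustrative)
b = np.zeros(C)
feat = img @ w + b            # (H, W, C) initial features

# Reshape spatial features into a token sequence for the transformer blocks.
seq = feat.reshape(H * W, C)
print(seq.shape)              # (50176, 96)
```

The inverse reshape, from (HW)×C back to H×W×C, is the operation applied before the reconstruction layer at the end of the pipeline.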

Fig. 3: The fusion strategy of our SwinFuse. The left part performs row vector normalization, while the right part performs column vector normalization.

III-B Residual Swin Transformer Block

Fig. 2(a) describes the architecture of the residual Swin Transformer block (RSTB), which includes a series of Swin Transformer layers (STLs) along with a residual connection. Given the input sequence vectors F^{m−1}_Φ, we apply n Swin Transformer layers to extract the intermediate global features, and the final output of the RSTB is calculated by Eq. 5:

F^m_Φ = H^n_STL(H^{n−1}_STL(⋯H¹_STL(F^{m−1}_Φ))) + F^{m−1}_Φ,  (5)

where H^n_STL denotes the n-th Swin Transformer layer. Similar to a CNN architecture, the multi-layer Swin Transformer can effectively model global features, and the residual connection can aggregate different levels of features.

The Swin Transformer layer, shown in Fig. 2(b), first utilizes an N×N sliding window to partition the input into non-overlapping local windows, and computes their local attention. For the feature X of a local window, the matrices Q, K and V are calculated by Eq. 6:

Q = XW^Q, K = XW^K, V = XW^V,  (6)

where W^Q, W^K and W^V are the learnable parameters of three linear projection layers shared across different windows, and d is the dimension of Q and K. Meanwhile, the sequence matrices of the self-attention mechanism are formulated by Eq. 7:

Attention(Q, K, V) = SoftMax(QKᵀ/√d + p)V,  (7)

where p is a learnable parameter for the positional encoding. Subsequently, the Swin Transformer layer computes the standard multi-head self-attention (MSA) again for the shifted windows. On the whole, it consists of a window-based MSA (W-MSA) and a shifted-window MSA (SW-MSA), each followed by a multi-layer perceptron (MLP) with a Gaussian error linear unit (GELU) nonlinearity in between. A LayerNorm layer is applied before each MSA and MLP, and a residual connection is employed for each module.
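Window partitioning and the attention of Eq. 6-7 can be illustrated with a minimal single-head NumPy sketch. This is a simplified illustration, not the full layer: it omits multi-head splitting, the learnable position bias p, the shifted-window pass and the MLP, and uses random projection weights in place of learned ones.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def window_attention(x, win=7, rng=np.random.default_rng(0)):
    """Single-head W-MSA sketch: partition an (H, W, C) feature map into
    win x win windows and run scaled dot-product attention inside each."""
    H, W, C = x.shape
    d = C  # single head: Q/K dimension equals the channel dimension
    wq, wk, wv = (rng.standard_normal((C, d)) * 0.02 for _ in range(3))
    # (H/win, win, W/win, win, C) -> (num_windows, win*win, C)
    xw = (x.reshape(H // win, win, W // win, win, C)
            .transpose(0, 2, 1, 3, 4)
            .reshape(-1, win * win, C))
    q, k, v = xw @ wq, xw @ wk, xw @ wv
    attn = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(d))  # per-window weights
    return attn @ v  # (num_windows, win*win, d)

out = window_attention(np.zeros((28, 28, 96)))
print(out.shape)  # (16, 49, 96): 4 x 4 windows of 7 x 7 tokens
```

Because attention is computed only inside each 49-token window, the cost grows linearly with the number of windows, which is the source of Swin Transformer's linear complexity in image size.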

III-C Fusion Strategy

In the fusion layer, as illustrated in Fig. 3, we design a novel fusion strategy based on the L_1-norm for the sequence matrices of infrared and visible images, and measure their activity levels along the row and column vector dimensions. For their respective global features, termed F_ir and F_vis, we first calculate their row vector weights by the L_1-norm, and adopt softmax to obtain their activity levels, termed ω^r_ir and ω^r_vis, which are expressed by Eq. 8 and 9:

ω^r_ir(i) = e^{‖F_ir(i,:)‖₁} / (e^{‖F_ir(i,:)‖₁} + e^{‖F_vis(i,:)‖₁}),  (8)
ω^r_vis(i) = e^{‖F_vis(i,:)‖₁} / (e^{‖F_ir(i,:)‖₁} + e^{‖F_vis(i,:)‖₁}),  (9)

where ‖·‖₁ denotes the L_1-norm calculation and i indexes the rows. Then, we directly multiply the activity levels with the corresponding global features to obtain the fused global features in the row vector dimension, termed F^r_f, which is formulated by Eq. 10:

F^r_f(i,:) = ω^r_ir(i) F_ir(i,:) + ω^r_vis(i) F_vis(i,:).  (10)

Subsequently, similar to the above operations, we measure the activity levels along the column vector dimension, termed ω^c_ir and ω^c_vis, which are expressed by Eq. 11 and 12:

ω^c_ir(j) = e^{‖F_ir(:,j)‖₁} / (e^{‖F_ir(:,j)‖₁} + e^{‖F_vis(:,j)‖₁}),  (11)
ω^c_vis(j) = e^{‖F_vis(:,j)‖₁} / (e^{‖F_ir(:,j)‖₁} + e^{‖F_vis(:,j)‖₁}),  (12)

where j indexes the columns. We can then obtain the fused global features in the column vector dimension, termed F^c_f, which is formulated by Eq. 13:

F^c_f(:,j) = ω^c_ir(j) F_ir(:,j) + ω^c_vis(j) F_vis(:,j).  (13)

Finally, we adopt an element-wise addition for the fused global features in the row and column vector dimensions, and obtain the final fused global features, calculated by Eq. 14:

F_f = F^r_f + F^c_f.  (14)

The obtained final fused global features are used to reconstruct the fusion image by a convolutional layer. It is worth noting that the fusion layer is only retained during the testing phase while removed during the training phase.
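The whole fusion strategy can be prototyped in NumPy. This is a hedged sketch of our reading of Eq. 8-14, assuming the softmax competes between the two modalities per row (and per column); it is not the paper's reference implementation.

```python
import numpy as np

def softmax_pair(a, b):
    """Stable two-way softmax: weights for competing IR/visible norms."""
    m = np.maximum(a, b)
    ea, eb = np.exp(a - m), np.exp(b - m)
    return ea / (ea + eb), eb / (ea + eb)

def l1_fusion(f_ir, f_vis):
    """Fuse two (L, C) sequence matrices: weight each by the softmax of
    L1 norms taken over rows (token axis) and columns (channel axis),
    then sum the two fused results element-wise (Eq. 14)."""
    # Row activity: one weight per token (Eq. 8-10).
    r_ir, r_vis = softmax_pair(np.abs(f_ir).sum(1, keepdims=True),
                               np.abs(f_vis).sum(1, keepdims=True))
    f_row = r_ir * f_ir + r_vis * f_vis
    # Column activity: one weight per channel (Eq. 11-13).
    c_ir, c_vis = softmax_pair(np.abs(f_ir).sum(0, keepdims=True),
                               np.abs(f_vis).sum(0, keepdims=True))
    f_col = c_ir * f_ir + c_vis * f_vis
    return f_row + f_col

# If one modality is all ones and the other all zeros, nearly all weight
# goes to the active modality in both dimensions, so the fused result ~2.
f = l1_fusion(np.ones((49, 96)), np.zeros((49, 96)))
```

Because the weights come from L_1 norms of whole rows and columns, a token or channel with large overall activation (e.g. a bright infrared target) dominates the fused output, which matches the intended behavior of retaining infrared brightness and visible detail.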

III-D Loss Function

In the training phase, we adopt the structural similarity (SSIM) and L_1 losses to supervise the network training. SSIM is independent of image brightness and contrast, and reflects the attributes of structural information, such as scene details and structural texture, but it is prone to color deviation and brightness variation. Therefore, we additionally adopt the L_1 loss function to make up for this shortcoming. The SSIM and L_1 loss functions are respectively defined as Eq. 15 and 16:

L_ssim = 1 − SSIM(I_out, I_in),  (15)
L_1 = ‖I_out − I_in‖₁,  (16)

where SSIM(·) denotes the SSIM operation, and I_in and I_out respectively represent the input and output images. Moreover, the total loss function is defined by Eq. 17:

L_total = L_ssim + λ L_1,  (17)

where λ is a hyper-parameter used to adjust the difference in order of magnitude between L_ssim and L_1. In Section IV, we discuss its impact on fusion performance.
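The λ-weighted combination of Eq. 17 can be sketched as follows. The SSIM term is assumed to come from an external implementation and is passed in as a scalar; the mean-absolute-error form of the L_1 term and the default λ = 1e3 (the value the ablation in Section IV favours) are illustrative choices.

```python
import numpy as np

def l1_loss(out, ref):
    """Mean absolute pixel difference between output and reference."""
    return np.abs(out - ref).mean()

def total_loss(out, ref, ssim_value, lam=1e3):
    """L_total = L_ssim + lambda * L_1 (Eq. 17).
    ssim_value: SSIM(out, ref) in [0, 1], from an external implementation."""
    l_ssim = 1.0 - ssim_value          # Eq. 15
    return l_ssim + lam * l1_loss(out, ref)

# Identical images with perfect SSIM give zero total loss.
loss = total_loss(np.zeros((8, 8)), np.zeros((8, 8)), ssim_value=1.0)
print(loss)  # 0.0
```

Since L_ssim lives in [0, 2] while the per-pixel L_1 term is typically orders of magnitude smaller, λ rescales the L_1 term so that both losses contribute comparably to the gradient.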

IV Experiments and Analyses

In this section, we firstly introduce the experimental setup, and then focus on the discussion and analysis of the relevant experiments.

IV-A Experimental Setup

During the training phase, we use the MS-COCO [38] dataset, which consists of more than 80,000 natural images of different categories, to train our SwinFuse network. To accommodate the network training, all images are transformed to a size of 224×224 and a grayscale range of [-1, 1]. Moreover, the numbers of RSTBs and STLs are set to 3 and 6. The patch size and sliding window size are set to 1×1 and 7×7, respectively. The head numbers of the three RSTBs are set to 1, 2 and 4, respectively. In addition, we use Adam as the optimizer, and set the learning rate, batch size and number of epochs to 1×10⁻⁴, 4 and 50, respectively. The training platform is equipped with an Intel i9-10850K CPU, 64 GB RAM and an NVIDIA GeForce RTX 3090 GPU.

During the testing phase, we adopt three datasets, namely TNO [15], Roadscene [39] and OTCBVS [40], to demonstrate the effectiveness of our SwinFuse, and respectively select 20, 40 and 31 images from the corresponding datasets. Moreover, we transform the grayscale range of the source images to [-1, 1], and utilize a 224×224 sliding window to partition them into several patches, where the invalid region is filled with 0. After fusing each patch pair, we perform the reverse operation according to the previous partition order to obtain the final fusion image.
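The zero-filled partitioning and reverse reassembly described above can be sketched as a tile/untile round trip in NumPy. This is an illustrative reconstruction of the procedure, assuming non-overlapping patches and zero padding of the invalid region only on the bottom and right edges.

```python
import numpy as np

def tile(img, size=224):
    """Pad an image with zeros to a multiple of `size` and cut it into
    non-overlapping size x size patches, remembering the layout."""
    h, w = img.shape
    ph, pw = -h % size, -w % size          # zero fill for the invalid region
    padded = np.pad(img, ((0, ph), (0, pw)))
    rows, cols = padded.shape[0] // size, padded.shape[1] // size
    patches = [padded[r*size:(r+1)*size, c*size:(c+1)*size]
               for r in range(rows) for c in range(cols)]
    return patches, (rows, cols, h, w)

def untile(patches, layout, size=224):
    """Reverse of tile(): stitch (fused) patches back in partition order
    and crop away the zero padding."""
    rows, cols, h, w = layout
    out = np.zeros((rows * size, cols * size))
    for i, p in enumerate(patches):
        r, c = divmod(i, cols)
        out[r*size:(r+1)*size, c*size:(c+1)*size] = p
    return out[:h, :w]

img = np.arange(300 * 500, dtype=float).reshape(300, 500)
patches, layout = tile(img)                # 2 x 3 grid of 224 x 224 patches
restored = untile(patches, layout)
assert np.array_equal(restored, img)       # lossless round trip
```

In the actual pipeline each infrared/visible patch pair would be fused by the network between `tile` and `untile`; here the identity round trip just verifies that the partition order and cropping are consistent.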

Meanwhile, we choose nine representative methods, namely MDLatLRR [5], IFCNN [24], DenseFuse [12], RFN-Nest [28], FusionGAN [13], GANMcC [33], PMGI [30], SEDRFuse [25] and Res2Fusion [26], to compare with our SwinFuse. In addition, eight evaluation indexes, namely average gradient (AG), spatial frequency (SF) [41], standard deviation (SD) [42], multi-scale structural similarity (MS_SSIM) [43], feature mutual information by wavelet (FMI_w) [44], mutual information (MI) [45], the sum of the correlation differences (SCD) [46] and visual information fidelity for fusion (VIFF) [47], are selected for fair and comprehensive comparisons.

Fig. 4: The subjective ablation comparisons of Sandpath selected from the TNO dataset. These images are, in sequence, the source images and the results obtained by the proposed SwinFuse with different parameters λ.
Parameters λ AG SF SD MI MS_SSIM FMI_w SCD VIFF
1e0 6.03910 12.74940 46.70967 2.31879 0.91318 0.42281 1.80277 0.72934
1e1 5.99930 12.58878 46.29191 2.26808 0.91356 0.42102 1.80701 0.72750
1e2 6.19931 12.75545 46.88304 2.29560 0.92020 0.42521 1.81779 0.76083
1e3 6.19038 12.79886 46.90388 2.32346 0.92066 0.42618 1.84127 0.76068
1e4 6.12769 12.72406 46.54066 2.24862 0.91846 0.42361 1.83106 0.74579
TABLE I: The objective ablation comparisons on the TNO dataset for different parameters λ.
Fig. 5: The subjective ablation comparisons of Sandpath for the network framework. These images are, in sequence, the source images and the fused results of RSTBs with m=2, 4, 5, STLs with n=5, 7, 8, without residual connection, Only_row, Only_col, and the proposed SwinFuse.
Models Parameters AG SF SD MI MS_SSIM FMI_w SCD VIFF
RSTB
Number
2 6.12630 12.78277 46.64155 2.28553 0.92034 0.42579 1.84013 0.76044
3 6.19038 12.79886 46.90388 2.32346 0.92066 0.42618 1.84127 0.76068
4 6.12761 12.83733 46.87729 2.30659 0.91471 0.42474 1.81931 0.73768
5 6.15424 12.78313 46.83528 2.33390 0.91943 0.42503 1.83268 0.75333
STL
Number
5 6.18744 12.78088 46.65097 2.29336 0.91996 0.42522 1.84265 0.75654
6 6.19038 12.79886 46.90388 2.32346 0.92066 0.42618 1.84127 0.76068
7 6.07115 12.80509 45.74293 2.21655 0.90937 0.42057 1.80394 0.72002
8 6.09115 12.78692 46.47507 2.23655 0.92072 0.42582 1.82394 0.75977
Residual
Connection
No 6.06763 12.77942 46.32200 2.32840 0.91464 0.42297 1.81308 0.73333
Yes 6.19038 12.79886 46.90388 2.32346 0.92066 0.42618 1.84127 0.76068
Fusion
Layer
Only_row 4.88061 10.26035 35.62175 2.32627 0.90146 0.42048 1.74474 0.49377
Only_col 4.87315 10.21141 34.90279 2.38689 0.90309 0.42013 1.74862 0.49817
Ours 6.19038 12.79886 46.90388 2.32346 0.92066 0.42618 1.84127 0.76068
TABLE II: The objective ablation comparisons on the TNO dataset for the network framework.

IV-B Ablation Study

IV-B1 The impact of parameter λ

In the design of the loss function, we apply a hyper-parameter λ to balance the difference in order of magnitude between L_ssim and L_1. Therefore, in this ablation study, we set λ to 1 (1e0), 10 (1e1), 100 (1e2), 1000 (1e3) and 10000 (1e4) to verify the impact of different parameter values on fusion performance. The above-mentioned TNO dataset and eight evaluation indexes are selected for experimental verification, and the corresponding optimal and suboptimal average values of the evaluation indexes are labeled in red and blue.

Fig. 6: The subjective comparisons of 2_men_in_front_of_house selected from TNO dataset. These images are source images, the results of MDLatLRR [5], IFCNN [24], DenseFuse [12], RFN-Nest [28], FusionGAN [13], GANMcC [33], PMGI [30], SEDRFuse [25] and Res2Fusion [26] and our SwinFuse, respectively.
Fig. 7: The subjective comparisons of Kaptein_1654 selected from TNO dataset. These images are source images, the results of MDLatLRR [5], IFCNN [24], DenseFuse [12], RFN-Nest [28], FusionGAN [13], GANMcC [33], PMGI [30], SEDRFuse [25] and Res2Fusion [26] and our SwinFuse, respectively.

Fig. 4 shows the subjective ablation comparisons of Sandpath for our SwinFuse with different parameter values. In terms of visual effect, the disparity between these results is very small, especially for the labeled typical target and local details. Meanwhile, the objective ablation results are presented in Table I. Our SwinFuse with λ=1e3 achieves the optimal values of SF, SD, MI, MS_SSIM, FMI_w and SCD, and the suboptimal values of AG and VIFF, which fall just behind those of λ=1e2. This ablation study demonstrates that our SwinFuse with λ=1e3 obtains the best performance, so we adopt this setting for the subsequent experimental verification.

IV-B2 The impact of the network framework

In the proposed network, our SwinFuse includes m residual Swin Transformer blocks (RSTBs) and n Swin Transformer layers (STLs). In this ablation study, we verify the impact of the numbers of RSTBs and STLs, as well as the residual connection, on fusion performance. We set m to 2, 3, 4, 5, and n to 5, 6, 7, 8. Moreover, in the fusion layer, we also verify the impact of the fusion strategy with only the row vector dimension (termed Only_row) and only the column vector dimension (termed Only_col). We select the TNO dataset for this ablation study.

Fig. 5 gives the subjective comparisons of Sandpath. These images are, in sequence, the source images and the fused results of RSTBs with m=2, 4, 5, STLs with n=5, 7, 8, without residual connection, Only_row, Only_col, and the proposed SwinFuse. We find that the visual disparities between the different numbers of RSTBs and STLs and the variant without residual connection are inconspicuous. Nevertheless, the visual effects of Only_row and Only_col are relatively poor and have low contrast. In contrast, the proposed SwinFuse with m=3 and n=6 retains conspicuous infrared targets and unambiguous visible details.

In addition, Table II presents the objective ablation comparisons. In the design of the RSTB and STL, when the number of RSTBs is 3, our SwinFuse achieves the optimal values of AG, SD, MS_SSIM, FMI_w, SCD and VIFF, and the suboptimal values of SF and MI. Moreover, when the number of STLs is 6, our SwinFuse achieves the optimal values of AG, SD, MI, FMI_w and VIFF, and the suboptimal values of SF, MS_SSIM and SCD. These results indicate that our SwinFuse has better fusion performance in the case of m=3 and n=6. Meanwhile, compared with the variants without residual connection, Only_row and Only_col, the proposed SwinFuse achieves all the best indexes except for MI. These ablation studies indicate that the proposed network architecture is reasonable and effective.

Fig. 8: The subjective comparisons of soldier_in_trench_1 selected from TNO dataset. These images are source images, the results of MDLatLRR [5], IFCNN [24], DenseFuse [12], RFN-Nest [28], FusionGAN [13], GANMcC [33], PMGI [30], SEDRFuse [25] and Res2Fusion [26] and our SwinFuse, respectively.
Fig. 9: The objective comparisons of eight evaluation indexes with other nine methods for the TNO dataset. The red dotted line represents our SwinFuse.

IV-C Experiments on TNO Dataset

To demonstrate the effectiveness of our SwinFuse, we carry out experiments on the TNO dataset. Three typical examples, 2_men_in_front_of_house, Kaptein_1654 and soldier_in_trench_1, are shown in Fig. 6-8. The traditional MDLatLRR has a limited feature extraction ability with its latent low-rank representation, and the obtained results suffer a serious loss of details and brightness. IFCNN and DenseFuse adopt a simple network and an average-addition fusion strategy, and the obtained results tend to preserve more visible details while losing the brightness of infrared targets. Moreover, RFN-Nest adopts a multi-scale deep framework with two-stage training, but its results still cannot retain the typical infrared targets. FusionGAN and GANMcC introduce the adversarial learning mechanism, and the obtained results retain sharpened infrared targets, while the visible details are severely fuzzy and lacking. PMGI designs gradient and intensity paths for information extraction, and the obtained fusion performance is improved to a certain degree. In addition, SEDRFuse and Res2Fusion accomplish comparatively superior visual effects, mainly because these two methods introduce attention-based fusion strategies and improve the feature representation capability to some extent. However, compared with the other methods, our SwinFuse obtains the best visual perception in maintaining visible details and infrared targets.

To better display the visual effect, we mark some representative targets and details with red and green boxes, and enlarge the marked local regions. In the results of Fig.6-8, for the pedestrian targets and typical details, such as the roof of the toolhouse, the trees and the edges of the trench, MDLatLRR, IFCNN, DenseFuse and RFN-Nest preserve these typical details from the visible images, while damaging the brightness of the pedestrian targets. Conversely, FusionGAN and GANMcC maintain the high-brightness pedestrians from the infrared images, but produce an over-sharpened effect with blurred edges; more seriously, the important details of the visible images are unclear or even missing. By contrast, PMGI, SEDRFuse and Res2Fusion obtain better results, although their retention ability is still limited. Our SwinFuse almost completely maintains the infrared targets with high brightness and unambiguous visible details. On the whole, our SwinFuse generates a better visual effect and achieves a higher contrast, conforming to human visual observation and benefiting other machine vision tasks.

Subsequently, we further verify the proposed SwinFuse through objective index evaluation. Fig.9 presents the objective comparison of eight evaluation indexes with the other nine methods on the TNO dataset. Notably, the horizontal coordinate represents the number of testing images, while the vertical coordinate denotes the average value of each evaluation index over the corresponding images. The results of our SwinFuse are represented by the red dotted line. From these results, our SwinFuse ranks first for AG, SF, SD, MS_SSIM, SCD and VIFF, and second for MI and FMI_w, which are inferior only to Res2Fusion. The objective comparisons indicate that our SwinFuse generates superior fusion performance over state-of-the-art traditional and deep learning based methods, which draws the same conclusion as the above subjective analysis.
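Two of the gradient-based indexes above, AG (average gradient) and SF (spatial frequency), can be computed directly from pixel differences. A minimal sketch under commonly used definitions follows; the function names are our own, and exact formulations vary slightly between papers:

```python
import numpy as np

def average_gradient(img):
    """AG: mean magnitude of local gradients; higher suggests sharper detail."""
    img = img.astype(np.float64)
    gx = img[:-1, 1:] - img[:-1, :-1]   # horizontal differences
    gy = img[1:, :-1] - img[:-1, :-1]   # vertical differences
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))

def spatial_frequency(img):
    """SF: combines row and column frequencies; higher suggests richer texture."""
    img = img.astype(np.float64)
    rf = np.sqrt(np.mean((img[:, 1:] - img[:, :-1]) ** 2))  # row frequency
    cf = np.sqrt(np.mean((img[1:, :] - img[:-1, :]) ** 2))  # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)
```

Both metrics are no-reference: a perfectly flat image scores zero, and any added edge content raises the score, which is why they reward fused images that keep visible texture.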

Fig. 10: The subjective comparisons of FLIR_08835 selected from Roadscene dataset. These images are source images, the results of MDLatLRR [5], IFCNN [24], DenseFuse [12], RFN-Nest [28], FusionGAN [13], GANMcC [33], PMGI [30], SEDRFuse [25] and Res2Fusion [26] and our SwinFuse, respectively.
Fig. 11: The subjective comparisons of FLIR_08094 selected from Roadscene dataset. These images are source images, the results of MDLatLRR [5], IFCNN [24], DenseFuse [12], RFN-Nest [28], FusionGAN [13], GANMcC [33], PMGI [30], SEDRFuse [25] and Res2Fusion [26] and our SwinFuse, respectively.
Fig. 12: The objective comparisons of eight evaluation indexes with other nine methods for the Roadscene dataset. The red dotted line represents our SwinFuse.
Fig. 13: The subjective comparisons of video_1036 selected from OTCBVS dataset. These images are source images, the results of MDLatLRR [5], IFCNN [24], DenseFuse [12], RFN-Nest [28], FusionGAN [13], GANMcC [33], PMGI [30], SEDRFuse [25] and Res2Fusion [26] and our SwinFuse, respectively.
Fig. 14: The objective comparisons of eight evaluation indexes with other nine methods for the OTCBVS dataset. The red dotted line represents our SwinFuse.

Iv-D Experiments on Roadscene Dataset

Next, we further test the proposed SwinFuse on the Roadscene dataset, from which 40 image pairs are selected for testing. Fig.10 and Fig.11 show the subjective comparisons of two examples, i.e., FLIR_08835 and FLIR_08094. For the typical pedestrian targets, our SwinFuse achieves higher brightness in the fused image than the other nine methods. Similarly, for the typical details, i.e., the street lamp and the car, the results of our SwinFuse are complete and clear. Meanwhile, Fig.12 shows the objective experimental results. Our SwinFuse ranks first for AG, SF, SD, MS_SSIM, FMI_w, SCD and VIFF, and achieves the suboptimal value of MI, trailing only Res2Fusion. The subjective and objective experiments demonstrate that our SwinFuse is superior to the other methods.

In addition, from these objective evaluation indexes, the largest SCD and FMI_w indicate that our fused results retain features most similar to the source images. The largest SF and MS_SSIM manifest that our results preserve abundant structural texture and edge details. This is because our SwinFuse has a strong feature extraction capacity with a pure transformer encoding backbone, and the global features may possess a more favorable representation ability than local features. Moreover, the largest AG, SD and VIFF indicate that our fused images acquire higher definition and contrast. On the one hand, our SwinFuse introduces a self-attention mechanism, and the extracted attention maps focus more on the salient information of the source images. On the other hand, we develop a feature fusion strategy based on the L_1-norm, which makes the fused images well retain competitive infrared brightness information and distinct visible texture details. However, the MI of our method is only competitive; a possible reason is that the proposed row and column vector normalization leads to a feature tradeoff in simultaneously retaining infrared thermal features and visible structural details. Even so, under multi-index evaluation, our SwinFuse achieves the best overall fusion performance.
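The row/column L_1-norm idea can be illustrated with a small sketch. This is our own simplified reading of the strategy, not the paper's implementation: the function names, the max-normalization, and the way the two views are combined are all assumptions. It fuses two transformer token (sequence) matrices of shape (num_tokens, channels):

```python
import numpy as np

def l1_fusion(seq_ir, seq_vis, eps=1e-8):
    """Hypothetical sketch: fuse two token matrices by L1-norm activity
    levels measured along both the row (token) and column (channel) axes."""
    def activity(seq):
        row = np.sum(np.abs(seq), axis=1, keepdims=True)  # per-token L1 norm
        col = np.sum(np.abs(seq), axis=0, keepdims=True)  # per-channel L1 norm
        # normalize each view to [0, 1] and broadcast-add into one activity map
        return row / (row.max() + eps) + col / (col.max() + eps)

    a_ir, a_vis = activity(seq_ir), activity(seq_vis)
    w_ir = a_ir / (a_ir + a_vis + eps)   # normalized fusion weight per element
    w_vis = 1.0 - w_ir
    return w_ir * seq_ir + w_vis * seq_vis
```

Because the weights are driven by L_1 activity, tokens with strong infrared responses (bright targets) and channels with strong visible responses (texture) each pull the fused sequence toward the more active source, which matches the retention behavior described above.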

| Methods   | MDLatLRR | IFCNN | DenseFuse | RFN-Nest | FusionGAN | GANMcC | PMGI  | SEDRFuse | Res2Fusion | Ours  |
|-----------|----------|-------|-----------|----------|-----------|--------|-------|----------|------------|-------|
| TNO       | 7.941    | 0.046 | 0.086     | 0.018    | 2.015     | 4.211  | 0.544 | 2.676    | 18.86      | 0.215 |
| Roadscene | 38.39    | 0.022 | 0.041     | 0.086    | 1.093     | 2.195  | 0.293 | 1.445    | 4.267      | 0.129 |
| OTCBVS    | 19.56    | 0.011 | 0.023     | 0.052    | 0.491     | 1.017  | 0.126 | 0.803    | 1.337      | 0.097 |
TABLE III: The computational efficiency comparisons of different methods on three datasets (Unit: second).

Iv-E Experiments on OTCBVS Dataset

The public OTCBVS benchmark includes 12 video and image datasets, from which 31 image pairs selected from the OSU color-thermal database are used to demonstrate the generalization ability of our SwinFuse. Fig.13 shows a subjective comparison on video_1036. Compared with the other methods, our SwinFuse obtains a better intensity distribution with a clear edge for the typical pedestrian target, and it also achieves more realistic scene detail for the parking lock. From all the above subjective comparisons, our SwinFuse has superior fusion performance in maintaining infrared intensity distribution and visible texture details. Moreover, Fig.14 presents the objective comparison on the OTCBVS dataset: our SwinFuse ranks first for AG, SF, SD, MS_SSIM, SCD and VIFF, and second and third for MI and FMI_w, respectively. In general, across the three datasets the optimal indexes obtained by our SwinFuse are almost consistent, and its fusion performance is superior to the nine compared methods.

In addition, we test the computational efficiency of our SwinFuse. Notably, all the methods are tested on the GPU except for the traditional MDLatLRR, which runs on the CPU. Table III gives the efficiency comparison. From these results, our computational efficiency lags behind IFCNN, DenseFuse and RFN-Nest, because these methods construct ordinary network architectures with only a few convolutional layers and use an average-addition fusion strategy. Nevertheless, our SwinFuse has a competitive computational efficiency, since it is based on the hierarchical Swin Transformer architecture and has linear computational complexity with respect to image size. Therefore, we can conclude that our SwinFuse offers better fusion performance, stronger generalization ability and competitive computational efficiency. Meanwhile, compared with CNNs, a pure transformer encoding backbone may be a more effective way to extract deep features for fusion tasks.
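Per-image timings like those in Table III come down to a simple wall-clock benchmark around the model call. A minimal, framework-agnostic sketch (the helper name and warm-up/run counts are our own choices, not the paper's protocol) is shown below; for CUDA models, the timed callable should end with an explicit `torch.cuda.synchronize()`, since GPU kernels execute asynchronously and would otherwise be timed before they finish:

```python
import time

def mean_runtime(fn, *args, warmup=3, runs=10):
    """Average wall-clock time of fn(*args) in seconds over several runs.
    Warm-up iterations amortize one-time costs (JIT, cache fills, kernel
    launch setup) so the steady-state speed is measured."""
    for _ in range(warmup):
        fn(*args)
    start = time.perf_counter()
    for _ in range(runs):
        fn(*args)
    return (time.perf_counter() - start) / runs
```

For example, `mean_runtime(lambda: fuse(ir, vis))` with a hypothetical `fuse` function would give the per-pair figure reported in the table (divided over the dataset).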

V Conclusion

In this paper, we present a residual Swin Transformer fusion network for infrared and visible images. Our SwinFuse consists of three main parts: global feature extraction, a fusion layer and feature reconstruction. In particular, we build a fully attentional feature encoding backbone to model long-range dependency, which adopts a pure transformer without convolutional neural networks. The obtained global features have a stronger representation ability than the local features extracted by convolutional operations. Moreover, we design a novel feature fusion strategy based on the L_1-norm for sequence matrices, measuring the activity of source images from the row and column vector dimensions, which can well retain competitive infrared brightness and distinct visible details.

We conduct extensive experiments on the TNO, Roadscene and OTCBVS datasets, and compare our method with nine other state-of-the-art traditional and deep learning methods. The experimental results demonstrate that our SwinFuse is a simple and strong fusion baseline that achieves remarkable fusion performance with strong generalization ability and competitive computational efficiency, transcending the other methods in subjective observations and objective comparisons. In future work, we will replace the hand-crafted fusion strategy to develop SwinFuse into an end-to-end model, and extend it to other fusion tasks such as multi-focus and multi-exposure image fusion.

References

  • [1] Z. Feng, J. Lai and X. Xie, “Learning modality-specific representations for visible-infrared person re-identification,” IEEE Trans. Image Process., vol. 29, pp. 579-590, 2020.
  • [2] X. Zhang, P. Ye, H. Leung, K. Gong and G. Xiao, “Object fusion tracking based on visible and infrared images: A comprehensive review,” Inf. Fusion, vol. 63, pp. 166-187, 2020.
  • [3] W. Zhou, Y. Zhu, J. Lei, J. Wan and L. Yu, “CCAFNet: Crossflow and cross-scale adaptive fusion network for detecting salient objects in RGB-D images,” IEEE Trans. Multimedia, 2021. doi: 10.1109/TMM.2021.3077767.
  • [4] Z. Wang, F. Yang, Z. Peng, L. Chen, and L. Ji, “Multi-sensor image enhanced fusion algorithm based on NSST and top-hat transformation,” Optik, vol. 126, no. 23, pp. 4184-4190, 2015.
  • [5] H. Li, X. Wu and J. Kittler, “MDLatLRR: A novel decomposition method for infrared and visible image fusion,” IEEE Trans. Image Process., vol. 29, pp. 4733-4746, 2020.
  • [6] Z. Wang, J. Xu, X. Jiang, and X. Yan, “Infrared and visible image fusion via hybrid decomposition of NSCT and morphological sequential toggle operator,” Optik, vol. 201, no. 1, 2020, Art no. 163497.
  • [7] Y. Yang, Y. Zhang, S. Huang, Y. Zuo and J. Sun, “Infrared and visible image fusion using visual saliency sparse representation and detail injection model,” IEEE Trans. Instrum. Meas., vol.  70, pp. 1-15, 2021, Art no. 5001715.
  • [8] J. Ma, Z. Zhou, B. Wang, and H. Zong, “Infrared and visible image fusion based on visual saliency map and weighted least square optimization,” Infr. Phys. Technol., vol. 82, pp. 8-17, 2017.
  • [9] Y. Yang, W. Zhang, S. Huang, W. Wan, J. Liu and X. Kong, “Infrared and visible image fusion based on dual-kernel side window filtering and S-shaped curve transformation,” IEEE Trans. Instrum. Meas., vol. 71, pp. 1-15, 2022, Art no. 5001915.
  • [10] P. Hu, F. Yang, H. Wei, L. Ji and D. Liu, “A multi-algorithm block fusion method based on set-valued mapping for dual-modal infrared images,” Infr. Phys. Technol., vol. 102, 2019, Art no. 102977.
  • [11] H. Zhang, H. Xu, X. Tian, J. Jiang and J. Ma, “Image fusion meets deep learning: A survey and perspective,” Inf. Fusion, vol. 76, pp. 323-336, 2021.
  • [12] H. Li and X. Wu, “Densefuse: A fusion approach to infrared and visible images,” IEEE Trans. Image Process., vol. 28, no. 5, pp. 2614-2623, 2019.
  • [13] J. Ma, W. Yu, P. Liang, C. Li and J. Jiang, “Fusiongan: A generative adversarial network for infrared and visible image fusion,” Inf. Fusion, vol. 48, pp. 11-26, 2019.
  • [14] Z. Liu, Y. Liu, Y. Cao, H. Hu, Y. Wei, Z. Zhang, S. Lin and B. Guo, “Swin Transformer: Hierarchical Vision Transformer using Shifted Windows,” in Proc. IEEE/CVF Int. Conf. Comput. Vis. (ICCVW), 2021, pp. 9992-10002.
  • [15] A. Toet (2014). TNO Image Fusion Dataset. Figshare. [Online]. Available: https://figshare.com/articles/TN_Image_Fusion_Dataset/1008029.
  • [16] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N Gomez, L. Kaiser, and I. Polosukhin, “Attention is all you need,” preprint arXiv:1706.03762, 2017.
  • [17] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, J. Uszkoreit and N. Houlsby, “An image is worth 16x16 words: Transformers for image recognition at scale,” in Proc. Int. Conf. Learn. Represent. (ICLR), 2021.
  • [18] H. Dong, L. Zhang and B. Zou, “Exploring vision transformers for polarimetric SAR image classification,” IEEE Trans. Geosci. Remote Sens., vol. 60, pp. 1-15, 2022, Art no. 5219715.
  • [19] T. Li, Z. Zhang, L. Pei and Y. Gan, “HashFormer: Vision Transformer based deep hashing for image retrieval,” IEEE Signal Process. Lett., 2022. doi: 10.1109/LSP.2022.3157517.
  • [20] J. Liang, J. Cao, G. Sun, K. Zhang, L. Van Gool and R. Timofte, “SwinIR: Image Restoration Using Swin Transformer,” in Proc. IEEE/CVF Int. Conf. Comput. Vis. (ICCVW), 2021, pp. 1833-1844.
  • [21] L. Lin, H. Fan, Y. Xu and H. Ling, “SwinTrack: A Simple and Strong Baseline for Transformer Tracking,” preprint arXiv:2112.00995, 2021.
  • [22] H. Xu, X. Wang and J. Ma, “DRF: Disentangled representation for visible and infrared image fusion,” IEEE Trans. Instrum. Meas., vol. 70, pp. 1-13, 2021, Art no. 5006713.
  • [23] J. Ma, L. Tang, M. Xu, H. Zhang and G. Xiao, “STDFusionNet: An infrared and visible image fusion network based on salient target detection,” IEEE Trans. Instrum. Meas., vol. 70, pp. 1-13, 2021, Art no. 5009513.
  • [24] Y. Zhang, Y. Liu, P. Sun, H. Yan, X. Zhao, and L. Zhang, “Ifcnn: A general image fusion framework based on convolutional neural network,” Inf. Fusion, vol. 54, pp. 99-118, 2020.
  • [25] L. Jiang, X. Yang, Z. Liu, G. Jeon, M. Gao and D. Chisholm, “SEDRFuse: A symmetric encoder-decoder with residual block network for infrared and visible image fusion,” IEEE Trans. Instrum. Meas., vol. 70, pp. 1-15, 2021, Art no. 5002215.
  • [26] Z. Wang, Y. Wu, J. Wang, J. Xu and W. Shao, “Res2Fusion: Infrared and visible image fusion based on dense Res2net and double non-local attention models,” IEEE Trans. Instrum. Meas., vol. 71, pp. 1-12, 2022, Art no. 5005012.
  • [27] Z. Wang, J. Wang, Y. Wu, J. Xu and X. Zhang, “UNFusion: A unified multi-scale densely connected network for infrared and visible image fusion,” IEEE Trans. Circuits Syst. Video Technol., 2021. doi: 10.1109/TCSVT.2021.3109895.
  • [28] H. Li, X. Wu and J. Kittler, “RFN-Nest: An end-to-end residual fusion network for infrared and visible images,” Inf. Fusion, vol. 73, pp. 72-86, 2021.
  • [29] H. Li, X. Wu and T. Durrani, “Nestfuse: An infrared and visible image fusion architecture based on nest connection and spatial/channel attention models,” IEEE Trans. Instrum. Meas., vol. 69, no. 12, pp. 9645-9656, 2020.
  • [30] H. Zhang, H. Xu, Y. Xiao, X. Guo and J. Ma, “Rethinking the image fusion: A fast unified image fusion network based on proportional maintenance of gradient and intensity,” in Proc. AAAI Conf. Artif. Intell., vol. 34, no. 7, pp. 12797-12804, 2020.
  • [31] H. Xu, J. Ma, J. Jiang, X. Guo and H. Ling, “U2fusion: A unified unsupervised image fusion network,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 44, no. 1, pp. 502-518, 2022.
  • [32] J. Ma, H. Xu, J. Jiang, X. Mei and X. Zhang, “DDcGAN: A dual-discriminator conditional generative adversarial network for multi-resolution image fusion,” IEEE Trans. Image Process., vol. 29, pp. 4980-4995, 2020.
  • [33] J. Ma, H. Zhang, Z. Shao, P. Liang and H. Xu, “GANMcC: A generative adversarial network with multiclassification constraints for infrared and visible image fusion,” IEEE Trans. Instrum. Meas., vol. 70, pp. 1-14, 2021. Art no. 5005014.
  • [34] Y. Yang, J. Liu, S. Huang, W. Wan, W. Wen and J. Guan, “Infrared and visible image fusion via texture conditional generative adversarial network,” IEEE Trans. Circuits Syst. Video Technol., vol. 31, no. 12, pp. 4771-4783, 2021.
  • [35] J. Li, H. Huo, C. Li, R. Wang, C. Sui and Z. Liu, “Multigrained attention network for infrared and visible image fusion,” IEEE Trans. Instrum. Meas., vol. 70, pp. 1-12, 2021.
  • [36] J. Li, H. Huo, C. Li, R. Wang and Q. Feng, “AttentionFGAN: Infrared and visible image fusion using attention-based generative adversarial networks,” IEEE Trans. Multimedia, vol. 23, pp. 1383-1396, 2021.
  • [37] L. Qu, S. Liu, Y. Xiao, M. Wang and Z. Song, “TransMEF: A transformer-based multi-exposure image fusion framework using self-supervised multi-task learning,” in Proc. AAAI Conf. Artif. Intell., 2022.
  • [38] T. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár and C. L. Zitnick, “Microsoft coco: Common objects in context,” in Computer Vision – ECCV 2014, D. Fleet, T. Pajdla, B. Schiele, and T. Tuytelaars, Eds.   Cham: Springer International Publishing, 2014, pp. 740-755.
  • [39] H. Xu (2020). Roadscene Database. [Online]. Available: https://github.com/hanna-xu/RoadScene.
  • [40] S. Ariffin (2016). OTCBVS Database. [Online]. Available: http://vcipl-okstate.org/pbvs/bench/.
  • [41] Z. Liu, E. Blasch, Z. Xue, J. Zhao, R. Laganiere and W. Wu, “Objective assessment of multiresolution image fusion algorithms for context enhancement in night vision: A comparative study,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 34, no. 1, pp. 94-109, 2011.
  • [42] Y. Rao, “In-fibre bragg grating sensors,” Meas. Sci. Technol., vol. 8, no. 4, pp. 355-375, 1997.
  • [43] K. Ma, K. Zeng and Z. Wang, “Perceptual quality assessment for multi-exposure image fusion,” IEEE Trans. Image Process., vol. 24, no. 11, pp. 3345-3356, 2015.
  • [44] G. Qu, D. Zhang and P. Yan, “Information measure for performance of image fusion,” Electron. Lett., vol. 38, no. 7, pp. 313-315, 2002.
  • [45] A. Eskicioglu and P. Fisher, “Image quality measures and their performance,” IEEE Trans. Commun., vol. 43, no. 12, pp. 2959-2965, 1995.
  • [46] V. Aslantas and E. Bendes, “A new image quality metric for image fusion: The sum of the correlations of differences,” AEU-Int. J. Electron. C., vol. 69, no. 12, pp. 1890-1896, 2015.
  • [47] Y. Han, Y. Cai, Y. Cao, and X. Xu, “A new image fusion performance metric based on visual information fidelity,” Inf. Fusion, vol. 14, no. 2, pp. 127-135, 2013.