PPT Fusion: Pyramid Patch Transformer for a Case Study in Image Fusion

07/29/2021 · by Yu Fu, et al.

The Transformer architecture has achieved rapid development in recent years, outperforming the CNN architectures in many computer vision tasks, such as the Vision Transformers (ViT) for image classification. However, existing visual transformer models aim to extract semantic information for high-level tasks such as classification and detection, distorting the spatial resolution of the input image, thus sacrificing the capacity in reconstructing the input or generating high-resolution images. In this paper, therefore, we propose a Patch Pyramid Transformer (PPT) to effectively address the above issues. Specifically, we first design a Patch Transformer to transform the image into a sequence of patches, where transformer encoding is performed for each patch to extract local representations. In addition, we construct a Pyramid Transformer to effectively extract the non-local information from the entire image. After obtaining a set of multi-scale, multi-dimensional, and multi-angle features of the original image, we design the image reconstruction network to ensure that the features can be reconstructed into the original input. To validate the effectiveness, we apply the proposed Patch Pyramid Transformer to the image fusion task. The experimental results demonstrate its superior performance against the state-of-the-art fusion approaches, achieving the best results on several evaluation indicators. The underlying capacity of the PPT network is reflected by its universal power in feature extraction and image reconstruction, which can be directly applied to different image fusion tasks without redesigning or retraining the network.


I Introduction

In recent years, self-attention models have received wide attention with promising performance in many visual tasks. In NIPS 2017, Vaswani et al. proposed the Transformer architecture, originally designed for NLP tasks; examples include BERT [4], which uses the Transformer as an encoder, GPT [36], which uses the Transformer as a decoder, and Transformer-XL [3], which addresses the problem of long sequences. The Transformer is also widely used in recommender systems to improve performance, such as BST [2] for behavioral sequence modeling, AutoInt [38] for feature combination in CTR (click-through-rate) prediction models, and the re-ranking model PRM [33]. Recently, in the field of computer vision, many excellent methods have demonstrated that the Transformer can obtain promising performance, such as ViT [5] for image classification, DETR [48] for object detection, SETR [46] for semantic segmentation, Point Transformer [32] for 3D point cloud processing, and TransGAN [12] for image generation.

The seminal work ViT proves that a pure Transformer network can achieve state-of-the-art performance in image classification. Specifically, ViT splits the input into 14×14 or 16×16 patches. Each patch is flattened into a vector and acts as a token for the Transformer. ViT is the first fully-transformer model to extract image features, but it still suffers from the following limitations:

1) ViT requires a huge dataset such as JFT-300M for pre-training to fully explore the relationships between pixels. It cannot obtain satisfactory results with a mid-size dataset such as ImageNet.

2) The Transformer structure in ViT extracts non-local interactions from the entire image, sacrificing its ability to learn local patterns such as textures and edges. We believe that local patterns are essential for visual recognition tasks, as demonstrated by existing CNN techniques: in principle, CNNs extract various shallow features and then obtain semantic information through multi-layer nonlinear stacking. Therefore, it is difficult to generate high-resolution images from the features obtained by ViT, since such generation requires more low-level representations.

Based on this, we design a new fully-transformer model to address the above issues:

1) To extract low-level visual representations, we divide the image into several patches of appropriate size. For each patch, a transformer is used to calculate the global correlation features of all its pixels, from which the source image can be reconstructed. Therefore, we obtain a set of low-level features of the image that share similar characteristics. We coin this module the "Patch Transformer".

2) To explicitly extract the global patterns of the image, we design a down-sampling pyramid to achieve local-to-global perception. Specifically, we down-sample the input image once, so that its size becomes a quarter of the original, and perform the Patch Transformer on the down-sampled image. We repeat the down-sampling and Patch Transformer steps until the image is the same size as a patch. The obtained features corresponding to different scales are then up-sampled to the original size. We name this operation the "Pyramid Transformer".

Based on the Patch Transformer and Pyramid Transformer, we develop a Pyramid Patch Transformer (PPT) image feature extraction model. The PPT model can fully extract the features of the input image, including local context and global saliency. We further design a reconstruction auto-encoder network to ensure that the extracted features can reconstruct the input image.

To verify the effectiveness of the proposed PPT model, we apply this feature extraction approach to the image fusion task. The input images are obtained by different kinds of sensors, such as infrared and visible light images, medical CT and X-ray images, images with different focal settings, images with different exposures, and so on. These multi-source images reflect different physical attributes of the same scene. An example of the fusion of a visible light image and an infrared image is shown in Fig. 1.

Fig. 1: Image Fusion. (a) is a visible light image. (b) is an infrared image. (c) is a fused image.

Image fusion approaches based on deep learning can be divided into two categories: 1) auto-encoder-based methods use an encoder to extract features into a latent space for feature fusion, and the fused features are then fed to a decoder to obtain the fused image [35, 19, 17, 7]; 2) end-to-end fusion networks design a suitable structure and loss function to realize end-to-end image generation [28, 27, 6].

The Pyramid Patch Transformer model can extract a variety of features from the input image. We use the PPT model as the feature extraction network and then design a feature decoder for feature compression and image reconstruction. In summary, our contributions are three-fold:

  • An improved Patch Transformer to extract low-level image representations without loss of resolution, integrating the interactions among raw pixels.

  • A novel Pyramid Transformer to reflect the global relationships of multi-scale patches, achieving local-to-global perception.

  • A new Pyramid Patch Transformer as a general feature extraction module, which is successfully applied to image fusion tasks with superior performance against the state-of-the-art methods.

II Related Work

II-A Transformer

A Transformer encoder is composed of a multi-head self-attention (MSA) layer and a multi-layer perceptron (MLP) block. Layer Normalization (LN) is applied before each MSA and MLP layer, together with residual connections. An essential design of the Transformer is that the input vectors are combined with position embeddings to preserve localization clues for each vector. To introduce the basic Transformer to visual tasks, ViT splits the input image $x \in \mathbb{R}^{H \times W \times C}$ into a sequence of patches $x_p \in \mathbb{R}^{N \times (P^2 \cdot C)}$, where $(H, W)$ is the resolution of the image, $C$ is the number of channels, $P$ is the width/height of each patch, and $N = HW/P^2$ is the number of patches. ViT maps these patches to $D$-dimensional features after the forward pass of the network. The output obtained at the class token is used as the classification result. A ViT network structure for image classification is as follows,

$$
\begin{aligned}
z_0 &= [x_{\mathrm{class}};\, x_p^1 E;\, x_p^2 E;\, \dots;\, x_p^N E] + E_{pos},\\
z'_{\ell} &= \mathrm{MSA}(\mathrm{LN}(z_{\ell-1})) + z_{\ell-1}, \qquad \ell = 1,\dots,L,\\
z_{\ell} &= \mathrm{MLP}(\mathrm{LN}(z'_{\ell})) + z'_{\ell}, \qquad\;\, \ell = 1,\dots,L,\\
y &= \mathrm{LN}(z_L^0).
\end{aligned}
\qquad (1)
$$

II-B Image Fusion

In ICCV 2017, Prabhakar et al. proposed the DeepFuse [35] approach, introducing the auto-encoder structure to the multi-exposure image fusion task. Specifically, DeepFuse trains an auto-encoder network in which the encoder extracts features of the image. After an addition fusion strategy is performed in the middle layer, the fused features are fed to the decoder to obtain the fused image. Similar structures are further developed by DenseFuse [19] and IFCNN [45]. The general steps of these auto-encoder-based image fusion methods are as follows:

$$
F_1 = E(I_1), \quad F_2 = E(I_2), \qquad F_f = \mathcal{F}(F_1, F_2), \qquad I_f = D(F_f), \qquad (2)
$$
where $E$ and $D$ denote the encoder and decoder, $\mathcal{F}$ is the fusion strategy, $I_1, I_2$ are the source images, and $I_f$ is the fused image.
Fig. 2: Illustration of the Patch Transformer. The first step is to split the input into multiple patches: a sliding window is used to split the image into a sequence of patches, each patch is reshaped into a 1-D vector, and an MLP is used to extend the channels. The second step is to use the multi-layer Transformer module to obtain the non-local representations within each patch. The third step is to reconstruct the two-dimensional patch from the learned representations.

III Pyramid Patch Transformer

For high-resolution images, most existing transformer approaches split the image into several patches. Suppose the resolution of an image is $H \times W$ with $C$ channels; it is split into $N$ vectors with each patch being $P \times P$, where $N = HW/P^2$. Therefore, the resulting features of size $N \times D$ (for some embedding dimension $D$) are multi-channel features of the original image after a special down-sampling and non-linear transformation. However, the original image information is mapped into a low-dimensional feature space that carries semantic meaning and discrimination, so it is difficult to reconstruct the original image from the obtained Transformer features.

To preserve pixel details, a straightforward solution is to set the size of the patch to 1. This means that the Transformer is performed at the original resolution of the image, which causes resource problems: if a transformer is applied directly to all pixels of an $H \times W$ image, at least one attention matrix with $(HW) \times (HW)$ entries is generated, which requires huge memory. To overcome this problem, we propose the Pyramid Patch Transformer, a network framework that uses a fully-transformer design for image feature extraction.
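As a back-of-the-envelope illustration of this memory problem (the 256×256 resolution and float32 storage are our own example values, not figures from the paper):

```python
# Memory of a single pixel-level self-attention matrix for one attention head.
H, W = 256, 256                       # illustrative image size
num_tokens = H * W                    # one token per pixel: 65,536
attn_entries = num_tokens ** 2        # the attention matrix is tokens x tokens
bytes_fp32 = attn_entries * 4         # 4 bytes per float32 entry
print(f"{attn_entries:,} entries, about {bytes_fp32 / 1024**3:.0f} GiB per head")
# -> 4,294,967,296 entries, about 16 GiB per head, before activations or gradients
```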

III-A Patch Transformer

The Patch Transformer module is designed to alleviate the memory consumption caused by general transformers, whose number of tokens becomes excessive when processing large-resolution images. Each Patch Transformer module contains three steps: 1) Trans to Patches (T2P), 2) Transformer, and 3) Reconstruct. The Patch Transformer process is shown in Fig. 2.

Fig. 3: Pyramid Transformer. The input image is down-sampled several times. Each down-sampled image is passed through a Patch Transformer to extract features at that scale. Finally, the features at all scales are up-sampled and concatenated to obtain the multi-scale feature set.

As shown in Fig. 2, given an input image $I \in \mathbb{R}^{H \times W \times C}$, $I$ is first split into $N$ patches by a sliding window of size $P \times P$, where $N = HW/P^2$. We name this operation T2P. We obtain a sequence of patch features $p_1, p_2, \dots, p_N$ with $p_i \in \mathbb{R}^{P \times P \times C}$. Each $p_i$ is reshaped into a 1-D sequence of pixel vectors, i.e., $p_i$ is reshaped into $\mathbb{R}^{P^2 \times C}$. To enrich the information, we use an MLP to increase the dimension of these vectors from $C$ to $D$ and obtain $X_i \in \mathbb{R}^{P^2 \times D}$, which is equivalent to a patch feature of size $P \times P$ with $D$ channels.

$$
\{p_1, \dots, p_N\} = \mathrm{T2P}(I), \qquad X_i = \mathrm{MLP}\big(\mathrm{Reshape}(p_i)\big), \quad i = 1, \dots, N \qquad (3)
$$

We set a learnable position embedding vector $E_{pos}$ and extend it to the same dimension as $X_i$, i.e., $E_{pos} \in \mathbb{R}^{P^2 \times D}$, enabling the network to learn the location clues among the embedding vectors. Therefore, we can obtain the feature $Z_i^0$.

$$
Z_i^0 = X_i + E_{pos} \qquad (4)
$$

Then we apply the Transformer to each patch feature $Z_i^0$, $i = 1, \dots, N$. The Transformer encoder module can be applied several times in the network. Each Transformer module is divided into two steps, a multi-head self-attention (MSA) layer and a multi-layer perceptron (MLP) block. A standard Transformer module with LayerNorm (LN) and residual structure is shown in Fig. 4.

$$
Z_i^{\prime\,\ell} = \mathrm{MSA}\big(\mathrm{LN}(Z_i^{\ell-1})\big) + Z_i^{\ell-1},
\qquad
Z_i^{\ell} = \mathrm{MLP}\big(\mathrm{LN}(Z_i^{\prime\,\ell})\big) + Z_i^{\prime\,\ell},
\qquad \ell = 1, \dots, L \qquad (5)
$$
Fig. 4: Transformer module.

We restore the encoded patches according to the order of the T2P split. The corresponding output $F$ is a set of features mapped from the original image to the latent space.

$$
F = \mathrm{Restore}\big(Z_1^L, Z_2^L, \dots, Z_N^L\big) \qquad (6)
$$
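For concreteness, a minimal PyTorch sketch of the three steps above (T2P, channel-expanding MLP with a learnable position embedding, per-patch Transformer encoding, and reconstruction). The class name, the default sizes, and the use of torch.nn.TransformerEncoder (with the pre-norm option, PyTorch ≥ 1.10) are our own assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn

class PatchTransformer(nn.Module):
    """Sketch of the Patch Transformer: self-attention over the pixels of each patch."""
    def __init__(self, in_ch=1, dim=64, patch=16, depth=4, heads=4):
        super().__init__()
        self.patch = patch
        self.embed = nn.Linear(in_ch, dim)                           # MLP extending channels C -> D
        self.pos = nn.Parameter(torch.zeros(1, patch * patch, dim))  # learnable position embedding
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, dim_feedforward=4 * dim,
                                           activation="gelu", batch_first=True, norm_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)  # Equ. (5), repeated `depth` times

    def forward(self, x):                                  # x: (B, C, H, W), H and W divisible by P
        B, C, H, W = x.shape
        P = self.patch
        # T2P (Equ. 3): sliding window splits the image into non-overlapping P x P patches;
        # within each patch, every pixel becomes one token.
        t = x.unfold(2, P, P).unfold(3, P, P)              # (B, C, H/P, W/P, P, P)
        t = t.permute(0, 2, 3, 4, 5, 1).reshape(-1, P * P, C)
        z = self.embed(t) + self.pos                       # Equ. (3) and (4)
        z = self.encoder(z)                                # per-patch Transformer encoding
        # Reconstruct (Equ. 6): put the patches back in their original spatial order.
        D = z.shape[-1]
        z = z.reshape(B, H // P, W // P, P, P, D).permute(0, 5, 1, 3, 2, 4)
        return z.reshape(B, D, H, W)
```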

III-B Pyramid Transformer

Using the above Patch Transformer, each input image is split into several patches. The representations of each patch are only related to the pixels within that patch, without considering the long-range dependencies between pixels in the entire image. To address this issue, we follow the multi-scale approach and construct a pyramid structure as follows. First, the image $I$ is down-sampled once to obtain an image $I_1$ of size $\frac{H}{2} \times \frac{W}{2}$. The corresponding Patch Transformer is applied to $I_1$ to get the representations $F_1'$. Then $F_1'$ is up-sampled to obtain $F_1$ with the same size as the input image $I$.

$$
I_1 = \mathrm{Down}(I), \qquad F_1' = \mathrm{PT}(I_1), \qquad F_1 = \mathrm{Up}(F_1') \qquad (7)
$$

1) Continue to down-sample the image to obtain $I_k$;

2) Use the Patch Transformer to extract the representations $F_k'$;

3) Up-sample $F_k'$ to the original resolution to get $F_k$.

We repeat the above operations recursively until the down-sampled image can be split into a single patch, performing them $n$ times in total. Suppose the image is of size $H \times W$ and the split patch is of size $P \times P$; for a square input we obtain $n = \log_2(H / P)$. After concatenating all the features at the different scales, we get a set of multi-scale features $F = [F_1, F_2, \dots, F_n]$, as shown in Fig. 3.
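A matching sketch of the pyramid loop (reusing the PatchTransformer class from the previous listing): down-sample, apply a Patch Transformer at each scale, up-sample the resulting features to the input size, and concatenate. Bilinear interpolation and the fixed number of levels are assumptions; the paper only specifies that down-sampling continues until the image matches the patch size.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidTransformer(nn.Module):
    """Sketch: local-to-global perception via Patch Transformers at multiple scales."""
    def __init__(self, in_ch=1, dim=64, patch=16, levels=3):
        super().__init__()
        # One Patch Transformer per pyramid level; `levels` should be chosen so that the
        # coarsest image is roughly one patch in size (n = log2(H / P) for square inputs).
        self.pts = nn.ModuleList(PatchTransformer(in_ch, dim, patch) for _ in range(levels))

    def forward(self, x):                                  # x: (B, C, H, W)
        feats, cur = [], x
        for pt in self.pts:
            # Down-sample first: each level sees an image with a quarter of the previous area.
            # (Assumes H and W stay divisible by the patch size at every level.)
            cur = F.interpolate(cur, scale_factor=0.5, mode="bilinear", align_corners=False)
            f = pt(cur)                                    # Patch Transformer at this scale
            # Up-sample the features back to the input resolution before fusing scales.
            feats.append(F.interpolate(f, size=x.shape[-2:], mode="bilinear", align_corners=False))
        return torch.cat(feats, dim=1)                     # concatenated multi-scale features
```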

III-C Transformer Receptive Field

In general, CNNs perform well in the computer vision field. One important reason is the receptive field of the CNN, which can effectively capture local features in the image. As the size of the convolution kernel increases or the convolution layers grow deeper, each cell of the feature maps reflects a larger relevant spatial region of the original image.

Fig. 5: Transformer Receptive Field. a) is the mapping of the feature pixels in the middle of the Pyramid Transformer. b) shows the mapping in the bottom layer of the Pyramid Transformer.

As shown in Fig. 5 (a), when a Patch Transformer is performed on the image, each pixel of its feature is associated with all the pixels of the entire patch. The patch size can therefore be regarded as the receptive field of this Patch Transformer. After one down-sampling, the receptive field becomes four times larger, as the length and width of the image are both halved.

With the gradual deepening of the pyramid, the range of associated pixels expands from local areas to the global image, and the receptive field of the Patch Transformer grows accordingly. In particular, pixels that are close to each other contribute stronger correlations, while distant pixels preserve weak long-range dependencies. At the bottom Patch Transformer layer of the Pyramid Transformer, the receptive field is expanded to the whole image, as shown in Fig. 5 (b). The continuous down-sampling is designed precisely to obtain a large receptive field: a large receptive field on the original image captures more large-scale or global semantic features with less detail, while the upper layers of the Pyramid Transformer capture low-level details. Therefore, we believe that the Pyramid Transformer can extract both shallow and semantic information simultaneously.
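To make the growth of the receptive field explicit (a small worked relation implied by the halving argument above; the symbols P for the patch size and k for the pyramid level are our own):

```latex
% One token of the Patch Transformer at pyramid level k (k down-samplings) attends to a
% full P x P patch of the level-k image, which corresponds on the original image to
\mathrm{RF}(k) \;=\; \bigl(P \cdot 2^{k}\bigr) \times \bigl(P \cdot 2^{k}\bigr),
\qquad
\frac{\mathrm{area\ of\ }\mathrm{RF}(k+1)}{\mathrm{area\ of\ }\mathrm{RF}(k)} \;=\; 4 .
```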

III-D Network Architecture

We design an auto-encoder network for image reconstruction, as shown in Fig. 6. The encoder is composed of the Pyramid Transformer and the Patch Transformer. After a set of multi-scale features is obtained by encoding, the reconstructed image can be generated with the decoder.

Fig. 6: Network Architecture.

The decoder is an MLP composed of two fully connected (FC) layers, using the GELU [11] activation function and a Tanh activation at the output.

$$
\hat{I} = \mathrm{Tanh}\big(\mathrm{FC}_2(\mathrm{GELU}(\mathrm{FC}_1(F)))\big) \qquad (8)
$$

We use the mean square error (MSE) loss function as the reconstruction loss for the network.

$$
\mathcal{L}_{rec} = \mathrm{MSE}(\hat{I}, I) = \frac{1}{HW} \sum_{x,y} \big(\hat{I}(x,y) - I(x,y)\big)^2 \qquad (9)
$$
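A minimal sketch of Equ. (8) and (9) as we read them (two FC layers with GELU, a Tanh output, and an MSE reconstruction loss); the hidden width and the per-pixel application over the channel dimension are assumptions:

```python
import torch
import torch.nn as nn

class MLPDecoder(nn.Module):
    """Sketch: two fully connected layers (GELU in between, Tanh output), applied per pixel."""
    def __init__(self, feat_ch, hidden=128, out_ch=1):
        super().__init__()
        self.fc1 = nn.Linear(feat_ch, hidden)
        self.fc2 = nn.Linear(hidden, out_ch)

    def forward(self, f):                         # f: (B, feat_ch, H, W) multi-scale features
        f = f.permute(0, 2, 3, 1)                 # channels last so the FC layers act per pixel
        y = self.fc2(torch.nn.functional.gelu(self.fc1(f)))
        return torch.tanh(y).permute(0, 3, 1, 2)  # reconstructed image, Equ. (8)

# Reconstruction loss, Equ. (9): mean squared error between the input and its reconstruction.
reconstruction_loss = nn.MSELoss()
# loss = reconstruction_loss(decoder(encoder(img)), img)
```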

III-E Features Visualization

As shown in Fig. 7, we visualize the features extracted by the PPT module, selecting features from three different receptive fields in the Pyramid Transformer. In the first row, with the smallest receptive field, the features represent low-level information such as the edge contours and color distribution of the image. In the third row, with the largest receptive field, the features highlight the regions of the related objects, reflecting the semantic relationships among pixels.

Fig. 7: Features Visualization. Features in the first row are generated by the top layer in the Pyramid Transformer. They have the smallest receptive field and represent low-level features. Features in the third row are generated by the bottom layer of the Pyramid Transformer. They have the largest receptive fields and represent semantic features.

IV Pyramid Patch Transformer for Image Fusion

The primary purpose of the image fusion task is to generate a fused image that contains as much useful information as possible from the two source images. We use the designed PPT module to extract image features for image fusion tasks.

Fig. 8: Image Fusion Network. Input the multi-source images to the PPT module to obtain the multi-source features. The fused features are decoded by MLP to obtain the fused image.

IV-A Fusion Network Architecture

We take the fusion of infrared and visible light images as an example. We input the visible light image $I_{vis}$ and the infrared image $I_{ir}$ into the pre-trained PPT encoder module to obtain the features $F_{vis}$ and $F_{ir}$. The PPT encoder can map any image to a high-dimensional feature space, and the resulting features represent the input image from different angles such as edges, texture, color distribution, and semantic information. As we use a Siamese structure with the same PPT encoder module to extract features, $F_{vis}$ and $F_{ir}$ are mapped to the same feature space. We can therefore easily perform fusion operations on $F_{vis}$ and $F_{ir}$ across the channel dimension to get a new fused feature representation $F_f$.

Because the features obtained by the PPT encoder can be mapped back to the original space by the trained MLP decoder in Equ. (8), and the fused feature $F_f$ we calculate lies in the same feature space, we can reconstruct the fused features into a fused image through the same MLP decoder, as shown in Fig. 8.
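Putting the pieces together, a hedged sketch of the fusion pipeline of Fig. 8; `ppt_encoder`, `decoder`, and `fuse` stand for the pre-trained Siamese encoder, the trained MLP decoder, and whichever fusion strategy is chosen (they are placeholders, not names from the paper):

```python
import torch

@torch.no_grad()
def fuse_pair(img_a, img_b, ppt_encoder, decoder, fuse):
    """Fuse one image pair with a shared (Siamese) pre-trained PPT encoder.

    img_a, img_b : (1, C, H, W) tensors, e.g. a visible/infrared pair.
    fuse         : callable that combines two feature maps of identical shape.
    """
    f_a = ppt_encoder(img_a)      # both inputs are mapped into the same feature space
    f_b = ppt_encoder(img_b)      # because the two branches share the same weights
    f_fused = fuse(f_a, f_b)      # pixel-level fusion across the channel dimension
    return decoder(f_fused)       # reconstruct the fused image with the trained decoder
```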

IV-B Fusion Strategy

For different image fusion tasks, we choose different fusion strategies. All fusion strategies operate at the pixel level of features, as shown in Fig. 8.

For the fusion of infrared and visible light images, we consider that neither image should be obviously favored in feature selection, so we use the average strategy to obtain the fused features, $F_f = (F_{vis} + F_{ir}) / 2$.

For multi-focus image fusion tasks, since the focus regions of the images differ, the features of the focused areas are more prominent than those of the unfocused areas. We consider that the fused response should be the more prominent one, and therefore adopt the maximum value strategy, $F_f = \max(F_1, F_2)$.

In addition to these two common fusion strategies, we propose a Softmax strategy, which can be used for multiple image fusion tasks at the same time. To adaptively trade off the significance of the two input images, Softmax is employed to compute per-pixel weights from the two features, which are then used to fuse them.
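The three strategies can be written in a few lines; a sketch assuming both feature maps have identical shape, with the Softmax weighting being our reading of the strategy (the paper's exact formula is not reproduced here):

```python
import torch

def fuse_average(f1, f2):
    # Infrared/visible fusion: neither source is favoured, so average the features.
    return (f1 + f2) / 2

def fuse_max(f1, f2):
    # Multi-focus fusion: keep the more prominent (in-focus) response at each position.
    return torch.maximum(f1, f2)

def fuse_softmax(f1, f2):
    # Adaptive trade-off: softmax over the two sources yields per-pixel weights summing to 1.
    w = torch.softmax(torch.stack([f1, f2], dim=0), dim=0)
    return w[0] * f1 + w[1] * f2
```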

Fig. 9: Comparison of our PPT Fusion with 18 state-of-the-art methods on one pair of visible light and infrared images from (1) the TNO dataset and (2) the RoadScene dataset.
TNO Road
Methods SCD SSIM CC SCD SSIM CC
CBF 1.3193 0.6142 0.8888 0.0621 0.7395 0.6560 0.8784 0.6028 0.8312 0.0388 0.7387 0.7118
CVT 1.5812 0.7025 0.9156 0.0275 0.8118 0.7111 1.3418 0.6641 0.8643 0.0318 0.7568 0.7439
DTCWT 1.5829 0.7057 0.9186 0.0232 0.8182 0.7121 1.3329 0.6567 0.8509 0.0412 0.7326 0.7420
GTF 1.0516 0.6798 0.9056 0.0126 0.7536 0.6514 0.8072 0.6748 0.8654 0.0096 0.7037 0.6961
MSVD 1.5857 0.7360 0.9036 0.0022 0.7888 0.7280 1.3458 0.7128 0.8459 0.0034 0.7356 0.7518
RP 1.5769 0.6705 0.8929 0.0583 0.7542 0.7124 1.2829 0.6341 0.8408 0.0773 0.7229 0.7300
DeepFuse 1.5523 0.7135 0.9041 0.0202 0.7698 0.7243 0.5462 0.4601 0.8215 0.2213 0.5387 0.6687
DenseFuse 1.5329 0.7108 0.9061 0.0352 0.7847 0.6966 1.3491 0.7404 0.8520 0.0001 0.7602 0.7543
FusionGan 0.6876 0.6235 0.8875 0.0352 0.6422 0.6161 0.8671 0.6142 0.8398 0.0168 0.6433 0.7312
IFCNN 1.6126 0.7168 0.9007 0.0346 0.8257 0.7004 1.3801 0.7046 0.8509 0.0315 0.7606 0.7647
MdLatLRR 1.6248 0.7306 0.9148 0.0216 0.8427 0.7137 1.3636 0.7369 0.8575 0.0004 0.7758 0.7527
DDcGAN 1.3831 0.5593 0.8764 0.1323 0.6571 0.7079 0.5462 0.4601 0.8215 0.2213 0.5387 0.6687
ResNetFusion 0.1937 0.4560 0.8692 0.0550 0.4247 0.3887 0.2179 0.3599 0.7963 0.0550 0.3462 0.5376
Nestfuse 1.5742 0.7057 0.9029 0.0428 0.7833 0.7006 1.2583 0.6666 0.8571 0.0459 0.6894 0.7539
FusionDN 1.6148 0.6201 0.8833 0.1540 0.7328 0.7170 1.1882 0.6454 0.8423 0.0780 0.7658 0.7204
HybridMSD 1.5773 0.7094 0.9083 0.0435 0.8208 0.7072 1.2642 0.6961 0.8552 0.0460 0.7679 0.7565
PMGI 1.5738 0.6976 0.9001 0.0340 0.7632 0.7281 1.0989 0.6640 0.8487 0.0146 0.7606 0.7195
U2Fusion 1.5946 0.6758 0.8942 0.0800 0.7439 0.7238 1.3551 0.6813 0.8453 0.0671 0.7831 0.7266
Average 1.6261 0.7487 0.8954 0.0060 0.7719 0.7313 1.5888 0.7528 0.8982 0.0109 0.7953 0.7183
Ours Softmax 1.5858 0.7568 0.9077 0.0005 0.7945 0.7401 1.5870 0.7505 0.8959 0.0145 0.7836 0.7159
TABLE I: Visible and Infrared Image Fusion Quantitative Analysis. This table contains quantitative analysis indicators of the TNO and RoadScene datasets.
Methods STD EN CC VIFF CrossEntropy SSIM MI
GFF 50.1114 7.2605 0.9167 0.9545 0.7699 0.0634 0.8147 0.8867 14.5211 0.0115
LPSR 50.6157 7.2640 0.9185 0.9557 0.8102 0.0672 0.8154 0.8845 14.5279 0.0731
MFCNN 50.2571 7.2538 0.9127 0.9544 0.7699 0.0643 0.8134 0.8870 14.5075 0.0015
densefuse 52.6174 7.2882 0.8875 0.9671 0.7781 0.5948 0.8382 0.8744 14.5765 0.0110
IFCNN 49.7551 7.2384 0.9112 0.9600 0.7742 0.0779 0.8300 0.8735 14.4768 0.0762
Max 60.6446 7.5239 0.9176 0.9870 0.8086 0.0242 0.8802 0.8911 15.0478 0.0101
Ours Softmax 60.3025 7.5252 0.9182 0.9871 0.8020 0.0278 0.8818 0.8911 15.0504 0.0103
TABLE II: Multi-focus Image Fusion Quantitative Analysis. This table contains quantitative analysis indicators of the Lytro datasets.

V Evaluation

V-A Datasets and Implementation

We perform experiments on two image fusion tasks, i.e., multi-focus image fusion and infrared-visible image fusion.

For the infrared and visible light image fusion task, we use the TNO dataset [39] and the RoadScene dataset [43]. For the RoadScene dataset, we convert the images to gray scale to keep the visible light image channels consistent with the infrared images. For the multi-focus image fusion task, we use the Lytro dataset [30]. Each Lytro image is split according to its RGB channels to obtain three pairs of images, and the fusion results are merged back along the RGB channels to obtain the fused color image.

As the network input has a fixed size, we split each input image into several patches with a sliding window of the same size, filling any insufficient area with the value 128 (the pixel range is 0-255). After fusing each patch pair, the final fused image is obtained by stitching the fused patches in the order of the split.
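A sketch of this tiling step (pad to a multiple of the window size with the value 128, process window by window, then stitch back in split order). The window size and the use of NumPy are assumptions; only the padding value and the stitching order come from the text above:

```python
import numpy as np

def split_with_padding(img, win, pad_value=128):
    """Split a gray-scale image (H, W) into win x win tiles, padding the border with 128."""
    H, W = img.shape
    Hp = -(-H // win) * win                # round up to multiples of the window size
    Wp = -(-W // win) * win
    canvas = np.full((Hp, Wp), pad_value, dtype=img.dtype)
    canvas[:H, :W] = img
    tiles = [canvas[i:i + win, j:j + win]
             for i in range(0, Hp, win) for j in range(0, Wp, win)]
    return tiles, (H, W, Hp, Wp)

def stitch(tiles, meta, win):
    """Reassemble fused tiles in split order and crop back to the original size."""
    H, W, Hp, Wp = meta
    out = np.zeros((Hp, Wp), dtype=tiles[0].dtype)
    for k, (i, j) in enumerate((i, j) for i in range(0, Hp, win) for j in range(0, Wp, win)):
        out[i:i + win, j:j + win] = tiles[k]
    return out[:H, :W]
```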

V-B Experiments Setting

We input images of the fixed size into the network with a fixed patch size. The optimizer is Adam [13] with a learning rate of 1e-4. The batch size is 1, and we train the network for 50 passes. The experiments are performed on an NVIDIA GeForce GTX 1080 GPU and a 3.60 GHz Intel Core i7-6850K CPU with 64 GB of memory.
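For reference, a hedged sketch of the reconstruction training loop implied by these settings (Adam, learning rate 1e-4, batch size 1, 50 passes over the data). `encoder`, `decoder`, and `train_loader` are placeholders for the PPT encoder and MLP decoder sketched in Sec. III and any image loader; none of these names come from the paper:

```python
import torch
import torch.nn as nn

# Placeholders: `encoder` / `decoder` are the modules sketched in Sec. III; `train_loader`
# yields single gray-scale images of the fixed network input size (batch size 1).
params = list(encoder.parameters()) + list(decoder.parameters())
optimizer = torch.optim.Adam(params, lr=1e-4)      # Adam with learning rate 1e-4
criterion = nn.MSELoss()                           # reconstruction loss, Equ. (9)

for epoch in range(50):                            # 50 training passes
    for img in train_loader:
        recon = decoder(encoder(img))              # auto-encoder forward pass
        loss = criterion(recon, img)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```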

V-C Quantitative Analysis

V-C1 Visible and Infrared Image Fusion

We compare PPT Fusion with eighteen state-of-the-art methods, including the Cross Bilateral Filter fusion method (CBF) [15], Curvelet Transform (CVT) [31], Dual-Tree Complex Wavelet Transform (DTCWT) [16], Gradient Transfer (GTF) [24], Multi-resolution Singular Value Decomposition (MSVD) [29], Ratio of Low-pass Pyramid (RP) [40], DeepFuse [35], DenseFuse [19], FusionGAN [28], IFCNN [45], MDLatLRR [18], DDcGAN [26], ResNetFusion [25], NestFuse [17], FusionDN [43], HybridMSD [47], PMGI [44], and U2Fusion [42].

As shown in Fig. 9, we report the results of all approaches and highlight some specific local areas. It can be seen that the fusion result of our PPT Fusion retains the necessary thermal radiation information of the person. The global semantic characteristics of our result are also more obvious, i.e., the contrast between the sky and the house. In the highlighted details of the branches, our result reflects more details from both the visible light and infrared images.

We use six related indicators to quantitatively evaluate the fusion quality, namely the Sum of Correlation Coefficients (SCD) [1], Structural SIMilarity (SSIM) [41], pixel feature mutual information [8], the metrics of [14] and [23], and the correlation coefficient (CC) [9]. SCD and CC calculate the correlation coefficients between images, SSIM and the metric of [14] calculate the similarity between images, the pixel feature mutual information measures the mutual information between features, and the metric of [23] represents the ratio of noise added to the fused image. Among these indicators, the lower the noise ratio and the higher the other values, the better the fusion quality of the approach.
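As a small illustration of how two of these indicators can be computed (a sketch using scikit-image's SSIM and NumPy's correlation coefficient; averaging the score over the two source images is our own convention, and the other four indicators are not shown):

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def ssim_cc(fused, src_a, src_b):
    """Average SSIM and correlation coefficient of the fused image against both sources."""
    s = (ssim(fused, src_a, data_range=255) + ssim(fused, src_b, data_range=255)) / 2
    cc = (np.corrcoef(fused.ravel(), src_a.ravel())[0, 1]
          + np.corrcoef(fused.ravel(), src_b.ravel())[0, 1]) / 2
    return s, cc
```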

As shown in Table I, the best value for each indicator is marked in bold red italics and the second-best value in bold black italics. It can be seen that PPT Fusion ranks in the top two on multiple indicators, and the remaining indicators are also better than those of most methods. This demonstrates that PPT Fusion maintains an effective structural similarity with the source images and preserves a large information correlation with them, without introducing noise or artifacts.

V-C2 Multi-focus Image Fusion

We compare PPT Fusion with several state-of-the-art methods, namely Guided Filtering Fusion (GFF) [20], Laplace Pyramid Sparse Representation (LPSR) [22], MFCNN [21], DenseFuse [19], and IFCNN [45], as shown in Fig. 10.

Fig. 10: Comparison of the PPT Fusion result with 6 state-of-the-art methods on one pair of multi-focus images from the Lytro dataset.

On the basis of five of the previous indicators, SCD, SSIM, CC, and two of the metrics above, we add four additional indicators, namely Entropy (EN) [37], Visual Information Fidelity (VIFF) [10], Cross Entropy [14], and Mutual Information (MI) [34]. EN measures the amount of information. VIFF measures the loss of image information due to the distortion process. Cross Entropy and MI measure the degree of information correlation between images. Among them, the lower the Cross Entropy and the higher the other values, the better the fusion quality of the approach.

From Table II, we can see that PPT Fusion ranks in the top two on all indicators. This shows that the fused image produced by PPT Fusion effectively preserves the source details while remaining sufficiently clear.

VI Conclusion

In this study, we propose a fully-transformer feature extraction module, termed the Pyramid Patch Transformer (PPT) module. First, the proposed Patch Transformer can map high-resolution images to a feature space without resolution loss. Second, we propose the Pyramid Transformer, whose transformer receptive field extracts both local and global information from images. The PPT module can map images into a set of multi-scale, multi-dimensional, and multi-angle features. We successfully apply the PPT module to different image fusion tasks and achieve state-of-the-art results. This proves that a fully-transformer design with a reasonable structure can represent image features without information loss, demonstrating the effectiveness and universality of the PPT module. We believe that the proposed PPT module is of reference significance for low-level vision tasks and image generation tasks.

References

  • [1] V. Aslantas and E. Bendes (2015) A new image quality metric for image fusion: the sum of the correlations of differences. Aeu-international Journal of electronics and communications 69 (12), pp. 1890–1896. Cited by: §V-C1.
  • [2] Q. Chen, H. Zhao, W. Li, P. Huang, and W. Ou (2019) Behavior sequence transformer for e-commerce recommendation in alibaba. In Proceedings of the 1st International Workshop on Deep Learning Practice for High-Dimensional Sparse Data, pp. 1–4. Cited by: §I.
  • [3] Z. Dai, Z. Yang, Y. Yang, J. Carbonell, Q. V. Le, and R. Salakhutdinov (2019) Transformer-XL: attentive language models beyond a fixed-length context. arXiv preprint arXiv:1901.02860. Cited by: §I.
  • [4] J. Devlin, M. Chang, K. Lee, and K. Toutanova (2018) Bert: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Cited by: §I.
  • [5] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, et al. (2020) An image is worth 16x16 words: transformers for image recognition at scale. arXiv preprint arXiv:2010.11929. Cited by: §I.
  • [6] Y. Fu, X. Wu, and T. Durrani (2021) Image fusion based on generative adversarial network consistent with perception. Information Fusion. Cited by: §I.
  • [7] Y. Fu and X. Wu (2021) A dual-branch network for infrared and visible image fusion. arXiv preprint arXiv:2101.09643. Cited by: §I.
  • [8] M. Haghighat and M. A. Razian (2014) Fast-fmi: non-reference image fusion metric. In 2014 IEEE 8th International Conference on Application of Information and Communication Technologies (AICT), pp. 1–3. Cited by: §V-C1.
  • [9] S. Han, H. Li, H. Gu, et al. (2008) The study on image fusion for high spatial resolution remote sensing images. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. XXXVII. Part B 7, pp. 1159–1164. Cited by: §V-C1.
  • [10] Y. Han, Y. Cai, Y. Cao, and X. Xu (2013) A new image fusion performance metric based on visual information fidelity. Information Fusion 14 (2), pp. 127–135. Cited by: §V-C2.
  • [11] D. Hendrycks and K. Gimpel (2016) Gaussian error linear units (gelus). arXiv preprint arXiv:1606.08415. Cited by: §III-D.
  • [12] Y. Jiang, S. Chang, and Z. Wang (2021) Transgan: two transformers can make one strong gan. arXiv preprint arXiv:2102.07074. Cited by: §I.
  • [13] D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv: Learning. Cited by: §V-B.
  • [14] B. S. Kumar (2013) Multifocus and multispectral image fusion based on pixel significance using discrete cosine harmonic wavelet transform. Signal, Image and Video Processing 7 (6), pp. 1125–1143. Cited by: §V-C1, §V-C2.
  • [15] B. S. Kumar (2015) Image fusion based on pixel significance using cross bilateral filter. Signal, image and video processing 9 (5), pp. 1193–1204. Cited by: §V-C1.
  • [16] J. J. Lewis, R. J. Callaghan, S. G. Nikolov, D. R. Bull, and N. Canagarajah (2007) Pixel-and region-based image fusion with complex wavelets. Information fusion 8 (2), pp. 119–130. Cited by: §V-C1.
  • [17] H. Li, X. Wu, and T. Durrani (2020) NestFuse: an infrared and visible image fusion architecture based on nest connection and spatial/channel attention models. IEEE Transactions on Instrumentation and Measurement. Cited by: §I, §V-C1.
  • [18] H. Li, X. Wu, and J. Kittler (2020) MDLatLRR: a novel decomposition method for infrared and visible image fusion. IEEE Transactions on Image Processing. Cited by: §V-C1.
  • [19] H. Li and X. Wu (2018) Densefuse: a fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28 (5), pp. 2614–2623. Cited by: §I, §II-B, §V-C1, §V-C2.
  • [20] S. Li, X. Kang, and J. Hu (2013) Image fusion with guided filtering. IEEE Transactions on Image processing 22 (7), pp. 2864–2875. Cited by: §V-C2.
  • [21] Y. Liu, X. Chen, H. Peng, and Z. Wang (2017) Multi-focus image fusion with a deep convolutional neural network. Information Fusion 36, pp. 191–207. Cited by: §V-C2.
  • [22] Y. Liu, S. Liu, and Z. Wang (2015) A general framework for image fusion based on multi-scale transform and sparse representation. Information fusion 24, pp. 147–164. Cited by: §V-C2.
  • [23] Z. Liu, E. Blasch, Z. Xue, J. Zhao, R. Laganiere, and W. Wu (2011) Objective assessment of multiresolution image fusion algorithms for context enhancement in night vision: a comparative study. IEEE transactions on pattern analysis and machine intelligence 34 (1), pp. 94–109. Cited by: §V-C1.
  • [24] J. Ma, C. Chen, C. Li, and J. Huang (2016) Infrared and visible image fusion via gradient transfer and total variation minimization. Information Fusion 31, pp. 100–109. Cited by: §V-C1.
  • [25] J. Ma, P. Liang, W. Yu, C. Chen, X. Guo, J. Wu, and J. Jiang (2020) Infrared and visible image fusion via detail preserving adversarial learning. Information Fusion 54, pp. 85–98. Cited by: §V-C1.
  • [26] J. Ma, H. Xu, J. Jiang, X. Mei, and X. Zhang (2020) DDcGAN: a dual-discriminator conditional generative adversarial network for multi-resolution image fusion. IEEE Transactions on Image Processing 29, pp. 4980–4995. Cited by: §V-C1.
  • [27] J. Ma, W. Yu, C. Chen, P. Liang, X. Guo, and J. Jiang (2020) Pan-GAN: an unsupervised learning method for pan-sharpening in remote sensing image fusion using a generative adversarial network. Information Fusion. Cited by: §I.
  • [28] J. Ma, W. Yu, P. Liang, C. Li, and J. Jiang (2019) FusionGAN: a generative adversarial network for infrared and visible image fusion. Information Fusion 48, pp. 11–26. Cited by: §I, §V-C1.
  • [29] V. Naidu (2011) Image fusion technique using multi-resolution singular value decomposition. Defence Science Journal 61 (5), pp. 479. Cited by: §V-C1.
  • [30] M. Nejati, S. Samavi, and S. Shirani (2015) Multi-focus image fusion using dictionary-based sparse representation. Information Fusion 25, pp. 72–84. Cited by: §V-A.
  • [31] F. Nencini, A. Garzelli, S. Baronti, and L. Alparone (2007) Remote sensing image fusion using the curvelet transform. Information fusion 8 (2), pp. 143–156. Cited by: §V-C1.
  • [32] X. Pan, Z. Xia, S. Song, L. E. Li, and G. Huang (2020) 3D object detection with pointformer. arXiv preprint arXiv:2012.11409. Cited by: §I.
  • [33] C. Pei, Y. Zhang, Y. Zhang, F. Sun, X. Lin, H. Sun, J. Wu, P. Jiang, J. Ge, W. Ou, et al. (2019) Personalized re-ranking for recommendation. In Proceedings of the 13th ACM Conference on Recommender Systems, pp. 3–11. Cited by: §I.
  • [34] H. Peng, F. Long, and C. Ding (2005) Feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy. IEEE Transactions on Pattern Analysis and Machine Intelligence 27 (8), pp. 1226–1238. Cited by: §V-C2.
  • [35] K. R. Prabhakar, V. S. Srikar, and R. V. Babu (2017) DeepFuse: a deep unsupervised approach for exposure fusion with extreme exposure image pairs.. In ICCV, pp. 4724–4732. Cited by: §I, §II-B, §V-C1.
  • [36] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever (2019) Language models are unsupervised multitask learners. OpenAI blog 1 (8), pp. 9. Cited by: §I.
  • [37] J. W. Roberts, J. A. van Aardt, and F. B. Ahmed (2008) Assessment of image fusion procedures using entropy, image quality, and multispectral classification. Journal of Applied Remote Sensing 2 (1), pp. 023522. Cited by: §V-C2.
  • [38] W. Song, C. Shi, Z. Xiao, Z. Duan, Y. Xu, M. Zhang, and J. Tang (2019) AutoInt: automatic feature interaction learning via self-attentive neural networks. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, pp. 1161–1170. Cited by: §I.
  • [39] A. Toet et al. (2014) TNO image fusion dataset. Figshare. data. Cited by: §V-A.
  • [40] A. Toet (1989) Image fusion by a ratio of low-pass pyramid. Pattern Recognition Letters 9 (4), pp. 245–253. Cited by: §V-C1.
  • [41] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli (2004) Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13 (4), pp. 600–612. Cited by: §V-C1.
  • [42] H. Xu, J. Ma, J. Jiang, X. Guo, and H. Ling (2020) U2fusion: a unified unsupervised image fusion network. IEEE Transactions on Pattern Analysis and Machine Intelligence. Cited by: §V-C1.
  • [43] H. Xu, J. Ma, Z. Le, J. Jiang, and X. Guo (2020) FusionDN: a unified densely connected network for image fusion.. In AAAI, pp. 12484–12491. Cited by: §V-A, §V-C1.
  • [44] H. Zhang, H. Xu, Y. Xiao, X. Guo, and J. Ma (2020) Rethinking the image fusion: a fast unified image fusion network based on proportional maintenance of gradient and intensity.. In AAAI, pp. 12797–12804. Cited by: §V-C1.
  • [45] Y. Zhang, Y. Liu, P. Sun, H. Yan, X. Zhao, and L. Zhang (2020) IFCNN: a general image fusion framework based on convolutional neural network. Information Fusion 54, pp. 99–118. Cited by: §II-B, §V-C1, §V-C2.
  • [46] S. Zheng, J. Lu, H. Zhao, X. Zhu, Z. Luo, Y. Wang, Y. Fu, J. Feng, T. Xiang, P. H. Torr, et al. (2020) Rethinking semantic segmentation from a sequence-to-sequence perspective with transformers. arXiv preprint arXiv:2012.15840. Cited by: §I.
  • [47] Z. Zhou, B. Wang, S. Li, and M. Dong (2016) Perceptual fusion of infrared and visible images through a hybrid multi-scale decomposition with gaussian and bilateral filters. Information Fusion 30, pp. 15–26. Cited by: §V-C1.
  • [48] X. Zhu, W. Su, L. Lu, B. Li, X. Wang, and J. Dai (2020) Deformable detr: deformable transformers for end-to-end object detection. arXiv preprint arXiv:2010.04159. Cited by: §I.