In the task of image fusion, robustness and the ability to perceive dynamic context have long been the bottleneck limiting the application and promotion of existing image fusion technology, whereas the human visual system exhibits strong robustness and contextual awareness in multi-source perception. Research in cognitive psychology and neurobiology ParisiGermanI2019Cllw ; GuangYang2009Smds shows that human beings are capable of fast, continuous learning, which is closely linked to a multi-task auxiliary learning mechanism guided by working memory and to the characteristics of subjective attention. It is these two characteristics of the human brain that give human beings strong robustness and dynamic contextual perception in various visual tasks. In recent years, many image fusion algorithms Bavirisetti2016Two ; Lahoud2019FastZERO ; Liu2017InfraredCNN ; MaFusionGAN have been proposed that draw inspiration from biological characteristics, but visual attention is rarely studied in existing image fusion algorithms. Existing studies of visual attention mostly derive saliency feature maps from differences in contrast, brightness, and other properties of the data itself Ma2017InfraredWLS ; Bavirisetti2016Two ; Lahoud2019FastZERO ; Liu2017InfraredJSR-SD , without considering the relationship between the top-down Ma2017Multi
task-guided subjective visual attention characteristics and the cross-modal image fusion task. In the task of image fusion, existing algorithms, especially deep learning methods that lack ground truth labels, fuse image information regardless of whether it serves humans' subjective intention. The main reasons include the following three aspects. First, deep learning methods depend heavily on the objective loss function fang2019crossmodal , but no complete objective image quality loss function has yet been found for images lacking ground truth labels fang2019crossmodal . Second, existing image fusion theory approaches the task mainly from the standpoint of improving image quality. Yet image fusion serves not only humans' subjective aesthetic needs; in practical applications, its purpose is also to help human beings complete specific tasks quickly and accurately through the complementary advantages of different image data. Finally, some image fusion tasks must ensure that human visual attention is not distracted (e.g., image fusion for vision systems in the aviation field, or infrared and visible image fusion in military applications), which places higher demands on image fusion theory. For example, in the aviation combined vision system (CVS) image fusion task, fusing the enhanced vision system (EVS) image with the synthetic vision system (SVS) image can effectively improve pilots' contextual awareness of airport runway information.
However, CVS image fusion must satisfy two conditions: on the one hand, due to the limitations of airborne hardware, image fusion has stringent real-time requirements; on the other hand, the fusion result must account for the characteristics of human attention, so as to reduce the impact of scattered fusion information on the pilot's attention. In this setting, existing image fusion theory is no longer applicable. At the same time, when dealing with new tasks, human beings tend to work under the guidance of subjective intention.
To solve these problems, we propose an image fusion theory based on human cognitive psychology. This theory effectively combines humans' subjective attention with the image fusion task, reduces the amount of data to be fused, and improves the contextual awareness of image fusion. This perspective has not been explored by existing image fusion research. Because our theory is grounded in the subjective attention that guides people when handling specific tasks, our fusion results differ in appearance from those of traditional methods; however, they conform to human visual characteristics and are helpful for the task a human is currently performing.
To demonstrate the advantages of our image fusion theory over the existing mainstream algorithmic frameworks, we give a representative example in Fig. 1: a qualitative comparison on an infrared and visible image data set. To facilitate subjective visual comparison of image quality, we also perform image fusion on regions outside the visual area of concern; in practical applications, we apply enhanced fusion only to the regions of human subjective visual concern.
The main contributions of our work include the following three points:
First, we analyze the subjective attention characteristics of the human visual system, propose an image fusion theory guided by the human subjective visual attention mechanism, and on this basis propose a cross-modal subjective visual attention detection method.
Second, we analyze how the human brain uses prior knowledge to assist learning in new tasks, and propose a cross-modal image fusion optimization theory based on multi-task auxiliary learning.
Third, to address the loss of global features in the image fusion process, we propose an image fusion theory that combines local and global features.
Finally, based on the theory of subjective-attention-guided multi-task auxiliary learning for image fusion, we propose an unsupervised image fusion network framework. The results show that our image fusion theory achieves stronger robustness and contextual awareness than existing image fusion theory.
2 Related work
In this chapter, we review the visual attention mechanisms related to image fusion tasks, the multi-task auxiliary learning mechanism, and related work on feature representation, and we motivate our optimization theory for multi-task auxiliary image fusion guided by human visual attention.
2.1 Visual attention
As a manifestation of human intention, visual attention is widely used in target detection WangW.2018SODD and object segmentation WangWenguan2015Sgvo ; WenguanWang2018SVOS . According to its source, visual attention can be divided into two types: the bottom-up TheeuwesJ.2005Tabc ; YanYin2018Bsat attention model and the top-down TheeuwesJ.2005Tabc ; Ma2017Multi ; YanYin2018Bsat attention model. Bottom-up, unconscious visual attention is also called salient attention; it is driven mainly by differences in low-level features such as brightness, contrast, and edges of the image data itself, requires no active intervention, and is unrelated to humans' subjective intention. Top-down, conscious attention is also called focused attention and is related to humans' subjective intention or task. In image fusion, current research on visual attention is represented mainly by the visual saliency map and weighted least squares optimization method proposed by Zhang2015A for infrared and visible image fusion, which combines the non-subsampled shearlet transform with a visual saliency map. Although further image fusion algorithms Lahoud2019FastZERO ; ZhangInfrared ; Bavirisetti2016Two ; Liu2017InfraredJSR-SD ; Ma2017InfraredWLS have since been proposed, the visual attention they involve is only the saliency feature map of the low-level data, which is no different in principle from the method of Zhang2015A . Meanwhile, research in cognitive psychology shows that human visual attention is selective Treisman1980A , in both feature weight and feature region. In feature weight selection, features are filtered according to their importance.
An effective feature channel HuJie2019SN receives more weight, and an invalid channel receives less. Region selection in visual attention JaderbergM.2015Stn ; WangX.2018NNN is determined mainly by weighting the features at all pixel positions rather than by the pixels themselves. Although these two characteristics have been widely used in many fields of computer vision, there are few related results in image fusion; a channel attention module was applied to the multi-focus image fusion task for the first time in YanXiang2018UDMI ; fang2019crossmodal . In this paper, we use a top-down task-guided attention model to introduce the feature selection and region selection characteristics of visual attention into cross-modal subjective attention detection.
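As an illustration of feature-weight selection, the squeeze-and-excitation style channel attention cited above (HuJie2019SN) can be sketched as follows. This is a minimal NumPy sketch, not the implementation used in this paper; the bottleneck weights `w1` and `w2` are random stand-ins for learned parameters.

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """SE-style channel attention: reweight channels by predicted importance.

    feat: (C, H, W) feature map; w1: (C//r, C) and w2: (C, C//r) are the
    two fully connected layers of the excitation bottleneck.
    """
    squeeze = feat.mean(axis=(1, 2))              # global average pool -> (C,)
    hidden = np.maximum(w1 @ squeeze, 0.0)        # ReLU bottleneck
    scale = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))  # sigmoid gate in (0, 1)
    return feat * scale[:, None, None]            # rescale each channel

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))
w1 = rng.standard_normal((2, 8))   # reduction ratio r = 4
w2 = rng.standard_normal((8, 2))
out = channel_attention(feat, w1, w2)
```

Effective channels keep most of their activation (gate near 1), while invalid channels are suppressed (gate near 0), which is exactly the feature-weight selection described above.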
2.2 Multi-task auxiliary learning
With the development of deep learning, several unsupervised deep learning methods have been proposed for cross-modal image fusion. For example, to address insufficient image feature extraction, Li Li2018DenseFuse proposed the DenseFuse deep learning network framework for the infrared and visible image fusion task. This method uses the structural similarity index (SSIM) 1284395 and a pixel loss as loss functions to reconstruct single-modal images, and the fusion criterion is applied only in the test phase. Although it uses deep learning, it has two problems. First, the fusion criterion is not included in model training, so the network cannot learn fusion weights for cross-modal images; second, only SSIM and a pixel objective are used as the loss for single-modal image reconstruction, and because of the visual masking effect of the human visual perception system, the SSIM index is insufficient to represent image quality when quality degradation is severe. To address the problems of both traditional and deep learning methods, Li Li_2018DL ; Lahoud2019FastZERO proposed a deep learning image fusion method based on knowledge complementarity between the two: deep features are extracted with a VGG19 model, while weighted-average and maximum rules are still used as fusion criteria. To improve the generality of image fusion algorithms, Zhang ZhangYu2020IAgi
proposed a general image fusion framework (IFCNN) based on a fully convolutional neural network. The method performs supervised training on multi-focus data sets, changes the image fusion criterion according to the fusion task, and applies the pretrained weights directly to cross-modal image fusion tasks; finally, the fusion result is enhanced with the CLAHE algorithm. Although this method has achieved some results, the distribution of image noise differs across fusion tasks, especially in cross-modal data sets, so weights trained on a single data set cannot effectively model multiple noise distributions. At the same time, these image fusion algorithms focus on feature extraction and combination through network design, and they do not fundamentally address the learning optimization problem caused by the imperfect objective loss function of the image fusion task. Although image quality evaluation methods based on generative networks LinK.-Y.2018HNIQ and deep learning KimJongyoo2017Dbiq exist, their scores still differ considerably from human subjective evaluation. According to research in cognitive psychology MillerCPortex ; LiuDing2014Mpad , the human brain can reason about and explore new target tasks based on working memory, identifying what is common and what is specific to the new task; this is the multi-task auxiliary learning characteristic of the human visual perception system, where working memory is the prior knowledge learned from multiple tasks. Although the multi-task auxiliary learning mechanism has achieved remarkable results in computer vision tasks such as image segmentation KendallAlex2018MLUU
and head pose detection WangHaofan2019Hccf , no related results have been reported in image fusion. This mechanism offers a new idea for image fusion theory: on the one hand, introducing multi-task auxiliary learning makes the fusion process more consistent with the mechanism of human visual perception; on the other hand, it helps avoid the impact of an imperfect objective loss on fusion quality. Inspired by this, and based on previous research on the multi-task auxiliary learning mechanism fang2019crossmodal , we introduce an image enhancement task and a subjective attention detection task into the cross-modal image fusion task.
2.3 Global and local features
In existing image fusion theory, traditional image processing methods are often combined with deep learning methods to improve fusion quality Lahoud2019FastZERO ; Li_2018DL ; Liu2017InfraredCNN ; Yu2017medicalCNN : on the one hand, this overcomes the lack of robustness in hand-crafted feature extraction; on the other hand, it alleviates to some extent the loss of global features in end-to-end image fusion Lahoud2019FastZERO ; Li_2018DL ; Liu2017InfraredCNN ; Yu2017medicalCNN ; PrabhakarK.Ram2017DADU ; MaBoyuan2019SAUD . In end-to-end image fusion networks, ZhangYu2020IAgi ; Li2018DenseFuse propose using the convolution weights of the last layer of a pretrained network as the initial weights of the first convolution layer so as to introduce global features. However, high-level semantic features are not determined by the weights of any single convolution layer but by the combination of multiple convolution layers and pooling; because this approach only transfers high-level weights to the first layer and lacks a pooling layer, the extracted features remain local. At the same time, although many methods for global and local feature extraction have been proposed (pyramids Burt1987TheLP , multi-scale transforms Liu2015ALPSR , wavelet transforms Chipman1995Wavelets ) and widely applied in target detection, segmentation, and recognition, no related results exist in deep learning image fusion algorithms. The main reason is that, to prevent the loss of image detail during fusion, the pooling layer is usually omitted.
In this paper, on the one hand, we use a convolutional neural network model to extract the local and global features of the image; on the other hand, we use the visual attention model to introduce global context to a certain extent.
To sum up, based on prior study of human visual selection characteristics and of the auxiliary learning and subjective attention mechanisms of the human visual perception system, we propose an image fusion theory of multi-task collaborative optimization guided by human subjective visual attention, and we build an end-to-end unsupervised learning network framework on this basis. The proposed attention-guided theory alleviates, to a certain extent, the limitations of a single image-evaluation loss and improves the contextual awareness of image fusion. It effectively guides the network toward intentional learning and offers greater practical advantages than guidance by a single loss function. To address the lack of global information in existing image fusion features, we combine high-level semantic information with an upsampling operation to achieve effective fusion of global and local features. By introducing human subjective intention into the image fusion task and simulating the learning mechanism of the human visual system, the fusion result gains dynamic context-awareness and becomes more robust.
The rest of this paper is organized as follows. In Section 2, we discuss existing research and open problems in image fusion theory. In Section 3, we present our image fusion theory and build a top-down unsupervised image fusion framework guided by subjective visual attention. In Section 4, we report experiments and results for different algorithms on several public data sets. The results are qualitatively analyzed and discussed in Section 5, and conclusions are summarized in Section 6.
3 Proposed image fusion method
As shown in Fig. 2, we build a top-down, subjective-attention-guided, multi-task auxiliary learning image fusion framework based on the proposed theory. Our theory of multi-task auxiliary optimization guided by subjective visual attention involves four steps: first, we propose a detection and fusion method for cross-modal joint visual attention saliency maps; second, we propose an image fusion theory guided by visual attention; third, we propose a multi-task auxiliary learning optimization method for image fusion; finally, we construct an end-to-end multi-task auxiliary learning image fusion network guided by visual attention.
3.1 Local and global feature extraction
In image fusion, deep-learning-based algorithms seldom consider global features Lahoud2019FastZERO ; Li_2018DL ; Liu2017InfraredCNN ; Yu2017medicalCNN ; PrabhakarK.Ram2017DADU ; MaBoyuan2019SAUD , yet global features carry rich semantic information and are as important as local features. To address this, we design a fusion module for global and local features. As shown in Fig. 3
, we first extract low-level detail features and high-level semantic features from a pretrained deep learning model, then apply bilinear interpolation to the high-level semantic features, and finally fuse them with the low-level detail features by channel concatenation.
In Eq. (1), one term denotes the fused bottom-level local features, which mainly include the corners, edges, textures, and other details of the image; another denotes the fused high-level semantic features; the remaining symbols denote the local features of the i-th convolution layer of the pretrained model and the upsampling operation applied to the high-level semantic feature maps. In this paper, we use the feature maps of the first and second layers of the pretrained deep learning model (VGG16, VGG19, ResNet101, etc.) as the bottom-level local features, and the feature maps of the third, fourth, and fifth layers as the global features.
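The fusion of local and global features described above can be sketched as follows: the high-level semantic map is bilinearly upsampled to the resolution of the low-level detail map and the two are concatenated along the channel axis. This is a NumPy sketch with synthetic feature maps standing in for the VGG/ResNet activations.

```python
import numpy as np

def bilinear_upsample(feat, out_h, out_w):
    """Bilinearly resize a (C, H, W) feature map to (C, out_h, out_w)."""
    c, h, w = feat.shape
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[None, :, None]   # vertical interpolation weights
    wx = (xs - x0)[None, None, :]   # horizontal interpolation weights
    top = feat[:, y0][:, :, x0] * (1 - wx) + feat[:, y0][:, :, x1] * wx
    bot = feat[:, y1][:, :, x0] * (1 - wx) + feat[:, y1][:, :, x1] * wx
    return top * (1 - wy) + bot * wy

def fuse_local_global(local_feat, global_feat):
    """Upsample the semantic map to the detail map's resolution, then
    concatenate along the channel axis (the fusion in Eq. (1))."""
    _, h, w = local_feat.shape
    up = bilinear_upsample(global_feat, h, w)
    return np.concatenate([local_feat, up], axis=0)

local = np.ones((2, 8, 8))                           # low-level detail features
glob = np.arange(16, dtype=float).reshape(1, 4, 4)   # high-level semantic features
fused = fuse_local_global(local, glob)               # shape (3, 8, 8)
```

In the real network the concatenated tensor is then passed through further convolution layers; here the shapes alone show how the global channel is brought to the local resolution.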
3.2 Multi-task auxiliary learning mechanism
In cross-modal image fusion there are two primary problems. First, cross-modal data sets have few ground truth labels, so supervised training is impossible; second, owing to differences in imaging properties, cross-modal data lack a perfect image quality evaluation loss function. This greatly restricts the practical application of cross-modal image fusion technology. According to research in neuroscience and cognitive psychology MillerCPortex ; LiuDing2014Mpad , the prefrontal cortex of the human brain supports working memory and dynamic situational learning: when dealing with a new task, it guides the task using experience models built from existing tasks, establishing cognition of the new task and thereby achieving rapid learning. At the same time, theoretical research GuangYang2009Smds shows that when facing new tasks, people are more likely to modify an existing deep neural network and add parameters than to build a new network from scratch for every new task. These two abilities are not independent; it is the human multi-task auxiliary learning mechanism that ensures rapid learning on new tasks. Therefore, to narrow the gap between the image fusion task and human learning, we introduce this auxiliary learning mechanism into image fusion and optimize the main fusion task through multi-task auxiliary learning. This not only avoids the problem of an imperfect image quality evaluation loss function, but also introduces humans' subjective visual attention to guide network learning fang2019crossmodal .
where R represents the nonlinear activation layer, and the remaining symbols represent, respectively, a task at layer i, the convolution weight of that task in the layer-i network, the input of that task, and the total number of tasks in the layer-i convolutional neural network fang2019crossmodal .
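The shared-layer formulation above can be sketched as one nonlinear layer (R, here a ReLU) feeding several task heads; gradients from every task's loss would update the shared weights, which is the essence of auxiliary learning. The head names (fusion, attention, enhancement) mirror the one main task and two subtasks of this paper; the random weights are illustrative stand-ins.

```python
import numpy as np

def multitask_forward(x, shared_w, head_ws):
    """Forward pass of a shared layer feeding per-task heads.

    x: input vector; shared_w: weights of the layer shared by all tasks;
    head_ws: dict mapping task name -> that task's head weights.
    """
    shared = np.maximum(shared_w @ x, 0.0)   # shared nonlinear layer R
    return {task: w @ shared for task, w in head_ws.items()}

rng = np.random.default_rng(4)
x = rng.standard_normal(6)
shared_w = rng.standard_normal((5, 6))
heads = {"fusion": rng.standard_normal((3, 5)),        # main task head
         "attention": rng.standard_normal((2, 5)),     # subtask 1 head
         "enhancement": rng.standard_normal((3, 5))}   # subtask 2 head
outs = multitask_forward(x, shared_w, heads)
```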
The SSIM loss measures the similarity of luminance, contrast, and structure between the predicted image and the reference image; the higher the index, the more similar the images. The SSIM loss is shown in Eq. (4).
where μ_x and μ_y represent the means of x and y, and σ_x and σ_y represent the standard deviations of x and y.
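The SSIM loss of Eq. (4) can be sketched as follows. This is a single-window (non-sliding) version, assuming images normalized to [0, 1] and the standard constants c1 = 0.01^2, c2 = 0.03^2; practical SSIM is usually computed over local Gaussian windows and averaged.

```python
import numpy as np

def ssim_loss(x, y, c1=0.01**2, c2=0.03**2):
    """1 - SSIM between two images, computed over one global window."""
    mx, my = x.mean(), y.mean()                 # luminance terms
    vx, vy = x.var(), y.var()                   # contrast terms
    cov = ((x - mx) * (y - my)).mean()          # structure term
    ssim = ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))
    return 1.0 - ssim

rng = np.random.default_rng(1)
a = rng.random((16, 16))
loss_same = ssim_loss(a, a)          # identical images -> loss ~ 0
loss_diff = ssim_loss(a, 1.0 - a)    # inverted image -> larger loss
```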
The perceptual loss function is mainly proposed to overcome the image smoothing and blurring caused by the MSE loss: it compares high-level semantic features rather than raw pixels. The perceptual loss is shown in Eq. (5).
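The structure of the perceptual loss in Eq. (5), feature transform followed by MSE, can be sketched as follows. A single random convolution kernel here stands in for the pretrained VGG feature extractor, purely for illustration; only the two-step structure is what Eq. (5) computes.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive valid-mode 2-D correlation (stand-in for one VGG feature layer)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def perceptual_loss(pred, ref, feature_kernel):
    """MSE in feature space rather than pixel space."""
    fp = conv2d_valid(pred, feature_kernel)
    fr = conv2d_valid(ref, feature_kernel)
    return np.mean((fp - fr) ** 2)

rng = np.random.default_rng(2)
k = rng.standard_normal((3, 3))      # illustrative fixed "feature" kernel
a = rng.random((8, 8))
b = rng.random((8, 8))
loss_same = perceptual_loss(a, a, k)
loss_diff = perceptual_loss(a, b, k)
```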
The edge loss function ZhaoTing2019PFAN is mainly used to learn the edge information of the images to be fused. The edge maps are obtained by convolving the predicted image and the reference image with the second-order differential Laplacian operator; on these edge maps, a binary cross entropy (BCE) loss or an SSIM loss can then be computed. The edge loss combined with SSIM is shown in Eq. (6); the variant combined with BCE simply replaces the SSIM term.
where the operator denotes Laplacian filtering and k represents the Laplacian convolution kernel.
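The edge loss of Eq. (6) can be sketched as follows: both images are convolved with the second-order Laplacian kernel, and an SSIM loss is computed between the resulting edge maps (the BCE variant would replace the final SSIM expression). This is a minimal NumPy sketch, with the same global-window SSIM simplification noted for Eq. (4).

```python
import numpy as np

LAPLACIAN = np.array([[0.,  1., 0.],
                      [1., -4., 1.],
                      [0.,  1., 0.]])

def laplacian_edges(img):
    """Second-order edge map via valid-mode convolution with the Laplacian."""
    h, w = img.shape
    out = np.empty((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * LAPLACIAN)
    return out

def edge_ssim_loss(pred, ref, c1=0.01**2, c2=0.03**2):
    """SSIM loss computed on the Laplacian edge maps of both images."""
    ep, er = laplacian_edges(pred), laplacian_edges(ref)
    mp, mr = ep.mean(), er.mean()
    cov = ((ep - mp) * (er - mr)).mean()
    ssim = ((2 * mp * mr + c1) * (2 * cov + c2)) / \
           ((mp**2 + mr**2 + c1) * (ep.var() + er.var() + c2))
    return 1.0 - ssim

rng = np.random.default_rng(3)
a = rng.random((10, 10))
loss_same = edge_ssim_loss(a, a)   # identical images -> loss ~ 0
```

A linear intensity ramp has zero Laplacian response everywhere, so only genuine second-order structure (edges, corners) contributes to this loss.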
In our proposed image fusion theory there are two subtasks: a top-down visual attention target region detection task and an image enhancement task. The attention detection task detects the salient regions relevant to the subjective task and is the basis for subsequent image fusion. The image enhancement task enhances the local features of the objects relevant to human subjective intention. The loss functions of the two subtasks are the same as that of the main task.
3.3 Image fusion
To make the fusion result better match the subjective attention of human beings, we propose an image fusion theory guided by human subjective visual attention. Subjective attention guides the network model to extract the most task-relevant information from the large amount of data to be fused and to assign different display weights according to task relevance. By giving a large display weight to the currently attended region, we effectively prevent salient information that is unhelpful to the current task from distracting human attention.
3.3.1 Cross-modal visual attention fusion detection method
In our algorithm framework, we mainly use the top-down, task-related attention model, because the bottom-up saliency model can distract humans' task-oriented subjective attention to a certain extent, whereas the top-down visual attention model is driven by humans' own emotions, volition, and external stimuli. To maintain high attention on the landing mission, we avoid fusing features from bottom-up salient regions. Existing top-down attention detection methods focus mostly on saliency map detection for single-modal data WangW.2018SODD , with little work on cross-modal visual attention detection. As a result, existing attention detection models perform well on a specific data set but often produce false or missed detections of the attended region when transferred to data of another modality. To solve this problem, we propose an attention detection theory based on cross-modal image fusion; the corresponding attention detection network is shown in Fig. 2.
In our top-down attention extraction network, the cross-modal attention fusion module can be switched according to the data and task, supporting summation, weighted averaging, nonlinear maximum, and other fusion criteria. Eq. (7) gives the model of our attention fusion.
In Eq. (7), one symbol represents the visual saliency map of the i-th input image, and the others represent the fusion weights of the two images at each pixel. These weights determine the fusion criterion and can be specified manually or learned by optimization. In this paper, we use a nonlinear weighted-sum fusion criterion whose weights are obtained by training the deep learning network.
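The attention fusion model of Eq. (7) reduces to a pixel-wise weighted sum of the per-modality saliency maps. A sketch, with hand-specified weights standing in for the learned ones:

```python
import numpy as np

def fuse_saliency(maps, weights):
    """Pixel-wise weighted fusion of per-modality saliency maps.

    maps: list of (H, W) saliency maps, one per modality.
    weights: list of (H, W) fusion weights summing to 1 at every pixel
    (fixed by hand or learned by the network).
    """
    w = np.stack(weights)
    assert np.allclose(w.sum(axis=0), 1.0), "weights must sum to 1 per pixel"
    return np.sum(np.stack(maps) * w, axis=0)

s_ir = np.full((4, 4), 1.0)    # saliency map from the infrared modality
s_vis = np.full((4, 4), 0.0)   # saliency map from the visible modality
w_ir = np.full((4, 4), 0.75)   # illustrative per-pixel fusion weight
fused = fuse_saliency([s_ir, s_vis], [w_ir, 1.0 - w_ir])
```

Choosing the per-pixel weights is exactly what switching the fusion criterion means: uniform weights give averaging, one-hot weights give a maximum-style selection.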
3.3.2 Image fusion theory of visual attention guidance
In view of the problems in existing image fusion theory, we propose a top-down subjective-attention-guided image fusion method. The method first performs subjective attention detection on the images of different modalities and assigns different display weights to the attended regions according to task relevance. After the subjective attention maps are fused, the fused saliency map is separately dot-multiplied with the original image and with the enhanced feature map, and the products are used as the input of the cross-modal image fusion main task for convolutional neural network training to obtain the final fusion map. The image fusion process is shown in Fig. 4.
The visual attention feature maps of different modalities are fused according to the characteristics of the task, and the fusion result is multiplied with the deep feature map to obtain a feature map with human subjective visual characteristics. Eq. (8) gives the mathematical model of image fusion guided by subjective attention.
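The attention-guided fusion of Eq. (8) can be sketched as follows. The plain average used as the base fusion and the background weight `bg_weight` are illustrative assumptions, not the paper's learned model; the point is that the fused saliency map modulates display weights so that attended regions dominate the result.

```python
import numpy as np

def attention_guided_fusion(img_a, img_b, saliency, bg_weight=0.3):
    """Base fusion modulated by the fused subjective-attention map:
    attended pixels keep full display weight, background is attenuated."""
    base = 0.5 * (img_a + img_b)                     # simple fusion baseline
    weight = bg_weight + (1.0 - bg_weight) * saliency
    return base * weight

a = np.full((4, 4), 0.8)                 # modality A (e.g. EVS)
b = np.full((4, 4), 0.4)                 # modality B (e.g. SVS)
sal = np.zeros((4, 4))
sal[1:3, 1:3] = 1.0                      # attended region (e.g. runway)
out = attention_guided_fusion(a, b, sal)
```

Attended pixels keep the full fused value (0.6 here), while background pixels are scaled down, which mirrors how distracting but task-irrelevant content is suppressed.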
3.4 Unsupervised Attention network
As shown in Fig. 2, we build an unsupervised learning network framework based on the image fusion theory guided by human subjective attention. The framework includes one main task and two subtasks. The cross-modal image fusion task is the main task and uses an end-to-end unsupervised learning network composed of a global and local feature extraction module, a channel attention module, a multi-scale feature extraction module, and a feature fusion module. For main task training, we used an original CVS image data set of 2,522 images at a resolution of 1280x1024, expanded to 7,566 through data augmentation. To increase data diversity, we added another 1,800 pre-registered infrared and visible image pairs at a resolution of 640x480, expanded to 5,400 through data augmentation. In main task training, all images are input as gray-scale images of size 256x256.
Subtask 1 is a cross-modal subjective attention target detection task that uses an end-to-end supervised learning network. In this network we introduce a channel attention module and a spatial attention module to select channel features and regional features. Unlike existing networks, our attention detection network combines the advantages of the different modal image data and effectively integrates multi-scale, global, and local features. For training, we used the 1,800 infrared and visible image pairs and the 2,522 CVS images; these sets include aligned original images and label images with subjective attention, but because the data are cross-modal, neither set has corresponding fused ground truth labels. Through data augmentation we obtained 12,966 training images and 5,000 test images. Due to memory limitations, we resized the preprocessed images to 256x256.
To enhance the visual attention region, we introduce subtask 2, an end-to-end image enhancement network. It uses a stacked autoencoder network to encode and decode images; unlike existing autoencoder networks, we add densely connected modules and a channel attention module. In the subtask 2 training phase, we used more than 70,000 training images and 10,000 validation images from the COCO2014 dataset. Due to video memory limitations, we resized the preprocessed images to 256x256 fang2019crossmodal .
To prevent the main task loss function from affecting the subtask convolution weights, we first train the subtasks separately and then fix their weights. The output of each subtask serves as input to the relevant nodes of the main task.
4.1 Experimental setup
To evaluate the robustness and generality of our algorithm, we performed experiments on the CVS image data set and on infrared and visible image data sets. First, we compare our proposed image fusion framework with existing frameworks on both kinds of data. Then, we evaluate the image quality on the two data sets both subjectively and objectively and analyze the results in detail.
In the first experiment, we used the CVS image data set, which has 4,000 pairs of original images. Second, we obtained infrared and visible images of natural scenes from the RGBT-Saliency dataset WangG.2018Rsdb , which includes 821 pairs of infrared images, visible images, and ground truth labels. In all experiments, we convert all images to gray scale for subsequent fusion training. We compare against mainstream algorithms: fast zero-learning (FZL) Lahoud2019FastZERO , deep learning (DL) Li_2018DL , generative adversarial network for image fusion (FusionGAN) Ma2018Infrared , Laplacian pyramid (LP) Burt1987TheLP , dual-tree complex wavelet transform (DTCWT) Liu2015MultiDSIFT , multi-scale transform with sparse representation (LP-SR) Liu2015ALPSR , dense SIFT (DSIFT) Liu2015MultiDSIFT , convolutional neural network (CNN) Liu2017InfraredJSR-SD , curvelet transform (CVT) Nencini2007RemoteCVT , cross bilateral filter fusion (CBF) Shreyamsha2015ImageCBF , joint sparse representation (JSR) Zhang2013Dictionary , gradient transfer fusion (GTF) Ma2016InfraredGTF , ratio of low-pass pyramid (RP) Toet1989ImageRP , wavelet Chipman1995Wavelets , IFCNN ZhangYu2020IAgi , OURS, and OURS+. To improve the comparability of the different fusion algorithms, results with and without subjective attention are shown separately: the penultimate method (OURS) combines global features, local features, the channel attention mechanism, and image enhancement features, and the last method (OURS+) adds the human subjective attention mechanism on top of it. All compared algorithms have published code, their parameters follow the settings in the corresponding papers, and our code and data will be released on GitHub.
For our proposed algorithm, we also conducted a comparative experiment with and without the channel attention module. Our experimental platform is a desktop with a 3.0 GHz i5-8500 CPU, an RTX 2070 GPU and 32 GB of memory fang2019crossmodal .
4.2 Image fusion experiment of different data sets
4.2.1 CVS image fusion experiment
As shown in Fig. 5 and Fig. 9, compared with existing image fusion algorithms our fusion result gives a better subjective impression, clearer boundaries, and no fusion shake artifacts. In terms of image quality alone, the WAVELET, DL and FZL algorithms fuse better than the other existing algorithms. Algorithms such as CNN, CVT, DTCWT and LP fuse well when the runway region is small, but produce an obvious oscillation artifact when the runway region is large. Admittedly, fusion operators such as CNN, DL, FZL and IFCNN produce less fusion noise than ours, partly because the original images contain considerable noise, and partly because those algorithms include denoising steps such as guided filtering (GF) Shutao2013ImageGF and contrast-limited adaptive histogram equalization (CLAHE) Zuiderveld:1994:CLA:180895.180940 . However, in the CVS fusion task it is inadvisable to filter the EVS image directly, because the EVS image carries real-time airport runway information: far from the touchdown point, the dynamic runway details are very small and are sometimes buried in noise, and existing filtering algorithms would remove them along with the noise. Therefore, we filter only the SVS image data. With our proposed visual-attention-guided fusion, subjective attention can be focused clearly on the runway region, which effectively reduces the distraction of non-runway regions to the pilot's attention.
4.2.2 Infrared and visible image fusion experiment
From Fig. 7 we can see that when image quality is seriously degraded, the existing fusion algorithms, whether traditional, deep-learning-based, or a combination of the two, all perform very poorly, whereas our method recovers the texture details of the image well even before human subjective attention is added. After adding subjective attention, our fusion result becomes even more helpful for the current human search task. From Fig. 8 we can see that when image quality is high, the existing algorithms improve significantly, although a gap remains compared with our result: their fusion boundaries are blurred, while ours are very clear. After adding the human subjective attention mechanism, our fusion result fully retains the advantageous information of both the infrared and visible images. Our comparison set also contains algorithms designed for multi-focus data, such as DSIFT and IFCNN, as well as general-purpose fusion algorithms, such as IFCNN and FZL. These algorithms perform poorly when the original image quality is poor; when the quality is good they outperform the other baselines, although their fusion effect is still not the best.
Algorithms such as DSIFT, FZL and IFCNN rely on hand-crafted feature-engineering fusion pipelines, which to some extent embed human prior knowledge of the task, so they show some consistency across multiple data sets. However, the data sets of different tasks carry different noise, which a hand-designed process cannot fully accommodate, so the fusion effect is not optimal. Even in IFCNN, where the fusion criteria are specified manually and human prior knowledge is thus incorporated, the fusion weights are fixed and lack the ability to adapt to dynamic tasks; when the data to be fused are complex, the fusion effect degrades as well. By combining the multi-task auxiliary learning mechanism with human subjective visual attention, we can effectively improve the dynamic contextual perception ability of image fusion.
4.3 Fusion metrics
In order to quantitatively evaluate the performance of the different algorithms, we mainly use six objective image evaluation indexes: cumulative probability of blur detection (CPBD) NarvekarN.D2011ANIB , just noticeable blur based on human vision (JNB) FerzliR2009ANOI , visual information fidelity (VIF) Han2013A , average gradient (AG) Cui2015Detail , SSIM 1284395 and mutual information (MI) Qu2002Information . We carried out quantitative experiments on both the infrared and visible image data set and the CVS image data set.
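As a concrete illustration, the average gradient (AG) index, which rewards sharp local intensity changes, can be computed as below; this is a common formulation, and the exact variant used in Cui2015Detail may differ in detail:

```python
import numpy as np

def average_gradient(img):
    """Average gradient: mean magnitude of horizontal/vertical finite
    differences, a simple proxy for image sharpness."""
    img = np.asarray(img, dtype=np.float64)
    gx = np.diff(img, axis=1)[:-1, :]  # horizontal differences, cropped to a common shape
    gy = np.diff(img, axis=0)[:, :-1]  # vertical differences, cropped to a common shape
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))
```

On a horizontal intensity ramp with unit steps the index evaluates to 1/sqrt(2), while a flat image scores zero, matching the intuition that blurrier images have smaller gradients.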
From Fig. 9 we can see that our algorithm is not the best on almost all indexes, but this does not mean that it performs poorly; Fig. 10 shows a similar pattern. In the second group of fusion results, the DSIFT and CBF algorithms produce the worst fusion effect, yet their EN, AG, SSIM and MI values are all very high. First, relating each evaluation index to the corresponding fusion result, it is easy to see that several algorithms with very high index values, such as LP-SR, CBF, MSVD, RP and GTF, have very poor subjective image quality. This is because a large gap remains between existing image quality indexes and human subjective visual evaluation. Secondly, the SSIM, VIF and MI indexes evaluate quality against ground truth labels: the closer the fused image is to the real reference image, the larger these three values become. In cross-modal image fusion, however, no real reference image exists, so these three indexes cannot serve as definitive standards of image quality, and quality cannot be judged simply by their magnitude; they merely measure the similarity between the fused image and the cross-modal source data. Although the IFCNN algorithm has an advantage in the second group of objective scores, this is mainly caused by an abnormal value of its JNB index, and in terms of clarity its overall image quality is still far from our fusion result.
Furthermore, to substantiate the superiority of our image fusion theory, we subjectively evaluated the fusion results of the 18 algorithms, using the mean opinion score (MOS) as the image quality index. We invited 10 professional researchers in computer image processing, including professors and doctoral researchers, to participate in the subjective quality evaluation of the fused images of the different algorithms. In the data processing, we remove the highest and the lowest MOS scores and take the average score over each data set's test images as the final evaluation score. The experimental results are shown in Fig. 11.
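The MOS aggregation described above (discard the single highest and lowest rating, then average the rest) can be sketched as a small hypothetical helper:

```python
def trimmed_mos(scores):
    """Mean opinion score with the single highest and lowest ratings removed."""
    if len(scores) <= 2:
        raise ValueError("need more than two raters to trim both extremes")
    s = sorted(scores)[1:-1]  # drop one minimum and one maximum rating
    return sum(s) / len(s)
```

Trimming the extremes makes the score robust to a single overly generous or harsh rater.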
From Fig. 11, we can see that our image fusion quality score is the highest on both the CVS data set and the infrared and visible data set. This further confirms the soundness of our proposed image fusion theory and the accuracy of the above analysis.
The extensive experiments in the fourth chapter verify that the subjective-visual-attention-guided image fusion theory we propose has stronger robustness and contextual awareness than existing image fusion algorithms. We attribute this to several factors. The first is the introduction of the human subjective visual attention mechanism. Many existing fusion theories take improving image quality as their ultimate goal; lacking subjective visual attention as guidance, and given the imperfection of existing image quality indexes and the absence of ground truth labels, the fusion process is seriously deprived of contextual awareness, and the algorithm does not know which features are more helpful to extract and fuse for the current task. This problem is especially prominent for deep learning methods compared with traditional fusion algorithms, because a traditional algorithm is designed for a specific fusion task and thus already embeds some human subjective intention. Although deep learning has strong advantages over traditional algorithms in feature representation and relationship fitting, whether a deep network can learn features consistent with human subjective visual attention is closely tied to whether the loss function is constructed reasonably. Human subjective visual attention can therefore effectively guide the network to understand human intention and to learn for different tasks, leveraging the strong feature representation and nonlinear fitting ability of deep learning. The second factor is the auxiliary learning characteristic of the human visual perception system.
In the image fusion task, existing deep-learning-based fusion algorithms depend heavily on the loss function on the one hand, and on the other hand are mostly single-task methods. In cross-modal image fusion, the existing fusion loss functions cannot effectively guide the network to extract features and fit relationships, so we introduce multi-task auxiliary learning. Single-task training is often affected by data noise, insufficient training data, cross-modal discrepancies and an ill-suited loss function, so some hidden features of the data cannot be learned. Auxiliary task learning can effectively improve the learning ability of the main task. In our network framework, both the reconstruction task and the visual attention detection task can be regarded as sub-tasks of the main image fusion task, and the experimental results confirm the effectiveness of this method. The final factor is the combination of global and local features: global features often carry the high-level semantics of an image, while local features carry more of its detailed texture, and combining the two effectively improves the representation ability.
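The multi-task objective described above, where reconstruction and attention detection act as auxiliary tasks for the main fusion task, can be sketched as a weighted sum of per-task losses; the weights here are illustrative assumptions, not the paper's tuned values:

```python
def multitask_loss(l_fusion, l_recon, l_attention,
                   w_fusion=1.0, w_recon=0.5, w_attention=0.5):
    """Total training loss: the main fusion loss plus weighted auxiliary
    losses (image reconstruction and visual attention detection).
    Weight values are placeholders for illustration."""
    return w_fusion * l_fusion + w_recon * l_recon + w_attention * l_attention
```

In practice the auxiliary weights could also be learned, e.g. via the uncertainty weighting of Kendall and Cipolla cited in the references, rather than fixed by hand.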
Based on the robustness and contextual awareness of the human visual perception system, we proposed a cross-modal image fusion theory guided by human subjective visual attention. It differs from current mainstream algorithms in three main ways. First, our fusion theory is guided by human subjective visual attention rather than following traditional fusion theory, so the fusion result is more conducive to assisting human decision-making in practical tasks. Secondly, an auxiliary learning mechanism is introduced into the fusion task, which effectively optimizes it and, to a certain extent, alleviates the fusion problems caused by image quality evaluation loss functions. Finally, the proposed theory is based on unsupervised learning and needs no ground truth labels, which improves the universality of the algorithm. Extensive experiments show that our fusion theory has stronger robustness and contextual awareness than existing mainstream theories. Although our framework does not fully simulate human visual perception, the characteristics we do simulate are consistent with the mechanisms of the human visual system. While our theory achieves relatively good results compared with existing algorithms in cross-modal image fusion, the following problems remain. First, we have only experimented on the CVS image data set and the infrared and visible image data sets, and the work needs to be extended to more fusion tasks later. Secondly, within the fusion task we need to deepen the research on working memory and the contextual dynamic perception module, which is necessary for future intelligent image fusion theory.
This work was supported by the National Natural Science Foundation of China under Grants nos. 61871326 and 61231016, and by the Shanxi Natural Science Basic Research Program under Grant no. 2018JM6116.
- (1) G. I. Parisi, R. Kemker, J. L. Part, C. Kanan, S. Wermter, Continual lifelong learning with neural networks: A review, Neural Networks 113 (2019) 54–71.
- (2) G. Yang, F. Pan, W.-B. Gan, Stably maintained dendritic spines are associated with lifelong memories, Nature 462 (7275).
- (3) D. P. Bavirisetti, R. Dhuli, Two-scale image fusion of visible and infrared images using saliency detection, Infrared Physics & Technology 76 (2016) 52–64.
- (4) F. Lahoud, S. Süsstrunk, Fast and efficient zero-learning image fusion, arXiv.org.
- (5) Y. Liu, X. Chen, J. Cheng, H. Peng, Z. Wang, Infrared and visible image fusion with convolutional neural networks, International Journal of Wavelets Multiresolution & Information Processing 16 (3) (2017) 1–20.
- (6) M. Jiayi, Y. Wei, L. Pengwei, L. Chang, J. Junjun, Fusiongan: A generative adversarial network for infrared and visible image fusion, Information Fusion 48 (2019) 11 – 26. doi:https://doi.org/10.1016/j.inffus.2018.09.004.
- (7) J. Ma, Z. Zhou, B. Wang, H. Zong, Infrared and visible image fusion based on visual saliency map and weighted least square optimization, Infrared Physics & Technology 82 (2017) 8–17.
- (8) C. Liu, Y. Qi, W. Ding, Infrared and visible image fusion method based on saliency detection in sparse domain, Infrared Physics & Technology 83 (2017) 94–102.
- (9) K. T. Ma, L. Li, P. Dai, J. H. Lim, Z. Qi, Multi-layer linear model for top-down modulation of visual attention in natural egocentric vision, in: 2017 IEEE International Conference on Image Processing (ICIP), 2017.
- (10) A. Fang, X. Zhao, Y. Zhang, A cross-modal image fusion theory guided by human visual characteristics (2019). arXiv:1912.08577.
- (11) W. Wang, J. Shen, X. Dong, A. Borji, Salient object detection driven by fixation prediction, IEEE Computer Society, 2018, pp. 1711–1720.
- (12) W. Wang, J. Shen, F. Porikli, Saliency-aware geodesic video object segmentation, in: IEEE Conference on Computer Vision and Pattern Recognition. Proceedings, Vol. 07-12-, 2015, pp. 3395–3402.
- (13) W. Wang, J. Shen, R. Yang, F. Porikli, Saliency-aware video object segmentation, IEEE Transactions on Pattern Analysis and Machine Intelligence 40 (1) (2018) 20–33.
- (14) J. Theeuwes, L. Itti, J. Fecteau, Top-down and bottom-up control of visual selection., Acta Psychologica 135 (2) (2005) 77–99.
- (15) Y. Yan, L. Zhaoping, W. Li, Bottom-up saliency and top-down learning in the primary visual cortex of monkeys, Proceedings of the National Academy of Sciences of the United States of America 115 (41).
- (16) B. Zhang, X. Lu, H. Pei, Y. Zhao, A fusion algorithm for infrared and visible images based on saliency analysis and non-subsampled shearlet transform, Infrared Physics & Technology 73 (2015) 286–297.
- (17) Z. Xiaoye, M. Yong, F. Fan, Z. Ying, H. Jun, Infrared and visible image fusion via saliency analysis and local edge-preserving multi-scale decomposition, Journal of the Optical Society of America. A, Optics, Image Science, and Vision 34 (8) (2017) 1400–1410.
- (18) A. M. Treisman, G. Gelade, A feature-integration theory of attention, Cognitive Psychology 12 (1) (1980) 97–136.
- (19) J. Hu, L. Shen, S. Albanie, G. Sun, E. Wu, Squeeze-and-excitation networks, IEEE Transactions on Pattern Analysis and Machine Intelligence PP (99) (2019) 1–1.
- (20) M. Jaderberg, K. Simonyan, A. Zisserman, K. Kavukcuoglu, Spatial transformer networks, Vol. 2015-, Neural information processing systems foundation, 2015, pp. 2017–2025.
- (21) X. Wang, R. Girshick, A. Gupta, K. He, Non-local neural networks, IEEE Computer Society, 2018, pp. 7794–7803.
- (22) X. Yan, S. Gilani, A. Mian, Unsupervised deep multi-focus image fusion, arXiv.org.
- (23) H. Li, X. J. Wu, Densefuse: A fusion approach to infrared and visible images, IEEE Transactions on Image Processing 28 (5) (2018) 2614–2623.
- (24) Zhou Wang, A. C. Bovik, H. R. Sheikh, E. P. Simoncelli, Image quality assessment: from error visibility to structural similarity, IEEE Transactions on Image Processing 13 (4) (2004) 600–612. doi:10.1109/TIP.2003.819861.
- (25) H. Li, X.-J. Wu, J. Kittler, Infrared and visible image fusion using a deep learning framework, 2018 24th International Conference on Pattern Recognition (ICPR)doi:10.1109/icpr.2018.8546006.
- (26) Y. Zhang, Y. Liu, P. Sun, H. Yan, X. Zhao, L. Zhang, Ifcnn: A general image fusion framework based on convolutional neural network, Information Fusion 54 (2020) 99–118.
- (27) K.-Y. Lin, G. Wang, Hallucinated-iqa: No-reference image quality assessment via adversarial learning, IEEE Computer Society, 2018, pp. 732–741.
- (28) J. Kim, S. Lee, Deep blind image quality assessment by employing fr-iqa, in: 2017 IEEE International Conference on Image Processing (ICIP), Vol. 2017-, IEEE, 2017, pp. 3180–3184.
- (29) E. K. Miller, J. D. Cohen, An integrative theory of prefrontal cortex function, Annual Review of Neuroscience 24 (1) (2001) 167–202, pMID: 11283309. doi:10.1146/annurev.neuro.24.1.167.
- (30) D. Liu, X. Gu, J. Zhu, X. Zhang, Z. Han, W. Yan, Q. Cheng, J. Hao, H. Fan, R. Hou, Z. Chen, Y. Chen, C. T. Li, Medial prefrontal activity during delay period contributes to learning of a working memory task, Science (New York, N.Y.) 346 (6208) (2014) 458–463.
- (31) A. Kendall, R. Cipolla, Multi-task learning using uncertainty to weigh losses for scene geometry and semantics.
- (32) H. Wang, Z. Chen, Y. Zhou, Hybrid coarse-fine classification for head pose estimation, arXiv.org.
- (33) L. Yu, C. Xun, J. Cheng, P. Hu, A medical image fusion method based on convolutional neural networks, in: International Conference on Information Fusion, 2017. doi:10.23919/ICIF.2017.8009769.
- (34) K. R. Prabhakar, V. S. Srikar, R. V. Babu, Deepfuse: A deep unsupervised approach for exposure fusion with extreme exposure image pairs, in: 2017 IEEE International Conference on Computer Vision (ICCV), Vol. 2017-, IEEE, 2017, pp. 4724–4732.
- (35) B. Ma, X. Ban, H. Huang, Y. Zhu, SESF-Fuse: an unsupervised deep model for multi-focus image fusion, arXiv.org.
- (36) P. J. Burt, E. H. Adelson, The laplacian pyramid as a compact image code, Readings in Computer Vision 31 (4) (1987) 671–679.
- (37) Y. Liu, S. Liu, Z. Wang, A general framework for image fusion based on multi-scale transform and sparse representation, Information Fusion 24 (2015) 147–164.
- (38) L. J. Chipman, T. M. Orr, L. N. Graham, Wavelets and image fusion, in: International Conference on Image Processing, 1995.
- (39) J. Li, F. Fang, K. Mei, G. Zhang, Multi-scale residual network for image super-resolution, Vol. 11212, Springer Verlag, 2018, pp. 527–542.
- (40) J. Hu, L. Shen, S. Albanie, G. Sun, E. Wu, Squeeze-and-excitation networks (2017). arXiv:1709.01507.
- (41) J. Johson, A. Alahi, L. Fei Fei, Perceptual losses for real-time style transfer and super-resolution, Vol. 9906, Springer, 2016.
- (42) T. Zhao, X. Wu, Pyramid feature attention network for saliency detection, arXiv.org.
- (43) G. Wang, C. Li, Y. Ma, A. Zheng, J. Tang, B. Luo, Rgb-t saliency detection benchmark: Dataset, baselines, analysis and a novel approach, Vol. 875, Springer Verlag, 2018, pp. 359–369.
- (44) J. Ma, Y. Ma, C. Li, Infrared and visible image fusion methods and applications: A survey, Information Fusion 45 (2019) 153 – 178. doi:https://doi.org/10.1016/j.inffus.2018.02.004.
- (45) Y. Liu, S. Liu, Z. Wang, Multi-focus image fusion with dense sift, Information Fusion 23 (C) (2015) 139–155.
- (46) F. Nencini, A. Garzelli, S. Baronti, L. Alparone, Remote sensing image fusion using the curvelet transform, Information Fusion 8 (2) (2007) 143–156.
- (47) S. Kumar, B. K., Image fusion based on pixel significance using cross bilateral filter, Signal Image & Video Processing 9 (5) (2015) 1193–1204.
- (48) Q. Zhang, Y. Fu, H. Li, J. Zou, Dictionary learning method for joint sparse representation-based image fusion, Optical Engineering 52 (5) (2013) 7006.
- (49) J. Ma, C. Chen, C. Li, J. Huang, Infrared and visible image fusion via gradient transfer and total variation minimization, Information Fusion 31 (C) (2016) 100–109.
- (50) A. Toet, Image fusion by a ratio of low-pass pyramid, Pattern Recognition Letters 9 (4) (1989) 245–253.
- (51) V. P. S. Naidu, Image fusion technique using multi-resolution singular value decomposition, Defence Science Journal 61 (5) (2011) 479–484.
- (52) S. Li, K. Xudong, J. Hu, Image fusion with guided filtering, IEEE Transactions on Image Processing 22 (7) (2013) 2864–2875.
- (53) K. Zuiderveld, Contrast limited adaptive histogram equalization, in: Graphics Gems IV, Academic Press Professional, Inc., San Diego, CA, USA, 1994, pp. 474–485.
- (54) N. D. Narvekar, L. J. Karam, A no-reference image blur metric based on the cumulative probability of blur detection (cpbd), IEEE Transactions on Image Processing 20 (9) (2011) 2678–2683.
- (55) R. Ferzli, L. Karam, A no-reference objective image sharpness metric based on the notion of just noticeable blur (jnb), IEEE Transactions on Image Processing 18 (4) (2009) 717–728.
- (56) Y. Han, Y. Cai, Y. Cao, X. Xu, A new image fusion performance metric based on visual information fidelity, Information Fusion 14 (2) (2013) 127–135.
- (57) G. Cui, H. Feng, Z. Xu, Q. Li, Y. Chen, Detail preserved fusion of visible and infrared images using regional saliency extraction and multi-scale image decomposition, Optics Communications 341 (341) (2015) 199–209.
- (58) G. Qu, D. Zhang, P. Yan, Information measure for performance of image fusion, Electronics Letters 38 (7) (2002) 313–315.