Attention-Guided Progressive Neural Texture Fusion for High Dynamic Range Image Restoration

High Dynamic Range (HDR) imaging via multi-exposure fusion is an important task for most modern imaging platforms. In spite of recent developments in both hardware and algorithm innovations, challenges remain over content association ambiguities caused by saturation, motion, and various artifacts introduced during multi-exposure fusion such as ghosting, noise, and blur. In this work, we propose an Attention-guided Progressive Neural Texture Fusion (APNT-Fusion) HDR restoration model which aims to address these issues within one framework. An efficient two-stream structure is proposed which separately focuses on texture feature transfer over saturated regions and multi-exposure tonal and texture feature fusion. A neural feature transfer mechanism is proposed which establishes spatial correspondence between different exposures based on multi-scale VGG features in the masked saturated HDR domain for discriminative contextual clues over the ambiguous image areas. A progressive texture blending module is designed to blend the encoded two-stream features in a multi-scale and progressive manner. In addition, we introduce several novel attention mechanisms, i.e., the motion attention module detects and suppresses the content discrepancies among the reference images; the saturation attention module facilitates differentiating the misalignment caused by saturation from those caused by motion; and the scale attention module ensures texture blending consistency between different coder/decoder scales. We carry out comprehensive qualitative and quantitative evaluations and ablation studies, which validate that these novel modules work coherently under the same framework and outperform state-of-the-art methods.



I Introduction

The intensity of light rays in natural scenes varies over a very wide range; under a common outdoor scenario, the luminance variation spans several orders of magnitude. After millions of years of evolution, the human iris and brain are able to constantly adapt and adjust their responses to such strong stimulus variations, and to perceive both the bright and dark contents of the scene. Most camera sensors, however, cover a much narrower dynamic range, which makes a single image capture prone to either over-exposed or contrast-constrained, noise-inflicted pixels.

To achieve High Dynamic Range (HDR) imaging, there are two practical strategies. The first strategy is to work in the radiance domain. By designing the Camera Response Functions (CRF), the sensor sensitivity for certain luminance ranges can be compressed so that a broader dynamic range can be covered [17]; however, this strategy sacrifices imaging quality for the targeted intensity ranges. Dedicated optical systems have been designed to capture HDR snapshots directly [24, 31, 28]. These systems are generally robust against camera and scene motion; however, they are too bulky and expensive to be accepted by the consumer market. In addition, with the development of semiconductor manufacturing technologies, pixel-level sensor structures can now be designed in which the sensing area under the same color filter unit is split into pixels with different exposure settings [3]. Though these sensor systems alleviate the alignment issues between the LDR pixels, image resolution is traded off for the higher dynamic range. Additionally, the differences in exposure settings cause further challenges (e.g., longer exposures introduce motion blur, while shorter ones are subject to strong sensor noise), which require advanced fusion algorithms to address.

The second strategy is to work in the image domain, i.e., via fusing a sequence of differently exposed Low Dynamic Range (LDR) captures [21]. The challenges for satisfactory fusion lie in the content association ambiguities caused by saturation and by the motion of both the camera and dynamic objects. Both active measures, such as flow-based pixel alignment [13], and preventive measures, such as attention masking [34] and patch-based decomposition [34], have been investigated to tackle these challenges. Active measures strive to align pixels displaced by camera and object motion; however, they cannot handle regions with correspondence ambiguities from occlusion, non-rigid transformation, and, specifically, saturation. These ambiguities result in warping artifacts caused by wrong correspondence predictions. Preventive measures passively exclude inconsistent textures from fusion. Attention to motion helps to avoid ghosting artifacts, but it also prevents useful information from being transferred from well-exposed references. Fusion artifacts such as halo [23] and blur also occur frequently.

In summary, the limitations and challenges of existing fusion methods fall into three aspects: first, how to differentiate misalignment ambiguities caused by saturation from those caused by motion, and subsequently adopt different strategies for the two, i.e., texture transfer and signal regularization; second, how to accurately locate reference information over the ambiguities caused by saturation–especially when the saturation area is large and overlapped with motion; and finally, how to fully explore the characteristic information from the different captures for general vision enhancement goals such as noise reduction and the avoidance of common fusion artifacts, e.g., halo and blur.

In this work, we propose an Attention-guided Progressive Neural Texture Fusion (APNT-Fusion) framework for HDR restoration which addresses the challenges of motion-induced ghosting artifact prevention and texture transfer over saturated regions efficiently within one framework. Both qualitative and quantitative evaluations validate the advantages of our method against existing solutions. The novelty and technical contributions of this work are summarized as follows:

  • we propose an efficient two-stream structure which separately focuses on texture feature transfer over saturated regions and fusion of motion suppressed multi-exposure features. A Progressive Texture Blending (PTB) module is designed to blend the encoded features in a multi-scale and progressive manner, and produces the final restoration results;

  • we propose a novel and efficient Neural Feature Transfer (NFT) mechanism which establishes spatial correspondence between different exposures based on multi-scale VGG features in the Masked Saturated HDR (MS-HDR) domain. This mechanism provides discriminative contextual clues over the ambiguous image areas–especially over large saturation areas overlapped with motion–and provides accurate texture reference;

  • we introduce several novel attention mechanisms, i.e., the Motion Attention module detects and suppresses the content discrepancies among the reference images; the Saturation Attention module facilitates differentiating the misalignment caused by saturation from that caused by motion–therefore encouraging texture transfer to regions with missing contents; and the Scale Attention module ensures texture blending consistency between different decoder scales. These attention mechanisms cooperate well under the same framework and greatly improve the restoration performance, as validated in our ablation studies.

The rest of the paper is organized as follows: Sec. II introduces related works, Sec. III explains the details of the proposed method, Sec. IV-A comprehensively evaluates the proposed model and compares with existing methods, ablation studies are carried out in Sec. IV-B, and Sec. V concludes the paper.

II Related Works

Suppose we have a sequence of three differently exposed images {I_s, I_m, I_l} of the same target scene, where the subscripts s, m, and l stand for short, medium, and long exposures, respectively. The most straightforward way to produce a fused HDR image is pixel-wise weighted summation:

H(p) = Σ_{k∈{s,m,l}} w_k(p) · g_k(I_k(p))    (1)

where w_k(p) is the weight related to each pixel’s intensity value and sensor noise characteristics [13], and g_k(·) stands for an operator that brings all images to the same exposure energy level. Eq. (1) assumes all pixels are perfectly aligned between the images, which is generally not true due to various factors such as camera motion and dynamic scene contents. Numerous methods have been proposed over recent years to alleviate the resulting ghosting artifacts, and these methods can be grouped into the following categories.
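As a concrete illustration of the weighted summation in Eq. (1), the following NumPy sketch merges linearized exposures with a simple intensity-based weight. The triangle weighting, the gamma value, and all function names are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def merge_ldr_weighted(images, exposure_times, gamma=2.2):
    """Naive weighted-sum HDR merge in the spirit of Eq. (1).

    `images`: list of LDR arrays in [0, 1]; `exposure_times`: matching
    exposure times.
    """
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        # Operator g_k(.): gamma-linearize and normalize by exposure time
        # so all captures sit at the same energy level.
        radiance = (img.astype(np.float64) ** gamma) / t
        # Triangle weight: trust mid-tones, distrust near-black/near-white.
        w = 1.0 - np.abs(2.0 * img.astype(np.float64) - 1.0)
        num += w * radiance
        den += w
    return num / np.maximum(den, 1e-8)
```

This sketch ignores exactly the problems the rest of the paper addresses: it assumes perfect pixel alignment and handles saturation only through the weighting term.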

II-A Pixel Rejection Methods

One direct way to reduce ghosting artifacts is to choose one image as the reference, detect motion areas between the reference and non-reference images, and exclude those pixels during fusion. Usually, the medium exposure image is chosen as the reference. The problem of ghost detection is similar to that of motion detection, with the added challenge that scene contents can be visually different under different exposure settings. To counter this challenge, gradient maps [36] and median threshold bitmaps [26] have been used for inconsistency detection. Efforts have also been made to optimize a correct ghost map with mathematical models [10]. Rank minimization techniques have been investigated by Lee et al. [15] to ensure high-quality fusion. By rejecting the ghosting pixels, however, these methods lose the valuable information in those areas. Ma et al. [22] proposed a structural patch decomposition approach which decomposes image patches into three components: strength, structure, and mean intensity. The three patch components are processed separately and then fused, with good ghost removal effect. Li et al. further enhanced this structural patch decomposition approach by reducing halo [18] and preserving edges [16].

II-B Content Association and Registration Methods

Another line of work aims at aligning the pixels before HDR fusion. Kang et al. [14] register the pixels between video frames using optical flow [19] and merge the associated pixels to reduce artifacts. Jinno and Okuda [12] estimate the pixel displacements, occlusion, and saturated regions with a Markov random field model. Oh et al. [25] simultaneously align LDR images and detect outliers that break the rank-1 structure of LDR images for robust HDR fusion. Precise association of pixels between instances with large motion is a challenging problem in itself, and alignment artifacts are difficult to avoid in a pixel-level framework.

Fig. 1: The system diagram of our proposed network. (a) and (b) give the details for the calculation of the Motion-Suppressed Reference Exposure Features and the Motion-Suppressed Saturation Clue Features, respectively. (c) gives the overall system structure with features from (a) and (b) as inputs. The detail for the Neural Texture Matching module can be found in Fig. 3.

II-C DNN-Based Methods

Deep Neural Networks show their advantages in a wide range of computational imaging and image restoration problems [2, 35]. Wu et al. [33] formulated HDR imaging as an image translation problem, in which missing contents caused by occlusion and over-/under-exposure are hallucinated. Eilertsen et al. [5] proposed to predict an HDR image from a single LDR input with an autoencoder structure. Endo et al. [6] achieved the same target by combining multiple intermediate LDR predictions from a single LDR using DNNs. For these methods, since details are added based on knowledge learned from the distributions of other images in the training dataset, the predictions might be incorrect for a specific image. Kalantari et al. [13] use DNNs to merge and refine the LDR images based on image tensors pre-aligned with optical flow. Besides the possible alignment error, this popular method is limited by its constrained mapping space. Yan et al. [34] proposed to guide the merging of LDR images via an attention model over the reference image. Deng et al. [4] proposed a deep coupled feedback network to achieve multi-exposure fusion and super-resolution simultaneously. Attention has proven to be an extremely useful tool for computer vision problems, boosting the robustness of a network by allowing the model to focus on only the relevant information. However, when attention maps are used to highlight reference content inconsistency, they suppress ghosting artifacts at the cost of preventing useful texture from being transferred to saturated regions.

In general, current state-of-the-art solutions perform satisfactorily in avoiding ghosts after LDR fusion, but their ability to transfer textures and colors to ambiguous regions with motion is limited. Fusion quality issues such as color fidelity and signal noisiness–which are the main challenges for mobile imaging platforms–have not been addressed systematically in one framework.

Fig. 2: Exposure domain transform from the LDR domain to the Saturated HDR (S-HDR) domain. Histograms of both captures have been plotted to highlight the dynamic range change; in the S-HDR domain, the histograms of the two captures show very similar distributions.

III Proposed Method

Given a multi-exposure LDR image sequence {I_s, I_m, I_l}, our model aims at restoring a well-exposed HDR image Ĥ whose contents are accurately aligned to the medium-exposed image I_m, with over-saturated pixels in I_m compensated by references from I_s, and under-exposed regions regularized by I_l:

Ĥ = f(I_s, I_m, I_l; θ),  Ĥ ∈ R^{H×W×C}    (2)

here θ is the set of model parameters to be learned; H and W indicate the spatial resolution, and C indicates the number of image channels, respectively.

The system diagram of our proposed Attention-guided Progressive Neural Texture Fusion (APNT-Fusion) HDR restoration framework is shown in Fig. 1(c). The system consists of three main sub-modules:

  • the Multi-Exposure Fusion (MEF) module fuses signals from different exposure levels and maps them to an optimal regularized signal subspace;

  • the Neural Feature Transfer (NFT) module establishes the spatial correspondence between different images based on encoded VGG features in the Masked Saturated HDR (MS-HDR) domain, which provides discriminative contextual clues over the missing contents; and,

  • the Progressive Texture Blending (PTB) module blends the encoded texture features to the main fusion stream in MEF in a multi-scale and progressive manner, and produces the final restoration results.

Throughout the system, we have incorporated several attention mechanisms to ensure the consistency of the fusion process, i.e., the Motion Attention modules, the Saturation Attention modules, and the Scale Attention modules. The details of these modules will be elaborated in the following subsections.

III-A The Multi-Exposure Fusion Module

The input LDR image sequence is first transformed to the HDR domain with gamma correction and energy normalization according to:

H_k = I_k^γ / t_k,  k ∈ {s, m, l}    (3)

The gamma correction process (I_k^γ) transforms the LDR images from a domain which is visually appealing to our eyes to the linear domain directly captured by the camera sensors [27]. Here t_k indicates the respective exposure time of I_k, and the normalization brings all LDR images to the same exposure energy level.
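The effect of this transform can be checked numerically: a static, unsaturated scene captured at two exposure times should map to the same HDR-domain values after Eq. (3). The gamma value of 2.2 below is a common convention and an assumption on our part, not a value stated in the text.

```python
import numpy as np

GAMMA = 2.2  # common convention; the paper's exact value is not stated here

def to_hdr_domain(ldr, t):
    """Eq. (3): gamma-linearize an LDR capture and normalize by exposure time."""
    return (ldr ** GAMMA) / t

# Simulate two captures of the same static, unsaturated scene.
radiance = np.linspace(0.01, 0.2, 5)
short = np.clip((radiance * 1.0) ** (1 / GAMMA), 0, 1)  # exposure time t = 1
long_ = np.clip((radiance * 4.0) ** (1 / GAMMA), 0, 1)  # exposure time t = 4
h_s = to_hdr_domain(short, 1.0)
h_l = to_hdr_domain(long_, 4.0)
# h_s and h_l now agree with each other and with the true radiance.
```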

The feature extraction module is applied over H_s, H_m, and H_l (with shared weights) to extract the visual features F_s, F_m, and F_l. To deal with the content discrepancy caused by camera motion and dynamic objects, the Motion Attention modules compare the extracted features F_s and F_l against F_m, and estimate the feature attention maps A_s and A_l. Any content misalignment in F_s and F_l with respect to F_m will be suppressed by these attention maps. As illustrated in Fig. 1(a), the Motion-Suppressed Reference Exposure Features can be formed and used as input to the MEF module by concatenating the features along the channel dimension:

F_MEF = [F_s ⊙ A_s, F_m, F_l ⊙ A_l]    (4)

Here ⊙ indicates point-wise multiplication.

The subsequent MEF module comprehensively explores the tonal profiles and signal correlations within the concatenated features. Specifically, a sequential concatenation of Channel Attention Blocks (CAB) [37] is deployed to explore the channel-wise feature correlations. This helps to fully exploit the characteristic information from the different captures and regularize the signal distribution to the desired subspace. The MEF module determines the tonal mapping profile, suppresses noise, and enhances image details (contrast, sharpness, etc.).
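The channel attention idea behind CAB-like blocks [37] can be sketched in squeeze-and-excitation style: pool each channel to a scalar, pass through a small bottleneck, and rescale the channels by the resulting sigmoid gate. The layer widths, weight shapes, and function names below are placeholders, not the paper's exact configuration.

```python
import numpy as np

def channel_attention(feat, w1, b1, w2, b2):
    """Squeeze-and-excitation style channel attention (sketch).

    feat: (C, H, W) feature map; w1/b1 reduce C -> C//r, w2/b2 expand back.
    """
    squeeze = feat.mean(axis=(1, 2))                   # global average pool: (C,)
    hidden = np.maximum(squeeze @ w1 + b1, 0.0)        # FC + ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(hidden @ w2 + b2)))   # FC + Sigmoid gate: (C,)
    return feat * gate[:, None, None]                  # rescale each channel
```

Because the gate lies strictly in (0, 1), each channel is attenuated in proportion to its learned importance, which is what lets a stack of such blocks re-weight exposure-specific information.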

III-B Progressive Neural Feature Transfer over Masked Saturated HDR Domain

Shorter exposures in the capture sequence reveal information that is missing in longer ones. The NFT module aims to transfer this missing information to the medium-exposed image with accurate alignment under adversarial conditions such as camera motion and dynamic contents. The alignment process is challenging due to insufficient contextual clues, especially for larger saturation areas. Neural features provide powerful descriptions of signal correlations across multiple scales and imaging conditions, which have proven efficient in cross-reference correspondence matching [38]. We propose a multi-scale Neural Texture Matching (NTM) mechanism to search for content correspondence in the Masked Saturated HDR (MS-HDR) domain.

Fig. 3: Multi-scale Progressive Neural Texture Matching (NTM) based on VGG features.

III-B1 Masked Domain Transform

In order to promote signal similarity for efficient correspondence matching against saturation and motion (both camera and content motion), we transform the short-exposed HDR image H_s into the artificial MS-HDR domain by clipping it at the saturation energy level of the medium exposure:

H̃_s = min(H_s, σ_m)

Here σ_m is the saturation energy level for H_m; the clipping normalizes the saturation energy level from H_m to H_s, making the well-exposed pixels in H_s artificially saturated, similar to those in H_m. It is assumed that, after the transform, the saturated regions in H̃_s and H_m will be identical, irrespective of foreground or camera motion; this assumption only fails for saturated background pixels (we refer to any region throughout the capture sequence as background, as long as occlusion is present). As illustrated in Fig. 2, the histograms of H̃_s and H_m become almost identical. The transform increases the similarity between the different exposures by actively masking out saturated textures, and the saturation ambiguity is expected to be resolved by associating the surrounding textures.
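Our reading of this transform — clipping the short-exposure HDR image at the longer exposure's saturation energy level so that regions saturated there become flat here too — can be sketched as follows. The function name, the plain clipping formulation, and `sat_level` are assumptions, not the paper's verbatim definition.

```python
import numpy as np

def to_ms_hdr(h_s, sat_level):
    """Masked Saturated HDR transform (sketch): clip the short-exposure
    HDR image at the medium exposure's saturation energy level, so any
    texture above that level is actively masked out."""
    return np.minimum(h_s, sat_level)
```

After this clipping, a bright highlight that is saturated in the medium exposure becomes equally featureless in the transformed short exposure, which is what makes the two histograms in Fig. 2 nearly coincide.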

III-B2 Progressive Neural Texture Matching

Based on H̃_s and H_m, we match correspondences within a multi-scale neural feature pyramid. The rationale for using a multi-scale framework is to involve contextual information outside of the saturated regions so as to anchor correspondence from a more global perspective. Same-sized patches at different scale levels cover different content areas, which provides more comprehensive clues for robust feature matching.

As illustrated in Fig. 3, we denote the VGG feature extractor as Φ_l, which extracts multi-scale features (scale indicated by the subscript l) from H̃_s and H_m. We use P_i(·) to denote sampling the i-th spatial patch from the VGG feature maps. The inner product is used to measure the feature similarity between the i-th MS-HDR patch and the j-th HDR patch:

s_l(i, j) = ⟨P_i(Φ_l(H̃_s)), P_j(Φ_l(H_m))⟩    (5)

The similarity map computation can be efficiently implemented as a convolution over Φ_l(H_m) with P_i(Φ_l(H̃_s)) as the convolution kernel:

S_l(i) = Φ_l(H_m) * P_i(Φ_l(H̃_s))    (6)

where * denotes the convolution operation.
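Eq. (6) says that the similarity map for one query patch is a sliding inner product, i.e., a correlation of the reference feature map with the patch as kernel. A direct (unoptimized) NumPy sketch of that computation, with illustrative names:

```python
import numpy as np

def patch_similarity_map(ref_feat, patch):
    """Inner-product similarity of one query patch against every spatial
    location of a reference feature map, via sliding correlation.

    ref_feat: (C, H, W); patch: (C, k, k); returns (H-k+1, W-k+1).
    """
    c, h, w = ref_feat.shape
    k = patch.shape[-1]
    out = np.empty((h - k + 1, w - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Inner product of the patch with the co-located window.
            out[i, j] = np.sum(ref_feat[:, i:i + k, j:j + k] * patch)
    return out
```

In practice this loop is replaced by a batched convolution on the GPU; the arithmetic is identical.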

In order to promote cross-scale feature matching consistency and to reduce computation complexity, we adopt a progressive feature matching mechanism which restricts the calculation of the similarity map to a local window. As illustrated in Fig. 3, the progressive matching starts at the coarsest scale l = 3. We use S_3(i, N(p)) to denote the local similarity map within S_3(i) over a neighborhood N(p) centered around pixel p. The best matched location within N(p) for the i-th target patch can be found via:

j*_3(i) = argmax_{j ∈ N(p)} S_3(i, j)    (7)

For the next finer scale l = 2, features will be matched within the local window centered around the pixel directly propagated from the lower-level location j*_3(i). The best match can be found via:

j*_2(i) = argmax_{j ∈ N(j*_3(i))} S_2(i, j)    (8)

Similarly, a best match j*_1(i) can be found for the finest scale l = 1. In the end, a tuple of best-match locations {j*_3(i), j*_2(i), j*_1(i)} at the different VGG feature scales will be estimated for each target patch location.
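The coarse-to-fine search of Eqs. (7)-(8) can be sketched as follows: take the windowed argmax at the coarsest similarity map, then re-center the window at the propagated location on each finer map. The ×2 upsampling factor between scales and the window radius are assumptions for illustration.

```python
import numpy as np

def progressive_match(sim_maps, start, radius=1):
    """Coarse-to-fine windowed argmax matching (sketch).

    sim_maps: similarity maps ordered coarse to fine; start: (y, x) window
    center at the coarsest scale. Returns the best match per scale.
    """
    matches = []
    cy, cx = start
    for level, sim in enumerate(sim_maps):
        if level > 0:
            cy, cx = 2 * cy, 2 * cx  # propagate to the next finer scale
        # Clip the local search window to the map bounds.
        y0, y1 = max(cy - radius, 0), min(cy + radius + 1, sim.shape[0])
        x0, x1 = max(cx - radius, 0), min(cx + radius + 1, sim.shape[1])
        win = sim[y0:y1, x0:x1]
        dy, dx = np.unravel_index(win.argmax(), win.shape)
        cy, cx = int(y0 + dy), int(x0 + dx)
        matches.append((cy, cx))
    return matches
```

Restricting each argmax to a small window is what keeps the cost linear in image size while still letting the coarse scale anchor the search globally.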

Fig. 4: Structural details of the feature encoder and decoder modules. The encoded features are from the NFT module, and the decoded features will be fused with features in the MEF module. The structural detail of the Channel Attention Block (CAB) is given in Fig. 1.

III-B3 VGG-Guided Neural Feature Transfer

The NFT module swaps the feature maps extracted by the encoder to compensate for the missing contents caused by saturation. As illustrated in Fig. 1(b), the input to the feature encoder is the Motion-Suppressed Saturation Clue Features, which are formed by:

(9)

where Sgm denotes the Sigmoid function, and the saturation attention is predicted by the Saturation Attention module based on the binary saturation mask of H̃_s in the MS-HDR domain. The saturation attention helps to differentiate saturation from motion, and thus encourages useful texture information to be transferred to regions with missing contents.

The structural details of the encoder are shown in Fig. 4. Similar to VGG, it extracts visual features at three different scales; each scale consists of two consecutive CAB blocks and a bilinear downsampler that reduces the feature spatial resolution by a factor of two.

Note that while the VGG features are used for correspondence establishment as specified in Eqs. (5) to (8), the learned encoder features are used for the actual feature transfer. Based on the matching relationships, each target patch is replaced by the corresponding matched encoder features. These swapped patches finally form the texture-swapped encoder features.

Remark. Using the VGG features as the matching guide proves to be important for identifying discriminative clues for robust matching against the ambiguities caused by saturation. However, by actually swapping the learned encoder features, the network has a more consistent gradient flow for efficient feature learning and texture fusion. This will be validated in the ablation study in Sec. IV-B.
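The transfer step itself reduces to pasting matched patches between feature maps. A minimal sketch, assuming non-overlapping patches and a dictionary of matched top-left corners (both simplifications of ours, not the paper's exact bookkeeping):

```python
import numpy as np

def swap_features(enc_short, enc_med, matches, k):
    """Feature-transfer sketch: for each target patch location in the
    medium-exposure encoder features, paste in the matched k-by-k patch
    from the short-exposure encoder features.

    matches: {(target_y, target_x): (source_y, source_x)} top-left corners.
    """
    out = enc_med.copy()
    for (ty, tx), (sy, sx) in matches.items():
        out[:, ty:ty + k, tx:tx + k] = enc_short[:, sy:sy + k, sx:sx + k]
    return out
```

In the full model the matching indices come from the VGG-guided search, while the swapped values come from the learned encoder, which is exactly the separation the remark above argues for.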

III-C Progressive Texture Blending

The decoder module takes the texture-swapped encoder features as input and outputs the decoder features. The structure of the decoder is illustrated in Fig. 4. It has a similar structure to the encoder, with skip connections from the encoder at each scale.

To efficiently blend the decoder features with the main MEF stream features in a tonally and contextually consistent manner, we introduce a progressive blending scheme where consistency is enforced via the Scale Attention modules between different decoder scales. Each Scale Attention module is made up of several fully convolutional layers with a Sigmoid layer at the end, aiming at enforcing cross-scale consistency between different feature scales. For a given scale, the scale attention map will be estimated via:

(10)

where the superscript operator denotes scaling up the spatial resolution via transposed convolution, and the remaining terms are the model parameters to be learned. Similarly, for the next finer scale, the attention map will be estimated via:

(11)

For the coarsest scale, the Scale Attention map is directly set as the medium-exposed image’s saturation attention, which is predicted by the Saturation Attention module based on the binary saturation map of the medium exposure:

(12)

The predicted attention maps will be multiplied with the features from the corresponding scales and fused into the main fusion branch:

(13)

The final output of the APNT-Fusion model is calculated as a residual and compensated to the medium capture, modulated with learned fusion weights:

(14)

Here, the Fusion Re-weighting module, which also consists of several fully convolutional layers with a Sigmoid layer at the end, predicts the final fusion weights between the compensated residual and the medium capture; these weights depend on the Saturation Attention. Note that the fusion weights are no longer binary but an optimized fusion ratio.

III-D Training Loss

We focus on the visual quality of the fused HDR images after tone mapping; therefore, we choose to train the network in the tone-mapped domain rather than the linear HDR domain. Given an HDR image H in the linear HDR domain, we compress the range of the image using the μ-law [13]:

T(H) = log(1 + μH) / log(1 + μ)    (15)

where μ is a parameter defining the amount of compression, and T(H) denotes the tone-mapped image. In this work, we always keep H in the range [0, 1] by adding a Sigmoid layer at the end of the model, with μ set to 5000. The tone-mapper in Eq. (15) is differentiable, which makes it well suited for training the network.

We train the network by minimizing a norm-based distance between the tone-mapped estimation T(Ĥ) and the tone-mapped ground truth HDR image:

(16)
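The μ-law of Eq. (15) and the tone-mapped training objective can be sketched as below. The ℓ1 penalty in the loss is our assumption, since the text does not fully specify the norm in Eq. (16).

```python
import numpy as np

MU = 5000.0  # compression parameter, as stated in the text

def mu_law(h):
    """Eq. (15): differentiable range compression of a linear HDR image
    h in [0, 1]; maps 0 -> 0 and 1 -> 1."""
    return np.log1p(MU * h) / np.log1p(MU)

def tonemapped_l1_loss(h_pred, h_gt):
    """Tone-mapped training loss in the spirit of Eq. (16), assuming an
    l1 penalty: errors are weighted as they appear after tone mapping."""
    return np.abs(mu_law(h_pred) - mu_law(h_gt)).mean()
```

Because mu_law expands the dark tones, the same absolute error costs more in shadows than in highlights, which matches how the result is ultimately viewed.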

III-E Implementation Details

We adopt a pre-trained VGG19 [30] for feature extraction, which is well known for the efficiency of its texture representation [8, 7]. Feature layers relu1_1, relu2_1, and relu3_1 are used as the texture encoder. The Adam optimizer [1] was used for training, with the batch size set to 1. We crop images into patches of size 256×256 for training. Network weights are initialized using the Xavier method [9].

Fig. 5: Visual Comparison for HDR restoration over dynamic scenes between Kalantari17’ [13], Yan19’ [34], and the proposed APNT-Fusion framework.
Fig. 6: Visual Comparison for data from the MEF-Opt database between different methods: (a) Mertens09’ [23], (b) Gu12’ [11], (c) Shen14’ [29], (d) Li21’ [16], and (e) our proposed APNT-Fusion framework.
Fig. 7: Comparison between Kalantari17’ [13], Yan19’ [34] and APNT-Fusion on the degradation of HDR restoration performance when a translation of d pixels is applied between the input LDR images.
PSNR-μ PSNR-L SSIM-μ SSIM-L
Wu et al. [33] 41.65 40.88 0.9860 0.9858
Kalantari et al. [13] 42.67 41.22 0.9877 0.9845
Yan et al. [34] 43.61 41.13 0.9922 0.9896
APNT-Fusion 43.96 41.69 0.9957 0.9914
TABLE I: Quantitative comparison of our proposed system against several state-of-the-art methods. The notations -μ and -L refer to PSNR (in dB)/SSIM values calculated in the tone-mapped (using Eq. (15)) and linear domains, respectively. The best and second-best results are highlighted in red and blue, respectively.

IV Experiments

We comprehensively evaluate our model both quantitatively and qualitatively on benchmark HDR image datasets and compare our proposed method with state-of-the-art HDR restoration methods. The functionality of each component is evaluated for its respective contribution in the ablation studies.

IV-A Model Evaluation and Comparison

IV-A1 The DeepHDR Dataset

Proposed by Kalantari et al. [13], the DeepHDR dataset includes bracketed exposures with dynamic contents: 74 groups for training (each group contains 3 different exposures) and 15 groups for testing. All images are of resolution 1000×1500 pixels. We evaluate the different methods based on the following four metrics. PSNR-L: the Peak Signal-to-Noise Ratio between the ground truth HDR image and the direct output of the network in the linear HDR domain:

PSNR-L = 10 · log10(1 / MSE)    (17)

where MSE represents the Mean-Squared-Error over all pixels between the estimated and ground truth images. PSNR-μ: the PSNR value between the ground truth and estimated HDR images in the tone-mapped domain based on the μ-law in Eq. (15). SSIM-L and SSIM-μ: the Structural Similarity Index [32] between the ground truth and estimated HDR images in the linear and tone-mapped HDR domains, respectively.
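For completeness, the PSNR of Eq. (17) for images normalized to [0, 1] can be computed as follows; the `peak` parameter is an assumption that makes the formula reusable for other ranges.

```python
import numpy as np

def psnr(pred, gt, peak=1.0):
    """Eq. (17): PSNR in dB; peak = 1.0 assumes images normalized to [0, 1]."""
    mse = np.mean((pred - gt) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

PSNR-μ is obtained by applying the same formula to the μ-law tone-mapped images instead of the linear ones.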

The quantitative results for the proposed APNT-Fusion are shown in TABLE I: it achieves an average PSNR of 43.96 dB across the RGB channels for all 15 testing images in the tone-mapped domain. We also compare with state-of-the-art methods, i.e., the two-stage flow-based method [13] (denoted as Kalantari17’), the deep, fully convolutional restoration framework [33] (denoted as Wu18’), and the attention-guided HDR framework [34] (denoted as Yan19’). As can be seen, APNT-Fusion achieves the highest PSNR and SSIM values in both the linear and tone-mapped domains. Although the quantitative advantage of APNT-Fusion is only around 0.35 dB against Yan19’ in PSNR-μ, we believe this is caused by the relatively small saturation areas in the testing dataset; in the following visual comparison, the advantage of APNT-Fusion over all other methods is much more obvious.

We carry out a qualitative comparison, and the results are shown in Fig. 5. We focus on the most challenging areas, i.e., the restoration of saturated regions, and the details are highlighted in the zoom-in boxes. As can be seen, the outputs of Kalantari17’ introduce undesirable artifacts in ambiguous motion regions. The introduction of advanced optical flow regularization solves the ambiguity issues associated with texture-less saturated regions to some extent, but large areas of saturated pixels remain in the output image, as shown in Fig. 5(b) and (d). In addition, obvious distortions can be observed in Fig. 5(c)-(f). With the VGG-guided matching mechanism, APNT-Fusion estimates correspondences more accurately for ambiguous regions, especially the challenging large saturated areas.

The attention-based network Yan19’ handles motion boundaries much better than Kalantari17’. Nevertheless, its disadvantage is also obvious: while the attention mask suppresses pixel discrepancies between different reference images, which reduces ghosting artifacts, it also suppresses the transfer of useful information to the saturated pixels. Such artifacts are especially obvious over the silhouette between the building and the bright sky in Fig. 5(a), (b) and (c). The boundaries are faded, since no mechanism has been designed to distinguish between saturation and motion. Due to the introduction of motion attention and the multi-scale progressive fusion mechanism, APNT-Fusion shows much better restoration results in transferring texture to saturated regions and in preserving well-exposed structures.

Robustness against camera motion. We carry out experiments on the robustness of each method when camera motion is artificially imposed. We impose horizontal and vertical translations of d pixels on I_s and I_l with respect to I_m over all sequences of the testing dataset. The HDR restoration results are shown in Fig. 7, with PSNR-μ values on the vertical axis and d on the horizontal axis. As can be seen, the performance of Yan19’ deteriorates quickly, as no pixel association mechanism is employed in that framework: the misalignment causes the motion attention to suppress most of the valuable information, leading to fast performance decay. Although optical flow is employed in Kalantari17’, we can still observe a sharper performance decline as d becomes larger compared with APNT-Fusion. This experiment validates APNT-Fusion’s robustness against motion due to its multi-scale neural feature matching mechanism.

Fig. 8: Visual Comparison for dynamic contents between (a) Li21’ [16] and (b) the proposed APNT-Fusion.
Variant             PSNR              SSIM
w/o MS-HDR          43.11 (-0.85)     0.9869 (-0.0088)
w/o NFT             42.37 (-1.59)     0.9831 (-0.0126)
w/o VGG-            43.68 (-0.28)     0.9929 (-0.0028)
w/o VGG w           43.57 (-0.39)     0.9927 (-0.0030)
w/o Motion Att.     41.73 (-2.23)     0.9814 (-0.0143)
w/o Scale Att.      43.35 (-0.61)     0.9934 (-0.0023)
APNT-Fusion         43.96             0.9957
TABLE II: Quantitative ablation study of our proposed APNT-Fusion model against several variant networks. The most significant factors are highlighted in red and blue, respectively.

IV-A2 The MEF-Opt Database

Proposed by Ma et al. [20], the MEF-Opt database contains 32 sets of multi-exposure image sequences, most of which are static scenes without well-exposed HDR ground truths for direct quantitative evaluation. For visual comparison, the results of different static multi-exposure fusion methods are shown in Fig. 6. As can be seen, the proposed APNT-Fusion framework generally produces better fusion outcomes than the other state-of-the-art HDR fusion methods. Our results show clearer boundaries between bright and dark regions; the halo effect is much better suppressed thanks to the deep regularization of the MEF module; and textures are well fused into the over-exposed regions, as highlighted in the zoom-in boxes.

We show the visual comparison for scenarios with dynamic objects between APNT-Fusion and the method proposed by Li et al. [16] (denoted as Li21’) in Fig. 8. As can be seen, APNT-Fusion still consistently shows advantages in suppressing halo effects and restoring details over regions with both saturation and motion.

Note that the MEF-Opt dataset was built for multi-exposure image fusion in the image domain (with 8-bit unsigned integers as the data format), and no ground-truth HDR images in the radiance domain are available for direct quantitative evaluation. Although metrics such as the MEF-SSIM score [20] have been proposed as quantitative measures of fusion quality, it is unfair to use such metrics to compare methods that work in different domains, as the domain shift affects MEF-SSIM scores significantly without truthfully reflecting the visual reconstruction quality. Therefore, we do not report MEF-SSIM scores in this experiment.

IV-B Ablation Study

To comprehensively evaluate the separate modules of our framework, we carry out the following ablation studies. Specifically, we independently evaluate the contributions of the texture transfer module and the attention fusion modules. Note that all the variant networks are independently trained from scratch with the same training data and training settings as the complete APNT-Fusion model. The results shown in TABLE II are testing outcomes on the 15 testing images from the DeepHDR [13] dataset.

IV-B1 Contribution of the Neural Feature Transfer Module

To evaluate the contribution of the NFT module, the following two network derivatives are designed for performance analysis:

  • w/o MS-HDR: The VGG features are no longer matched in the MS-HDR domain; instead, they are matched directly in the HDR domain. As shown in TABLE II, transforming from the HDR domain to the MS-HDR domain during VGG correspondence matching brings a performance advantage of around 0.85 dB. This supports our claim that more accurate matching can be achieved in the MS-HDR domain.

  • w/o NFT: No neural feature transfer is implemented; the encoded features are directly used for Progressive Texture Blending. When the entire neural transfer module is removed, the performance drops by 1.59 dB, which signifies the contribution of the Neural Feature Transfer module.

Fig. 9: Visual Comparison for ablation study over the full model, w/o scale attention, and w/o motion attention. Results from Kalantari17’ [13], and Li21’ [16] are also shown for comparison.

IV-B2 Contribution of the VGG Feature Matching Module

In our framework, we search for neural feature correspondence based on multi-scale VGG neural features; based on the matching outcomes, the actual neural features are subsequently swapped in the encoded feature space. We carry out an ablation study of this mechanism with the following two setups:

  • w/o VGG-: in this setup, only the original scale of the VGG features relu1_1 is used for correspondence matching. The results in TABLE II show that a 0.28 dB advantage is achieved by the multi-scale mechanism. Such a multi-scale scheme is useful in resolving ambiguities over larger saturated regions.

  • w/o VGG w: in this setup, VGG features are not used for matching; instead, we rely directly on the learned encoder features for both feature matching and swapping. A performance drop of 0.39 dB is observed, which supports our claim that VGG features provide more discriminative clues for establishing correspondence against ambiguities of various sorts.
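To make the correspondence-matching mechanism concrete, here is a minimal, hypothetical sketch of brute-force patch matching between two feature maps using normalized cross-correlation, in the spirit of neural texture transfer [38]. The feature shapes, the patch size, and the function names are illustrative assumptions rather than the paper's actual implementation.

```python
import numpy as np

def best_match(query_feat, ref_feat, patch=3):
    """Brute-force correspondence search between two feature maps of shape
    (C, H, W). Each query patch is matched to the reference patch with the
    highest normalized cross-correlation (cosine similarity). Returns a flat
    array of best-matching reference-patch indices, one per query patch."""
    def unfold(f):
        C, H, W = f.shape
        patches = []
        for i in range(H - patch + 1):
            for j in range(W - patch + 1):
                p = f[:, i:i + patch, j:j + patch].ravel()
                patches.append(p / (np.linalg.norm(p) + 1e-8))  # L2-normalize
        return np.stack(patches)  # (num_patches, C * patch * patch)

    q = unfold(query_feat)
    r = unfold(ref_feat)
    sim = q @ r.T                 # cosine similarity of every patch pair
    return sim.argmax(axis=1)     # index of the best reference patch
```

In the full pipeline, the indices returned here would drive the swap of encoder features from the reference exposure into the saturated regions of the query; a multi-scale variant would repeat this at several VGG scales and fuse the matches.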

IV-B3 Contribution of the Attention Fusion Network

  • w/o Motion Att.: in this setup, we test the contribution of the motion attention module by setting all elements in the motion attention maps to 1. This means the features from all exposures are directly concatenated and fed to the MEF module for fusion. A drop of around 2.23 dB is observed when the motion attention module is absent, which confirms the contribution of this mechanism in preventing ghosting artifacts.

  • w/o Scale Att.: we test the contribution of the scale attention module by setting all elements in the scale attention maps to 1, so that no cross-scale information is incorporated. In TABLE II, a drop of 0.61 dB is observed when the scale attention module is removed. This validates the effectiveness of this module in preserving consistency when progressively blending transferred textures into the multi-exposure fusion stream.
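Both attention ablations above amount to replacing a learned gating map with an all-ones map. A minimal sketch, assuming element-wise gating followed by channel-wise concatenation (the actual network's fusion is more elaborate; the names here are hypothetical):

```python
import numpy as np

def attention_fuse(feat_ref, feat_aux, att_map, ablate=False):
    """Gate the auxiliary-exposure features with an attention map before
    concatenating them with the reference features. Setting the map to all
    ones (ablate=True) reproduces the 'w/o ... Att.' ablation variants."""
    if ablate:
        att_map = np.ones_like(att_map)   # ablation: no suppression at all
    gated = feat_aux * att_map            # suppress misaligned/unreliable content
    return np.concatenate([feat_ref, gated], axis=0)  # channel-wise concat
```

With a well-trained map, misaligned auxiliary content is driven toward zero before fusion; with the all-ones map, every pixel of the auxiliary exposures reaches the fusion module unchanged, which is why ghosting reappears in the ablated variants.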

Visual comparisons for the ablation studies on the attention modules are shown in Fig. 9. As can be seen, without the scale attention modules, larger saturated regions show inconsistent texture fusion; with cross-scale consistency enforced, the texture transfer is much more reliable. In addition, without the motion attention modules, content misalignment causes obvious distortions after exposure fusion. We have also included Kalantari17’ [13] and Li21’ [16] in Fig. 9 for visual comparison. It is worth mentioning that obvious color distortions can be observed in the results of Li21’; this is because the bright region has been misclassified as a motion area, over which histogram equalization is applied to restore contrast, causing the unpleasant artifacts.

Through the ablation studies, we have validated the important roles these novel modules play in the APNT-Fusion framework.

V Conclusion

In this work, we have proposed an Attention-guided Progressive Neural Texture Fusion (APNT-Fusion) HDR restoration framework, which efficiently addresses both the prevention of motion-induced ghosting artifacts and texture transfer over saturated regions within the same framework. A multi-scale Neural Feature Transfer module searches for content correspondence via the masked saturated transform, which actively masks out saturated textures and associates surrounding textures to resolve ambiguity. Transferred neural features are then combined to predict the missing contents of saturated regions in a multi-scale progressive manner, with novel attention mechanisms enforcing cross-scale tonal and texture consistency. Both qualitative and quantitative evaluations validate the advantage of our method over state-of-the-art solutions.

References

  • [1] D. P. Kingma and J. Ba (2015) Adam: a method for stochastic optimization. In Proceedings of the International Conference on Learning Representations. Cited by: §III-E.
  • [2] J. Chen, J. Hou, and L. Chau (2018-Sep.) Light field denoising via anisotropic parallax analysis in a cnn framework. IEEE Signal Processing Letters 25 (9), pp. 1403–1407. External Links: ISSN 1558-2361 Cited by: §II-C.
  • [3] Color CMOS 16-megapixel image sensor. Note: https://www.ovt.com/sensors/OV16885-4C. Accessed: 2021-07-13. Cited by: §I.
  • [4] X. Deng, Y. Zhang, M. Xu, S. Gu, and Y. Duan (2021) Deep coupled feedback network for joint exposure fusion and image super-resolution. IEEE Transactions on Image Processing 30, pp. 3098–3112. Cited by: §II-C.
  • [5] G. Eilertsen, J. Kronander, G. Denes, R. K. Mantiuk, and J. Unger (2017-11) HDR image reconstruction from a single exposure using deep CNNs. ACM Transactions on Graphics 36 (6). Cited by: §II-C.
  • [6] Y. Endo, Y. Kanamori, and J. Mitani (2017-11) Deep reverse tone mapping. ACM Transactions on Graphics 36 (6), pp. 177–1. Cited by: §II-C.
  • [7] L. A. Gatys, A. S. Ecker, and M. Bethge (2016) Image style transfer using convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2414–2423. Cited by: §III-E.
  • [8] L. Gatys, A. S. Ecker, and M. Bethge (2015) Texture synthesis using convolutional neural networks. In Advances in neural information processing systems, pp. 262–270. Cited by: §III-E.
  • [9] X. Glorot and Y. Bengio (2010) Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the International Conference on Artificial Intelligence and Statistics, pp. 249–256. Cited by: §III-E.
  • [10] M. Granados, K. I. Kim, J. Tompkin, and C. Theobalt (2013-11) Automatic noise modeling for ghost-free HDR reconstruction. ACM Transactions on Graphics 32 (6). External Links: ISSN 0730-0301 Cited by: §II-A.
  • [11] B. Gu, W. Li, J. Wong, M. Zhu, and M. Wang (2012) Gradient field multi-exposure images fusion for high dynamic range image visualization. Journal of Visual Communication and Image Representation 23 (4), pp. 604–610. Cited by: Fig. 6.
  • [12] T. Jinno and M. Okuda (2008-10) Motion blur free HDR image acquisition using multiple exposures. In Proceedings of IEEE International Conference on Image Processing, Vol. , pp. 1304–1307. External Links: ISSN 2381-8549 Cited by: §II-B.
  • [13] N. K. Kalantari and R. Ramamoorthi (2017-07) Deep high dynamic range imaging of dynamic scenes. ACM Transactions on Graphics 36 (4), pp. 144–1. Cited by: §I, §II-C, §II, Fig. 5, Fig. 7, §III-D, TABLE I, Fig. 9, §IV-A1, §IV-A1, §IV-B3, §IV-B.
  • [14] S. B. Kang, M. Uyttendaele, S. Winder, and R. Szeliski (2003-07) High dynamic range video. ACM Transactions on Graphics 22 (3), pp. 319–325. External Links: ISSN 0730-0301 Cited by: §II-B.
  • [15] C. Lee, Y. Li, and V. Monga (2014-Sep.) Ghost-free high dynamic range imaging via rank minimization. IEEE Signal Processing Letters 21 (9), pp. 1045–1049. External Links: ISSN 1558-2361 Cited by: §II-A.
  • [16] H. Li, T. N. Chan, X. Qi, and W. Xie (2021) Detail-preserving multi-exposure fusion with edge-preserving structural patch decomposition. IEEE Transactions on Circuits and Systems for Video Technology (), pp. 1–1. Cited by: §II-A, Fig. 6, Fig. 8, Fig. 9, §IV-A2, §IV-B3.
  • [17] H. Li, X. Jia, and L. Zhang (2018) Clustering based content and color adaptive tone mapping. Computer Vision and Image Understanding 168, pp. 37–49. Cited by: §I.
  • [18] H. Li, K. Ma, H. Yong, and L. Zhang (2020) Fast multi-scale structural patch decomposition for multi-exposure image fusion. IEEE Transactions on Image Processing 29, pp. 5805–5816. Cited by: §II-A.
  • [19] B. D. Lucas and T. Kanade (1981-08) An iterative image registration technique with an application to stereo vision. In Proceedings of International Joint Conference on Artificial Intelligence, pp. 674–679. Cited by: §II-B.
  • [20] K. Ma, Z. Duanmu, H. Yeganeh, and Z. Wang (2017) Multi-exposure image fusion by optimizing a structural similarity index. IEEE Transactions on Computational Imaging 4 (1), pp. 60–72. Cited by: §IV-A2, §IV-A2.
  • [21] K. Ma, Z. Duanmu, H. Zhu, Y. Fang, and Z. Wang (2019) Deep guided learning for fast multi-exposure image fusion. IEEE Transactions on Image Processing 29, pp. 2808–2819. Cited by: §I.
  • [22] K. Ma, H. Li, H. Yong, Z. Wang, D. Meng, and L. Zhang (2017) Robust multi-exposure image fusion: a structural patch decomposition approach. IEEE Transactions on Image Processing 26 (5), pp. 2519–2532. Cited by: §II-A.
  • [23] T. Mertens, J. Kautz, and F. Van Reeth (2009) Exposure fusion: a simple and practical alternative to high dynamic range photography. In Computer graphics forum, Vol. 28, pp. 161–171. Cited by: §I, Fig. 6.
  • [24] S. K. Nayar and T. Mitsunaga (2000) High dynamic range imaging: spatially varying pixel exposures. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Vol. 1, pp. 472–479. Cited by: §I.
  • [25] T. Oh, J. Lee, Y. Tai, and I. S. Kweon (2015-06) Robust high dynamic range imaging by rank minimization. IEEE Transactions on Pattern Analysis and Machine Intelligence 37 (6), pp. 1219–1232. External Links: ISSN 1939-3539 Cited by: §II-B.
  • [26] F. Pece and J. Kautz (2010-11) Bitmap movement detection: hdr for dynamic scenes. In Proceedings of Conference on Visual Media Production, Vol. , pp. 1–8. External Links: ISSN null Cited by: §II-A.
  • [27] P. Sen, N. K. Kalantari, M. Yaesoubi, S. Darabi, D. B. Goldman, and E. Shechtman (2012) Robust patch-based HDR reconstruction of dynamic scenes. ACM Transactions on Graphics 31 (6), pp. 203–1. Cited by: §III-A.
  • [28] A. Serrano, F. Heide, D. Gutierrez, G. Wetzstein, and B. Masia (2016) Convolutional sparse coding for high dynamic range imaging. In Proceedings of the Computer Graphics Forum, Vol. 35, pp. 153–163. Cited by: §I.
  • [29] J. Shen, Y. Zhao, S. Yan, X. Li, et al. (2014) Exposure fusion using boosting laplacian pyramid. IEEE Transactions on Cybernetics 44 (9), pp. 1579–1590. Cited by: Fig. 6.
  • [30] K. Simonyan and A. Zisserman (2015) Very deep convolutional networks for large-scale image recognition. In Proceedings of the International Conference on Learning Representations, Cited by: §III-E.
  • [31] J. Tumblin, A. Agrawal, and R. Raskar (2005) Why i want a gradient camera. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Vol. 1, pp. 103–110. Cited by: §I.
  • [32] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli (2004) Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 13 (4), pp. 600–612. Cited by: §IV-A1.
  • [33] S. Wu, J. Xu, Y. Tai, and C. Tang (2018-09) Deep high dynamic range imaging with large foreground motions. In Proceedings of the European Conference on Computer Vision, pp. 117–132. Cited by: §II-C, TABLE I, §IV-A1.
  • [34] Q. Yan, D. Gong, Q. Shi, A. V. D. Hengel, C. Shen, I. Reid, and Y. Zhang (2019-06) Attention-guided network for ghost-free high dynamic range imaging. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1751–1760. Cited by: §I, §II-C, Fig. 5, Fig. 7, TABLE I, §IV-A1.
  • [35] K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang (2017) Beyond a gaussian denoiser: residual learning of deep cnn for image denoising. IEEE Transactions on Image Processing 26 (7), pp. 3142–3155. Cited by: §II-C.
  • [36] W. Zhang and W. Cham (2012-04) Gradient-directed multiexposure composition. IEEE Transactions on Image Processing 21 (4), pp. 2318–2323. External Links: ISSN 1941-0042 Cited by: §II-A.
  • [37] Y. Zhang, K. Li, K. Li, L. Wang, B. Zhong, and Y. Fu (2018) Image super-resolution using very deep residual channel attention networks. In Proc. of the European Conference on Computer Vision, pp. 286–301. Cited by: §III-A.
  • [38] Z. Zhang, Z. Wang, Z. Lin, and H. Qi (2019-06) Image super-resolution by neural texture transfer. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Cited by: §III-B.