Pyramid Attention Networks for Image Restoration

04/28/2020 · Yiqun Mei et al.

Self-similarity is an image prior widely used in image restoration: small but similar patterns tend to recur at different locations and scales. However, recent deep convolutional neural network based methods for image restoration do not take full advantage of self-similarity because they rely on self-attention modules that only process information at a single scale. To solve this problem, we present a novel Pyramid Attention module for image restoration, which captures long-range feature correspondences from a multi-scale feature pyramid. Inspired by the fact that corruptions, such as noise or compression artifacts, drop drastically at coarser image scales, our attention module is designed to borrow clean signals from "clean" correspondences at the coarser levels. The proposed pyramid attention module is a generic building block that can be flexibly integrated into various neural architectures. Its effectiveness is validated through extensive experiments on multiple image restoration tasks: image denoising, demosaicing, compression artifact reduction, and super resolution. Without any bells and whistles, our PANet (pyramid attention module with simple network backbones) produces state-of-the-art results with superior accuracy and visual quality.


Code Repositories

Pyramid-Attention-Networks: PyTorch code for our paper "Pyramid Attention for Image Restoration", with new SOTA results on multiple tasks.

1 Introduction

Image restoration algorithms aim to recover a high-quality image from a contaminated counterpart. The problem is ill-posed because the degradation process is irreversible. Restoration has many applications depending on the type of corruption, for example, image denoising [42, 47, 26], demosaicing [43, 47], compression artifact reduction [11, 7, 42], super resolution [21, 22, 36] and many others [23, 18, 6]. To restore the missing information in a contaminated image, a variety of approaches that leverage image priors have been proposed [3, 50, 33, 51].

Among these priors, self-similarity within an image is widely explored and has proven important. For example, non-local mean filtering [3] exploits the self-similarity prior by averaging similar patches within the image to reduce corruption. This notion of non-local pattern repetition was then extended across multiple scales and demonstrated to be a strong property of natural images [49, 16]. Several self-similarity based approaches [16, 14, 35] were first proposed for image super resolution, where they restore image details by borrowing high-frequency details from self-recurrences at larger scales. The idea was then explored in other restoration tasks. In image denoising, for example, its power is further strengthened by the observation that noise drops drastically at coarser scales [50]. This motivates many advanced approaches [50, 31] to restore clean signals by finding "noise-free" recurrences in an image-space pyramid, yielding high-quality reconstructions. The idea of utilizing a multi-scale non-local prior has achieved great success in various restoration tasks [2, 50, 31, 27].

Recently, deep neural networks trained for image restoration have made unprecedented progress. Given the importance of the self-similarity prior, many recent approaches [47, 26] adapt non-local operations into their networks, following the non-local neural networks of [39]. In a non-local block, a response is calculated as a weighted sum over all pixel-wise features on the feature map, so it can gather long-range information. Such modules were initially designed for high-level recognition tasks and have proven effective in low-level vision problems as well [47, 26].

However, these approaches, which adapt the naive self-attention module to low-level tasks, have certain limitations. First, to the best of our knowledge, the multi-scale non-local prior has never been explored in this setting, even though the literature shows that cross-scale self-similarity brings impressive benefits for image restoration [50, 2, 31, 16]. Unlike high-level semantic features for recognition, which change little across scales, low-level features represent richer details, patterns, and textures at different scales. Nevertheless, existing non-local self-attention fails to capture the useful correspondences that occur across scales. Second, the pixel-wise matching used in self-attention modules is usually noisy for low-level vision tasks, which reduces performance. Intuitively, enlarging the search space raises the possibility of finding better matches, but this does not hold for existing self-attention modules [26]. Unlike high-level feature maps, where numerous dimension-reduction operations are employed, image restoration networks usually maintain the input spatial size. Each feature is therefore highly correlated with only a localized region and is easily affected by noisy signals. This is in line with conventional non-local filtering, where pixel-wise matching performs much worse than block matching [4].

Figure 1: Visualization of most correlated patches captured by our pyramid attention. Pyramid attention exploits multi-scale self-exemplars to improve reconstruction

In this paper, we present a novel non-local pyramid attention as a simple and generic building block for exhaustively capturing long-range dependencies, as shown in Fig. 1. The proposed attention takes full advantage of traditional non-local operations but is designed to better suit the nature of image restoration. Specifically, the original search space is largely extended from a single feature map to a multi-scale feature pyramid. The proposed operation exhaustively evaluates correlations among features across multiple specified scales by searching over the entire pyramid. This brings several advantages: (1) It generalizes the existing non-local operation, whose original search space is inherently covered by the lowest pyramid level. (2) The long-range dependency between relevant features of different sizes is explicitly modeled. Since the operation is fully differentiable, it can be jointly optimized with the network through back-propagation. (3) As in traditional approaches [50, 2, 31], one may expect noise in features to be drastically reduced by rescaling to coarser pyramid levels via operations such as bicubic interpolation, which allows the network to find "clean" signals among multi-scale correspondences. Finally, we enhance the robustness of the correlation measurement by involving neighboring features in the computation, inspired by the traditional block-matching strategy. Region-to-region matching imposes additional similarity constraints on the neighborhood, so the module can effectively single out highly relevant correspondences while suppressing noisy ones.

We demonstrate the power of non-local pyramid attention on various image restoration tasks: image denoising, image demosaicing, compression artifact reduction and image super resolution. In all tasks, a single pyramid attention block, our basic unit, models long-range dependencies without scale restriction in a feed-forward manner. With one attention block inserted into a very simple backbone network, the model achieves significantly better results than the latest state-of-the-art approaches with well-engineered architectures and multiple non-local attention units. In addition, we conduct extensive ablation studies to analyze our design choices. All this evidence demonstrates that our module is a better alternative to the current non-local operation and can serve as a fundamental unit in neural networks for generic image restoration.

2 Related Works

Self-similarity Prior for Image Restoration. The self-similarity property, namely that small patterns tend to recur within an image, gives natural images strong self-predictive power [2, 16, 49] and forms the basis of many classical image restoration methods [49, 50, 2, 31, 20]. The initial work, non-local mean filtering [3], globally averages similar patches for image denoising. Later on, Dabov et al. [9] introduced BM3D, where repetitive patterns are grouped into 3D arrays to be jointly processed by collaborative filters. In LSSC [28], the self-similarity property is combined with sparse dictionary learning for both denoising and demosaicing. This "fractal-like" characteristic was further extended across scales and shown to be a very strong property of natural images [16, 49]. To exploit cross-scale redundancy, self-similarity based approaches were proposed for image super resolution [16, 14, 20], where high-frequency information is retrieved solely from internal multi-scale recurrences. Observing that corruptions drop drastically at coarser scales, Zontak et al. [50] demonstrated that clean versions of noisy patches (99% of them) exist at coarser levels of the original image, and developed this idea into a denoising algorithm that achieved promising results. Cross-scale self-similarity is also of central importance for many image deblurring [31, 1] and image dehazing [2] approaches.

Non-local Attention. Non-local attention in deep CNNs was initially proposed by Wang et al. [39] for video classification. In their networks, non-local units are placed on high-level, sub-sampled feature maps to compute long-range semantic correlations. By assigning weights to features at all locations, they allow the network to focus on more informative areas. Adopting non-local operations has also shown considerable improvements in other high-level tasks, such as object detection [5], semantic segmentation [15] and person re-identification [41]. For image restoration, recent approaches such as NLRN [26], RNAN [47] and SAN [10] incorporate non-local operations in their networks. However, without careful modification, their performance is degraded by the many poor matches introduced during pixel-wise feature matching in the attention units.

Deep CNNs for Image Restoration. Adopting deep CNNs for image restoration has produced evident improvements thanks to their representational power. In early work, Vincent et al. [38] proposed stacked auto-encoders for image denoising. Later, ARCNN was introduced by Dong et al. [11] for compression artifact reduction. Zhang et al. [42] proposed DnCNN for image denoising, which uses techniques such as residual learning and batch normalization to boost performance. In IRCNN [43], a learned set of CNNs is used as a denoising prior for other image restoration tasks. For image super resolution, extensive effort has been spent on designing advanced architectures and learning methods, such as progressive super resolution [22], residual [25] and dense connections [48], back-projection [17], scale-wise convolution [12] and channel attention [46]. Recently, most state-of-the-art approaches [26, 47, 10] incorporate non-local attention into their networks to further boost representational ability. Despite this extensive architectural engineering, existing methods relying on convolution and non-local operations can only exploit information at a single scale.

3 Pyramid Attention Networks

Both the convolution operation and non-local attention are restricted to same-scale information. In this section, we introduce the novel pyramid attention, a generalization of non-local operations that can model non-local dependencies across multiple scales.

3.1 Formal Definition

Non-local attention calculates a response by averaging features over an entire image, as shown in Fig. 2 (a). Formally, given an input feature map $x$, this operation is defined as:

$$y^{i} = \frac{1}{\sigma(x)} \sum_{\forall j} \phi\left(x^{i}, x^{j}\right)\, \theta\left(x^{j}\right) \qquad (1)$$

where $i$ and $j$ are position indices on the input $x$ and the output $y$ respectively. The function $\phi$ computes the pair-wise affinity between two input features. $\theta$ is a feature transformation function that generates a new representation of $x^{j}$. The output response $y^{i}$ obtains information from all features by explicitly summing over all positions, normalized by a scalar function $\sigma(x)$. While this operation manages to capture long-range correlation, information is extracted at a single scale. As a result, it fails to exploit relationships with the many more informative areas of distinct spatial sizes.
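For concreteness, the following is a minimal PyTorch sketch of the non-local attention in Eq. (1), instantiated with the embedded-Gaussian affinity and linear transform described later in Section 3.4; the 1x1-conv embeddings and channel sizes are illustrative choices, not part of the definition.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonLocalAttention(nn.Module):
    """Classic non-local attention (Eq. 1) with embedded-Gaussian affinity."""

    def __init__(self, channels=64, reduction=2):
        super().__init__()
        c = channels // reduction
        self.f = nn.Conv2d(channels, c, 1)             # query embedding f
        self.g = nn.Conv2d(channels, c, 1)             # key embedding g
        self.theta = nn.Conv2d(channels, channels, 1)  # feature transform theta

    def forward(self, x):
        b, _, h, w = x.shape
        q = self.f(x).flatten(2).transpose(1, 2)       # (B, HW, C')
        k = self.g(x).flatten(2)                       # (B, C', HW)
        v = self.theta(x).flatten(2).transpose(1, 2)   # (B, HW, C)
        # softmax over all positions j realizes phi / sigma in Eq. (1)
        attn = F.softmax(q @ k, dim=-1)                # (B, HW, HW)
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        return y
```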

To break this scale constraint, we propose pyramid attention (Fig. 2 (c)), which captures correlations across scales. In pyramid attention, affinities are computed between a target feature and entire regions. A response feature is therefore a weighted sum over multi-scale correspondences within the input map. Formally, given a series of scale factors $S = \{1, s_{1}, s_{2}, \dots, s_{n}\}$, pyramid attention can be expressed as

$$y^{i} = \frac{1}{\sigma(x)} \sum_{s \in S} \sum_{\forall j} \phi\left(x^{i}, x^{\delta(s)}_{j}\right)\, \theta\left(x^{\delta(s)}_{j}\right) \qquad (2)$$

Here $x^{\delta(s)}_{j}$ represents a neighborhood of scale $s$ centred at index $j$ on the input $x$.

In other words, pyramid attention behaves in a non-local, multi-scale way by explicitly processing larger regions, with sizes specified by the scale pyramid, at every position $i$. Note that when only a single scale factor $s = 1$ is specified, the proposed attention degrades to the current non-local operation. Hence, our approach is a more generic operation that allows the network to fully enjoy the predictive power of natural images.

Finding a generic formulation that models cross-scale relationships is a non-trivial problem and requires careful engineering. In the following sections, we first address the non-local operation between two scales and then extend it to pyramid scales.

(a) Non-local attention (b) Scale agnostic attention (c) Pyramid attention
Figure 2: Comparison of attention mechanisms. (a) Classic self-attention computes pair-wise feature correlations at the same scale. (b) Scale agnostic attention augments (a) to capture correspondences at one additional scale. (c) Pyramid attention generalizes (a) and (b) by modeling multi-scale non-local dependencies

3.2 Scale Agnostic Attention

Given an extra scale factor $s$, how to evaluate the correlation between a feature $x^{i}$ and a region $x^{\delta(s)}_{j}$, and how to aggregate information from that region to form the response $y^{i}$, are the two key steps. The major difficulty here comes from the misalignment of their spatial dimensions. Common similarity measurements, such as the dot product and the embedded Gaussian, only accept features with identical dimensions and are thus infeasible in this case.

To mitigate this problem, we propose to squeeze the spatial information of the region $x^{\delta(s)}_{j}$ into a single region descriptor $z^{j}$. This step is conducted by down-scaling the region into a single pixel feature. As we need to search over the entire feature map, we can equivalently down-scale the original input $x$ (of size $H \times W$) to obtain a descriptor map $z$ (of size $\frac{H}{s} \times \frac{W}{s}$). The correlation between $x^{i}$ and the region is then measured between $x^{i}$ and the region descriptor $z^{j}$. Formally, scale agnostic attention (Fig. 2 (b)) is formulated as

$$y^{i} = \frac{1}{\sigma(z)} \sum_{\forall j} \phi\left(x^{i}, z^{j}\right)\, \theta\left(z^{j}\right) \qquad (3)$$

where $z$ is the input $x$ down-scaled by factor $s$.

This operation brings additional advantages. As discussed in Section 1, down-scaling regions into coarser descriptors reduces noise levels. On the other hand, since a cross-scale recurrence represents similar content, the structural information is still well preserved after down-scaling. Combining these two facts, region descriptors can serve as a "cleaner version" of the target feature and a better alternative to noisy patch matches at the original scale.
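A sketch of how Eq. (3) might look in PyTorch under the same embedded-Gaussian instantiation: keys and values come from a descriptor map obtained by down-scaling the input, so each descriptor summarizes an entire region of the original map. The bicubic down-scaling follows the text; the module name, default scale and channel sizes are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleAgnosticAttention(nn.Module):
    """Scale-agnostic attention (Eq. 3): queries at full scale, keys/values
    from a down-scaled descriptor map z."""

    def __init__(self, channels=64, scale=0.5, reduction=2):
        super().__init__()
        self.scale = scale
        c = channels // reduction
        self.f = nn.Conv2d(channels, c, 1)
        self.g = nn.Conv2d(channels, c, 1)
        self.theta = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        b, _, h, w = x.shape
        # squeeze each region of x into a single region descriptor
        z = F.interpolate(x, scale_factor=self.scale, mode='bicubic',
                          align_corners=False)
        q = self.f(x).flatten(2).transpose(1, 2)   # (B, HW, C') full-scale queries
        k = self.g(z).flatten(2)                   # (B, C', H'W') region descriptors
        v = self.theta(z).flatten(2).transpose(1, 2)
        attn = F.softmax(q @ k, dim=-1)            # pixel-to-region affinities
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        return y
```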

3.3 Pyramid Attention

To make full use of the self-predictive power, scale agnostic attention can be extended to pyramid attention, which computes correlations across multiple scales. In such a unit, pixel-region correspondences are captured over an entire feature pyramid. Specifically, given a series of scales $S = \{1, s_{1}, s_{2}, \dots, s_{n}\}$, we form a feature pyramid $F = \{z_{1}, z_{2}, \dots, z_{n+1}\}$, where each level $z_{k}$ is a region descriptor map of the input $x$, obtained by the down-scaling operation. The correlation between any pyramid level and the original input can then be treated as a scale agnostic attention. Therefore, pyramid attention is defined as:

$$y^{i} = \frac{1}{\sigma(F)} \sum_{z \in F} \sum_{\forall j} \phi\left(x^{i}, z^{j}\right)\, \theta\left(z^{j}\right) \qquad (4)$$

The cross-scale modeling ability stems from the fact that region descriptors at different levels summarize information over regions of various sizes. When they are copied back to the original position $i$, non-local multi-scale information is fused into a new response, which intuitively contains richer and more faithful information than matches from a single scale.
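Extending the previous sketch to the full pyramid of Eq. (4) amounts to gathering keys and values from every pyramid level and normalizing with a single softmax, so each response mixes correspondences from all scales. The scale list below is illustrative, not the paper's exact setting.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidAttention(nn.Module):
    """Pyramid attention (Eq. 4): keys/values gathered from every level of a
    bicubic feature pyramid, normalized by one softmax over the whole pyramid."""

    def __init__(self, channels=64, scales=(1.0, 0.8, 0.6), reduction=2):
        super().__init__()
        self.scales = scales                       # illustrative scale factors
        c = channels // reduction
        self.f = nn.Conv2d(channels, c, 1)
        self.g = nn.Conv2d(channels, c, 1)
        self.theta = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        b, _, h, w = x.shape
        q = self.f(x).flatten(2).transpose(1, 2)   # (B, HW, C')
        keys, values = [], []
        for s in self.scales:                      # build the feature pyramid
            z = x if s == 1.0 else F.interpolate(
                x, scale_factor=s, mode='bicubic', align_corners=False)
            keys.append(self.g(z).flatten(2))      # (B, C', H_s * W_s)
            values.append(self.theta(z).flatten(2).transpose(1, 2))
        k = torch.cat(keys, dim=2)                 # one key set over all levels
        v = torch.cat(values, dim=1)
        attn = F.softmax(q @ k, dim=-1)            # softmax over the entire pyramid
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        return y
```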

3.4 Instantiation

Choices of $\phi$, $\theta$ and $\sigma$. There are many well-explored choices for the pair-wise function $\phi$ [39, 26], such as Gaussian, embedded Gaussian, dot product and feature concatenation. In this paper, we use the embedded Gaussian to follow previous best practice [26]: $\phi(x^{i}, z^{j}) = e^{f(x^{i})^{T} g(z^{j})}$, where $f(x^{i}) = W_{f}\, x^{i}$ and $g(z^{j}) = W_{g}\, z^{j}$.

For the feature transformation function $\theta$, we use a simple linear embedding: $\theta(z^{j}) = W_{\theta}\, z^{j}$. Finally, we set $\sigma(F) = \sum_{z \in F} \sum_{\forall j} \phi(x^{i}, z^{j})$. With these instantiations, the term $\frac{1}{\sigma(F)}\, \phi(x^{i}, z^{j})$ is equivalent to a softmax over all possible positions in the pyramid.

Patch based region-to-region attention. As discussed in Section 1, the information contained in individual features is very localized for image restoration tasks. Consequently, the matching process is easily affected by noisy signals. A previous approach relieves this problem by restricting the search space to a local region [26], but this also prevents it from finding better correspondences that are far away from the current position.

To improve robustness during matching, we impose an extra neighborhood similarity constraint, in line with classical non-local filtering [3]. The pyramid attention (Eq. 4) is accordingly expressed as:

$$y^{i} = \frac{1}{\sigma(F)} \sum_{z \in F} \sum_{\forall j} \phi\left(x^{\delta(r)}_{i}, z^{\delta(r)}_{j}\right)\, \theta\left(z^{j}\right) \qquad (5)$$

where the neighborhood of size $r$ is specified by $\delta(r)$. This adds a stronger constraint on the matched content: two features are highly correlated only if their neighborhoods are highly similar as well. Block-wise matching allows the network to pay more attention to relevant areas while suppressing unrelated ones.

Implementation. The proposed pyramid attention is implemented using basic convolution and deconvolution operations, as shown in Fig. 3. In practice, the softmax matching scores can be computed as a convolution over the input whose kernels are patches extracted from the feature pyramid. To obtain the final response, we extract patches from the transformed feature map (by $\theta$) and apply a deconvolution over the matching scores. The proposed operation is fully convolutional, differentiable and accepts inputs of any resolution, so it can be flexibly embedded into many standard architectures.
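The following sketch illustrates this convolutional view for a single image and one pyramid level, under assumed shapes and an assumed 3x3 patch size: patches unfolded from the embedded descriptor map serve as convolution kernels that produce matching scores, and value patches are pasted back with a transposed convolution weighted by the softmax scores.

```python
import torch
import torch.nn.functional as F

def patch_attention(x_embed, z_embed, z_theta, patch=3):
    """Patch matching as convolution / deconvolution, for one image and one
    pyramid level.

    x_embed: (1, C', H, W) query embedding at the original scale
    z_embed: (1, C', h, w) key embedding from one pyramid level
    z_theta: (1, C,  h, w) value features from the same level
    """
    pad = patch // 2
    # every patch of the pyramid level becomes one matching kernel
    kernels = F.unfold(z_embed, patch, padding=pad)            # (1, C'*p*p, hw)
    kernels = kernels.transpose(1, 2).reshape(-1, z_embed.size(1), patch, patch)
    scores = F.conv2d(x_embed, kernels, padding=pad)           # (1, hw, H, W)
    scores = F.softmax(scores, dim=1)                          # over all hw candidates
    # paste value patches back, weighted by the matching scores; overlapping
    # contributions accumulate (they can be averaged in practice)
    v_kernels = F.unfold(z_theta, patch, padding=pad)
    v_kernels = v_kernels.transpose(1, 2).reshape(-1, z_theta.size(1), patch, patch)
    return F.conv_transpose2d(scores, v_kernels, padding=pad)  # (1, C, H, W)
```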

Figure 3: PANet with the proposed pyramid attention (PA). Pyramid attention captures multi-scale correlation by consecutively computing Scale Agnostic (S-A) attention

3.5 PANet: Pyramid Attention Networks

To show the effectiveness of our pyramid attention, we choose a simple ResNet as our backbone, without any architectural engineering. The proposed image restoration network is illustrated in Fig. 3. We remove batch normalization in each residual block, following the practice in [25]. As in many restoration networks, we add a global pathway from the first feature map to the last one, which encourages the network to bypass low-frequency information. We insert a single pyramid attention block in the middle of the network.
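A minimal sketch of this backbone, reusing the PyramidAttention module sketched in Section 3.3; the block count and channel width follow the description in Section 4.2, while the exact layer layout is an assumption.

```python
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))       # no batch norm, following [25]

    def forward(self, x):
        return x + self.body(x)

class PANet(nn.Module):
    def __init__(self, ch=64, n_blocks=80, colors=3):
        super().__init__()
        half = n_blocks // 2
        self.head = nn.Conv2d(colors, ch, 3, padding=1)
        self.body = nn.Sequential(
            *[ResBlock(ch) for _ in range(half)],
            PyramidAttention(ch),                  # single attention in the middle
            *[ResBlock(ch) for _ in range(n_blocks - half)],
            nn.Conv2d(ch, ch, 3, padding=1))
        self.tail = nn.Conv2d(ch, colors, 3, padding=1)

    def forward(self, x):
        feat = self.head(x)
        res = self.body(feat) + feat               # global pathway for low frequencies
        return self.tail(res)
```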

Given a set of $N$ paired training images $\{I_{LQ}^{k}, I_{HQ}^{k}\}_{k=1}^{N}$, we optimize the reconstruction loss between each restored image $\hat{I}^{k} = H_{PANet}(I_{LQ}^{k})$ and its ground truth $I_{HQ}^{k}$:

$$L(\Theta) = \frac{1}{N} \sum_{k=1}^{N} \left\| H_{PANet}\left(I_{LQ}^{k}\right) - I_{HQ}^{k} \right\|_{1} \qquad (6)$$

where $H_{PANet}$ represents the entire PANet and $\Theta$ is its set of learnable parameters.
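As a sketch, the loss of Eq. (6) for one mini-batch, assuming the L1 form common to this family of restoration networks:

```python
import torch

def reconstruction_loss(model, lq, hq):
    # lq, hq: (B, 3, H, W) degraded inputs and their ground-truth targets
    return torch.mean(torch.abs(model(lq) - hq))   # pixel-wise L1 loss
```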

4 Experiments

4.1 Datasets and Evaluation Metrics

The proposed pyramid attention and PANet are evaluated on major image restoration tasks: image denoising, demosaicing, compression artifact reduction and, in Section 4.7, image super resolution. For fair comparison, we follow the settings specified by RNAN [47] for image denoising, demosaicing and compression artifact reduction. We use DIV2K [37] as our training set, which contains 800 high-quality images. We report results on standard benchmarks using PSNR and/or SSIM [40].

4.2 Implementation Details

For pyramid attention, we use five scale factors, constructing a 5-level feature pyramid within the attention block. To build the pyramid, we use simple bicubic interpolation to rescale feature maps. When computing correlations, we use small patches centered at the target features. The proposed PANet contains 80 residual blocks, with one pyramid attention module inserted after the 40th block. All features have 64 channels, except those used in the embedded Gaussian, where the channel number is reduced to 32.

During training, each mini-batch consists of 16 image patches. We augment training images using vertical/horizontal flipping and random rotation by 90°, 180° and 270°. The model is optimized by the Adam optimizer with $\beta_{1} = 0.9$, $\beta_{2} = 0.999$ and $\epsilon = 10^{-8}$. The learning rate is initialized to $10^{-4}$ and halved every 200 epochs. Our model is implemented in PyTorch [32] and trained on NVIDIA Titan X GPUs.
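A sketch of the corresponding optimizer and schedule setup; `model`, `loader` and `reconstruction_loss` are assumed from the earlier sketches, and the epoch count is illustrative.

```python
import torch

model = PANet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4,
                             betas=(0.9, 0.999), eps=1e-8)
# halve the learning rate every 200 epochs
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=200, gamma=0.5)

for epoch in range(1000):                          # epoch count is illustrative
    for lq, hq in loader:                          # batches of 16 augmented patches
        optimizer.zero_grad()
        loss = reconstruction_loss(model, lq, hq)
        loss.backward()
        optimizer.step()
    scheduler.step()
```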

Method   | Kodak24 (σ = 10 / 30 / 50 / 70) | BSD68 (σ = 10 / 30 / 50 / 70) | Urban100 (σ = 10 / 30 / 50 / 70)
CBM3D    | 36.57 / 30.89 / 28.63 / 27.27 | 35.91 / 29.73 / 27.38 / 26.00 | 36.00 / 30.36 / 27.94 / 26.31
TNRD     | 34.33 / 28.83 / 27.17 / 24.94 | 33.36 / 27.64 / 25.96 / 23.83 | 33.60 / 27.40 / 25.52 / 22.63
RED      | 34.91 / 29.71 / 27.62 / 26.36 | 33.89 / 28.46 / 26.35 / 25.09 | 34.59 / 29.02 / 26.40 / 24.74
DnCNN    | 36.98 / 31.39 / 29.16 / 27.64 | 36.31 / 30.40 / 28.01 / 26.56 | 36.21 / 30.28 / 28.16 / 26.17
MemNet   | N/A / 29.67 / 27.65 / 26.40 | N/A / 28.39 / 26.33 / 25.08 | N/A / 28.93 / 26.53 / 24.93
IRCNN    | 36.70 / 31.24 / 28.93 / N/A | 36.06 / 30.22 / 27.86 / N/A | 35.81 / 30.28 / 27.69 / N/A
FFDNet   | 36.81 / 31.39 / 29.10 / 27.68 | 36.14 / 30.31 / 27.96 / 26.53 | 35.77 / 30.53 / 28.05 / 26.39
RNAN     | 37.24 / 31.86 / 29.58 / 28.16 | 36.43 / 30.63 / 28.27 / 26.83 | 36.59 / 31.50 / 29.08 / 27.45
PANet    | 37.24 / 31.88 / 29.57 / 28.14 | 36.50 / 30.70 / 28.33 / 26.89 | 36.80 / 31.87 / 29.47 / 27.87
Table 1: Quantitative evaluation (PSNR in dB) of state-of-the-art approaches on color image denoising. Best results are highlighted
Figure 4: Visual comparison for color image denoising with noise level σ = 50, on BSD68 (image 119082) and Urban100 (img006, img046). Each example shows the HQ image, the noisy input, and results of TNRD [7], RED [29], DnCNN [42], MemNet [36], IRCNN [43], FFDNet [44], RNAN [47] and PANet

4.3 Image Denoising

Following RNAN [47], PANet is evaluated on standard benchmarks for image denoising: Kodak24 (http://r0k.us/graphics/kodak/), BSD68 [30], and Urban100 [20]. We create noisy images by adding AWGN with σ = 10, 30, 50 and 70. We compare our approach with 8 state-of-the-art methods: CBM3D [8], TNRD [7], RED [29], DnCNN [42], MemNet [36], IRCNN [43], FFDNet [44], and RNAN [47].

As shown in Table 1, PANet achieves the best results on almost all datasets and noise levels. In particular, our approach yields better results than the prior state-of-the-art RNAN, which has a well-engineered network and multiple non-local attention blocks. These results show that, even with only one additional pyramid attention block, a simple ResNet can significantly boost restoration quality. One may notice that PANet performs particularly well on Urban100, with improvements over RNAN of more than 0.3 dB at most noise levels. This is because pyramid attention allows the network to explicitly capture the abundant cross-scale self-exemplars in urban scenes, whereas traditional non-local attention fails to explore those multi-scale relationships.

We further present qualitative evaluations on BSD68 and Urban100 in Fig. 4. Relying on a single learned pyramid attention, PANet produces more faithful restorations than the other methods.

Method   | McMaster18 (PSNR / SSIM) | Kodak24 (PSNR / SSIM) | BSD68 (PSNR / SSIM) | Urban100 (PSNR / SSIM)
Mosaiced | 9.17 / 0.1674 | 8.56 / 0.0682 | 8.43 / 0.0850 | 7.48 / 0.1195
IRCNN    | 37.47 / 0.9615 | 40.41 / 0.9807 | 39.96 / 0.9850 | 36.64 / 0.9743
RNAN     | 39.71 / 0.9725 | 43.09 / 0.9902 | 42.50 / 0.9929 | 39.75 / 0.9848
PANet    | 40.00 / 0.9737 | 43.09 / 0.9903 | 42.86 / 0.9933 | 40.50 / 0.9854
Table 2: Quantitative evaluation of state-of-the-art approaches on color image demosaicing. Best results are highlighted

4.4 Image Demosaicing

For image demosaicing, we conduct evaluations on Kodak24, McMaster [43], BSD68, and Urban100, following the settings in RNAN [47]. We compare our approach with the recent state-of-the-art methods IRCNN [43] and RNAN [47]. As shown in Table 2, mosaic corruption significantly reduces image quality in terms of PSNR and SSIM. RNAN and IRCNN remove these corruptions to some degree and produce relatively high-quality restorations. Our approach yields the best reconstructions, demonstrating the advantages of exploiting multi-scale correlations.

The visual results are shown in Fig. 5. While IRCNN and RNAN reduce mosaic corruption to some degree, their results still contain evident artifacts. Relying on pyramid attention, the proposed PANet removes most blocking artifacts and restores more accurate colors than IRCNN and RNAN.

Figure 5: Visual image demosaicing results on Urban100 (img_026), comparing HQ, the mosaiced input, IRCNN [43], RNAN [47] and PANet
Dataset  | q  | JPEG           | SA-DCT         | ARCNN          | TNRD           | DnCNN          | RNAN           | PANet
LIVE1    | 10 | 27.77 / 0.7905 | 28.65 / 0.8093 | 28.98 / 0.8217 | 29.15 / 0.8111 | 29.19 / 0.8123 | 29.63 / 0.8239 | 29.69 / 0.8250
LIVE1    | 20 | 30.07 / 0.8683 | 30.81 / 0.8781 | 31.29 / 0.8871 | 31.46 / 0.8769 | 31.59 / 0.8802 | 32.03 / 0.8877 | 32.10 / 0.8885
LIVE1    | 30 | 31.41 / 0.9000 | 32.08 / 0.9078 | 32.69 / 0.9166 | 32.84 / 0.9059 | 32.98 / 0.9090 | 33.45 / 0.9149 | 33.55 / 0.9157
LIVE1    | 40 | 32.35 / 0.9173 | 32.99 / 0.9240 | 33.63 / 0.9306 | N/A            | 33.96 / 0.9247 | 34.47 / 0.9299 | 34.55 / 0.9305
Classic5 | 10 | 27.82 / 0.7800 | 28.88 / 0.8071 | 29.04 / 0.8111 | 29.28 / 0.7992 | 29.40 / 0.8026 | 29.96 / 0.8178 | 30.03 / 0.8195
Classic5 | 20 | 30.12 / 0.8541 | 30.92 / 0.8663 | 31.16 / 0.8694 | 31.47 / 0.8576 | 31.63 / 0.8610 | 32.11 / 0.8693 | 32.36 / 0.8712
Classic5 | 30 | 31.48 / 0.8844 | 32.14 / 0.8914 | 32.52 / 0.8967 | 32.78 / 0.8837 | 32.91 / 0.8861 | 33.38 / 0.8924 | 33.53 / 0.8939
Classic5 | 40 | 32.43 / 0.9011 | 33.00 / 0.9055 | 33.34 / 0.9101 | N/A            | 33.77 / 0.9003 | 34.27 / 0.9061 | 34.38 / 0.9068
Table 3: Quantitative evaluation (PSNR / SSIM) of state-of-the-art approaches on compression artifacts reduction. Best results are highlighted

4.5 Image Compression Artifacts Reduction

For image compression artifacts reduction (CAR), we compare our method with 5 recent approaches: SA-DCT [13], ARCNN [11], TNRD [7], DnCNN [42], and RNAN [47]. We present results on LIVE1 [34] and Classic5 [13], following the same settings as RNAN. To obtain the low-quality compressed images, we follow the standard JPEG compression process and use the Matlab JPEG encoder with quality q = 10, 20, 30 and 40. For fair comparison, the results are evaluated on the Y channel in YCbCr space only.

The quantitative evaluation is reported in Table 3. By incorporating pyramid attention, PANet obtains the best results on both LIVE1 and Classic5 at all quality levels. We further present visual comparisons at the most challenging quality level, q = 10, in Fig. 6. The proposed approach successfully reduces compression artifacts and recovers the most image details, mainly because PANet captures non-local relationships in a multi-scale way, helping it reconstruct more faithful details.

Figure 6: Visual comparison for image CAR with JPEG quality q = 10 on Buildings and Carnivaldolls, comparing HQ, JPEG, ARCNN [11], DnCNN [42], RNAN [47] and PANet
Methods    | RED   | DnCNN | MemNet | RNAN  | PANet
Parameters | 4131K | 672K  | 677K   | 7409K | 5957K
PSNR (dB)  | 26.40 | 28.16 | 26.53  | 29.15 | 29.47
Table 4: Model size comparison

4.6 Model Size Analyses

We report our model size and compare it with other advanced image denoising approaches in Table 4. PANet achieves the best performance with a lighter and much simpler architecture than the prior state-of-the-art RNAN. This indicates the great advantage brought by our pyramid attention module. In practice, the proposed module can also be inserted into related networks.

Method | Scale | Set5 (PSNR / SSIM) | Set14 | B100 | Urban100 | Manga109
LapSRN [22]    | ×2 | 37.52 / 0.9591 | 33.08 / 0.9130 | 31.08 / 0.8950 | 30.41 / 0.9101 | 37.27 / 0.9740
MemNet [36]    | ×2 | 37.78 / 0.9597 | 33.28 / 0.9142 | 32.08 / 0.8978 | 31.31 / 0.9195 | 37.72 / 0.9740
SRMDNF [45]    | ×2 | 37.79 / 0.9601 | 33.32 / 0.9159 | 32.05 / 0.8985 | 31.33 / 0.9204 | 38.07 / 0.9761
DBPN [17]      | ×2 | 38.09 / 0.9600 | 33.85 / 0.9190 | 32.27 / 0.9000 | 32.55 / 0.9324 | 38.89 / 0.9775
RDN [48]       | ×2 | 38.24 / 0.9614 | 34.01 / 0.9212 | 32.34 / 0.9017 | 32.89 / 0.9353 | 39.18 / 0.9780
RCAN [46]      | ×2 | 38.27 / 0.9614 | 34.12 / 0.9216 | 32.41 / 0.9027 | 33.34 / 0.9384 | 39.44 / 0.9786
NLRN [26]      | ×2 | 38.00 / 0.9603 | 33.46 / 0.9159 | 32.19 / 0.8992 | 31.81 / 0.9249 | - / -
SRFBN [24]     | ×2 | 38.11 / 0.9609 | 33.82 / 0.9196 | 32.29 / 0.9010 | 32.62 / 0.9328 | 39.08 / 0.9779
OISR [19]      | ×2 | 38.21 / 0.9612 | 33.94 / 0.9206 | 32.36 / 0.9019 | 33.03 / 0.9365 | - / -
SAN [10]       | ×2 | 38.31 / 0.9620 | 34.07 / 0.9213 | 32.42 / 0.9028 | 33.10 / 0.9370 | 39.32 / 0.9792
EDSR [25]      | ×2 | 38.11 / 0.9602 | 33.92 / 0.9195 | 32.32 / 0.9013 | 32.93 / 0.9351 | 39.10 / 0.9773
PA-EDSR (ours) | ×2 | 38.33 / 0.9617 | 34.22 / 0.9224 | 32.42 / 0.9027 | 33.38 / 0.9392 | 39.37 / 0.9782
LapSRN [22]    | ×3 | 33.82 / 0.9227 | 29.87 / 0.8320 | 28.82 / 0.7980 | 27.07 / 0.8280 | 32.21 / 0.9350
MemNet [36]    | ×3 | 34.09 / 0.9248 | 30.00 / 0.8350 | 28.96 / 0.8001 | 27.56 / 0.8376 | 32.51 / 0.9369
SRMDNF [45]    | ×3 | 34.12 / 0.9254 | 30.04 / 0.8382 | 28.97 / 0.8025 | 27.57 / 0.8398 | 33.00 / 0.9403
RDN [48]       | ×3 | 34.71 / 0.9296 | 30.57 / 0.8468 | 29.26 / 0.8093 | 28.80 / 0.8653 | 34.13 / 0.9484
RCAN [46]      | ×3 | 34.74 / 0.9299 | 30.65 / 0.8482 | 29.32 / 0.8111 | 29.09 / 0.8702 | 34.44 / 0.9499
NLRN [26]      | ×3 | 34.27 / 0.9266 | 30.16 / 0.8374 | 29.06 / 0.8026 | 27.93 / 0.8453 | - / -
SRFBN [24]     | ×3 | 34.70 / 0.9292 | 30.51 / 0.8461 | 29.24 / 0.8084 | 28.73 / 0.8641 | 34.18 / 0.9481
OISR [19]      | ×3 | 34.72 / 0.9297 | 30.57 / 0.8470 | 29.29 / 0.8103 | 28.95 / 0.8680 | - / -
SAN [10]       | ×3 | 34.75 / 0.9300 | 30.59 / 0.8476 | 29.33 / 0.8112 | 28.93 / 0.8671 | 34.30 / 0.9494
EDSR [25]      | ×3 | 34.65 / 0.9280 | 30.52 / 0.8462 | 29.25 / 0.8093 | 28.80 / 0.8653 | 34.17 / 0.9476
PA-EDSR (ours) | ×3 | 34.84 / 0.9306 | 30.71 / 0.8488 | 29.33 / 0.8119 | 29.24 / 0.8736 | 34.46 / 0.9505
LapSRN [22]    | ×4 | 31.54 / 0.8850 | 28.19 / 0.7720 | 27.32 / 0.7270 | 25.21 / 0.7560 | 29.09 / 0.8900
MemNet [36]    | ×4 | 31.74 / 0.8893 | 28.26 / 0.7723 | 27.40 / 0.7281 | 25.50 / 0.7630 | 29.42 / 0.8942
SRMDNF [45]    | ×4 | 31.96 / 0.8925 | 28.35 / 0.7787 | 27.49 / 0.7337 | 25.68 / 0.7731 | 30.09 / 0.9024
DBPN [17]      | ×4 | 32.47 / 0.8980 | 28.82 / 0.7860 | 27.72 / 0.7400 | 26.38 / 0.7946 | 30.91 / 0.9137
RDN [48]       | ×4 | 32.47 / 0.8990 | 28.81 / 0.7871 | 27.72 / 0.7419 | 26.61 / 0.8028 | 31.00 / 0.9151
RCAN [46]      | ×4 | 32.63 / 0.9002 | 28.87 / 0.7889 | 27.77 / 0.7436 | 26.82 / 0.8087 | 31.22 / 0.9173
NLRN [26]      | ×4 | 31.92 / 0.8916 | 28.36 / 0.7745 | 27.48 / 0.7306 | 25.79 / 0.7729 | - / -
SRFBN [24]     | ×4 | 32.47 / 0.8983 | 28.81 / 0.7868 | 27.72 / 0.7409 | 26.60 / 0.8015 | 31.15 / 0.9160
OISR [19]      | ×4 | 32.53 / 0.8992 | 28.86 / 0.7878 | 27.75 / 0.7428 | 26.79 / 0.8068 | - / -
SAN [10]       | ×4 | 32.64 / 0.9003 | 28.92 / 0.7888 | 27.78 / 0.7436 | 26.79 / 0.8068 | 31.18 / 0.9169
EDSR [25]      | ×4 | 32.46 / 0.8968 | 28.80 / 0.7876 | 27.71 / 0.7420 | 26.64 / 0.8033 | 31.02 / 0.9148
PA-EDSR (ours) | ×4 | 32.65 / 0.9006 | 28.87 / 0.7891 | 27.76 / 0.7445 | 27.01 / 0.8140 | 31.29 / 0.9194
Table 5: Quantitative results (PSNR / SSIM) on SR benchmark datasets
Figure 7: Visual comparison for SR on Urban100 (img_044, img_048, img_058), comparing HR, Bicubic, LapSRN [22], EDSR [25], DBPN [17], OISR [19], RDN [48], RCAN [46], SAN [10] and ours

4.7 Image Super Resolution

To further demonstrate the generality of pyramid attention, we present additional image super resolution experiments. Similar to the previous settings, we build our network on top of the widely used, plain ResNet-based EDSR [25], inserting a single pyramid attention block after the 16th residual block. We compare it with 10 state-of-the-art approaches: LapSRN [22], MemNet [36], SRMDNF [45], DBPN [17], RDN [48], RCAN [46], NLRN [26], SRFBN [24], OISR [19], and SAN [10].

We report the experimental results in Table 5. Without any architectural engineering, our simple PA-EDSR achieves the best performance on almost all benchmarks and scales. The significant improvements over EDSR demonstrate the effectiveness of the proposed attention operation. One may notice that the improvements are not limited to Urban100, whose images contain apparent structural recurrences: we also observe considerable performance gains on the natural-image datasets Set5, Set14 and B100. This accords with the previous observation that cross-scale self-recurrence is a common property of natural images [16]. The proposed operation is a generic block and can therefore also be integrated into other SR networks to further boost their performance.

Visual results are shown in Fig. 7. Our PA-EDSR reconstructs the most accurate image details, leading to more visually pleasing results.

4.8 Visualization of Attention Maps

Figure 8: Visualization of correlation maps of pyramid attention, showing the HQ input and the attention maps at levels 1 to 5. Maps are rescaled to the same size for visualization purposes. Brighter color indicates higher engagement. The attention focuses on different locations at each scale, indicating that the module exploits multi-scale recurrences to improve restoration

To fully demonstrate that our pyramid attention captures multi-scale correlations, we visualize its attention maps in Fig. 8. For illustration purposes, the selected images contain abundant self-exemplars at different locations and scales.

From Fig. 8, we find that the attention maps follow distinct distributions over scales, demonstrating that our attention focuses on informative regions at multiple scales. Interestingly, as the level increases, the most engaged patches move downwards. This is in line with the fact that larger patterns, such as windows, appear toward the bottom of the selected images. By capturing multi-scale correlations, the network manages to utilize these informative patches to improve restoration.

          | baseline | N-L attention | Pyramid attention
PSNR (dB) | 30.86    | 31.14         | 31.29
Table 6: Effects of pyramid attention on Urban100

          | baseline | pixel-wise | block-wise
PSNR (dB) | 30.86    | 31.14      | 31.21
Table 7: Comparison between pixel-wise matching and block-wise matching on Urban100

          | baseline | 1-level | 2-level | 3-level | 4-level | 5-level
PSNR (dB) | 30.86    | 31.21   | 31.23   | 31.25   | 31.28   | 31.29
Table 8: Ablation study on pyramid levels

4.9 Ablation Study

Pyramid Attention Module. To verify the effectiveness of pyramid attention, we conduct control experiments on image denoising. The baseline model is constructed by removing the attention block, resulting in a simple ResNet; we use a reduced number of residual blocks for this experiment. In Table 6, the baseline achieves 30.86 dB on Urban100. To compare with classic non-local operations, we further construct a non-local baseline by replacing the pyramid attention with non-local attention. From the result in column 2, we can see that the single-scale non-local operation brings an improvement. However, the best performance is achieved with the proposed pyramid attention, which brings 0.43 dB over the baseline and 0.15 dB over the classic non-local model. This indicates that the proposed module can serve as a better alternative for modeling long-range dependencies than the current non-local operation: it can exploit informative correspondences that exist at multiple image scales, which is of central importance for reconstructing more faithful images.

Matching: Pixel-wise vs. Block-wise. While classic non-local attention computes pixel-wise feature correlations, we find that block-wise matching yields much better restorations in practice. To demonstrate this, we compare the conventional non-local operation with its patch-based alternative, which matches small patches centered at each feature. As shown in Table 7, block matching improves the performance from 31.14 dB to 31.21 dB. This is because block matching adds an extra similarity constraint on nearby pixels and can thus better distinguish highly relevant correspondences from noisy ones. These results confirm that small patches are more robust descriptors for similarity measurement.

Pre  | –     | ✓     | –     | –     | ✓     | ✓     | –     | ✓
Mid  | –     | –     | ✓     | –     | ✓     | –     | ✓     | ✓
Post | –     | –     | –     | ✓     | –     | ✓     | ✓     | ✓
PSNR | 30.86 | 31.07 | 31.29 | 31.18 | 31.33 | 31.33 | 31.39 | 31.48
Table 9: Results for models with pyramid attention inserted at different residual blocks on Urban100

Feature Pyramid Levels. As discussed above, the key difference between the classic non-local operation and pyramid attention is that our module allows the network to utilize correspondences at multiple scales. Here we investigate the influence of the number of pyramid levels by gradually adding levels to the feature pyramid, up to a final pyramid of 5 levels. As shown in Table 8, we observe consistent performance gains as more levels are added; the best performance is obtained when all 5 levels are included (column 6). This is mainly because, as the search space is progressively expanded to more scales, the attention unit is more likely to find informative correspondences beyond the original image scale. These results indicate that multi-scale relationships are essential for improving restoration.

Position in Neural Networks. Where should pyramid attention be added to the network in order to fully unleash its potential? Table 9 compares pyramid attention inserted at different stages of a ResNet. We consider 3 typical positions: after the 1st residual block (pre-processing), after the 8th residual block (the middle of the network), and after the last residual block (post-processing). From the first 4 columns, we find that inserting our module at any single stage brings evident improvement, with the largest gain obtained at the middle position. Moreover, when multiple modules are combined, the restoration quality improves further; the best result is achieved by including modules at all three positions.

5 Conclusion

In this paper, we proposed a simple and generic pyramid attention for image restoration. The module generalizes classic self-attention to capture non-local relationships at multiple image scales. It is fully differentiable and can be integrated into any architecture. We demonstrated that modeling multi-scale correspondences brings significant improvements on the general image restoration tasks of image denoising, demosaicing, compression artifact reduction and super resolution. On all tasks, a simple backbone with one pyramid attention block achieves superior restoration accuracy over prior state-of-the-art approaches. We believe pyramid attention should be used as a common building block in future neural networks.

References

  • [1] Y. Bahat, N. Efrat, and M. Irani (2017) Non-uniform blind deblurring by reblurring. In Proceedings of the IEEE International Conference on Computer Vision, pp. 3286–3294. Cited by: §2.
  • [2] Y. Bahat and M. Irani (2016) Blind dehazing using internal patch recurrence. In 2016 IEEE International Conference on Computational Photography (ICCP), pp. 1–9. Cited by: §1, §1, §1, §2.
  • [3] A. Buades, B. Coll, and J. Morel (2005) A non-local algorithm for image denoising. In CVPR, Cited by: §1, §1, §2, §3.4.
  • [4] A. Buades, B. Coll, and J. Morel (2011) Non-local means denoising. Image Processing On Line 1, pp. 208–212. Cited by: §1.
  • [5] Y. Cao, J. Xu, S. Lin, F. Wei, and H. Hu (2019) Gcnet: non-local networks meet squeeze-excitation networks and beyond. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pp. 0–0. Cited by: §2.
  • [6] C. Chen, Q. Chen, J. Xu, and V. Koltun (2018) Learning to see in the dark. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3291–3300. Cited by: §1.
  • [7] Y. Chen and T. Pock (2017) Trainable nonlinear reaction diffusion: a flexible framework for fast and effective image restoration. TPAMI. Cited by: §1, Figure 4, §4.3, §4.5.
  • [8] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian (2007) Color image denoising via sparse 3d collaborative filtering with grouping constraint in luminance-chrominance space. In ICIP, Cited by: §4.3.
  • [9] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian (2007) Image denoising by sparse 3-d transform-domain collaborative filtering. TIP. Cited by: §2.
  • [10] T. Dai, J. Cai, Y. Zhang, S. Xia, and L. Zhang (2019) Second-order attention network for single image super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 11065–11074. Cited by: §2, §2, Figure 7, §4.7, Table 5.
  • [11] C. Dong, Y. Deng, C. Change Loy, and X. Tang (2015) Compression artifacts reduction by a deep convolutional network. In ICCV, Cited by: §1, §2, Figure 6, §4.5.
  • [12] Y. Fan, J. Yu, D. Liu, and T. S. Huang (2019) Scale-wise convolution for image restoration. arXiv preprint arXiv:1912.09028. Cited by: §2.
  • [13] A. Foi, V. Katkovnik, and K. Egiazarian (2007-05) Pointwise shape-adaptive dct for high-quality denoising and deblocking of grayscale and color images. TIP. Cited by: §4.5.
  • [14] G. Freedman and R. Fattal (2011) Image and video upscaling from local self-examples. ACM Transactions on Graphics (TOG) 30 (2), pp. 1–11. Cited by: §1, §2.
  • [15] J. Fu, J. Liu, H. Tian, Y. Li, Y. Bao, Z. Fang, and H. Lu (2019) Dual attention network for scene segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3146–3154. Cited by: §2.
  • [16] D. Glasner, S. Bagon, and M. Irani (2009) Super-resolution from a single image. In 2009 IEEE 12th international conference on computer vision, pp. 349–356. Cited by: §1, §1, §2, §4.7.
  • [17] M. Haris, G. Shakhnarovich, and N. Ukita (2018) Deep back-projection networks for super-resolution. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1664–1673. Cited by: §2, Figure 7, §4.7, Table 5.
  • [18] K. He, J. Sun, and X. Tang (2010) Single image haze removal using dark channel prior. IEEE transactions on pattern analysis and machine intelligence 33 (12), pp. 2341–2353. Cited by: §1.
  • [19] X. He, Z. Mo, P. Wang, Y. Liu, M. Yang, and J. Cheng (2019) Ode-inspired network design for single image super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1732–1741. Cited by: Figure 7, §4.7, Table 5.
  • [20] J. Huang, A. Singh, and N. Ahuja (2015) Single image super-resolution from transformed self-exemplars. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 5197–5206. Cited by: §2, §4.3.
  • [21] J. Kim, J. Kwon Lee, and K. Mu Lee (2016) Accurate image super-resolution using very deep convolutional networks. In CVPR, Cited by: §1.
  • [22] W. Lai, J. Huang, N. Ahuja, and M. Yang (2017) Deep laplacian pyramid networks for fast and accurate super-resolution. In CVPR, Cited by: §1, §2, Figure 7, §4.7, Table 5.
  • [23] B. Li, X. Peng, Z. Wang, J. Xu, and D. Feng (2017) Aod-net: all-in-one dehazing network. In Proceedings of the IEEE International Conference on Computer Vision, pp. 4770–4778. Cited by: §1.
  • [24] Z. Li, J. Yang, Z. Liu, X. Yang, G. Jeon, and W. Wu (2019) Feedback network for image super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3867–3876. Cited by: §4.7, Table 5.
  • [25] B. Lim, S. Son, H. Kim, S. Nah, and K. M. Lee (2017) Enhanced deep residual networks for single image super-resolution. In CVPRW, Cited by: §2, §3.5, Figure 7, §4.7, Table 5.
  • [26] D. Liu, B. Wen, Y. Fan, C. C. Loy, and T. S. Huang (2018) Non-local recurrent network for image restoration. In NeurIPS, Cited by: §1, §1, §1, §2, §2, §3.4, §3.4, §4.7, Table 5.
  • [27] O. Lotan and M. Irani (2016) Needle-match: reliable patch matching under high uncertainty. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 439–448. Cited by: §1.
  • [28] J. Mairal, F. Bach, J. Ponce, G. Sapiro, and A. Zisserman (2009) Non-local sparse models for image restoration. In 2009 IEEE 12th international conference on computer vision, pp. 2272–2279. Cited by: §2.
  • [29] X. Mao, C. Shen, and Y. Yang (2016) Image restoration using very deep convolutional encoder-decoder networks with symmetric skip connections. In NeurIPS, Cited by: Figure 4, §4.3.
  • [30] D. Martin, C. Fowlkes, D. Tal, and J. Malik (2001) A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In ICCV, Cited by: §4.3.
  • [31] T. Michaeli and M. Irani (2014) Blind deblurring using internal patch recurrence. In European Conference on Computer Vision, pp. 783–798. Cited by: §1, §1, §1, §2.
  • [32] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer (2017) Automatic differentiation in pytorch. Cited by: §4.2.
  • [33] S. Roth and M. J. Black (2005) Fields of experts: a framework for learning image priors. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), Vol. 2, pp. 860–867. Cited by: §1.
  • [34] H. R. Sheikh, Z. Wang, L. Cormack, and A. C. Bovik (2005) LIVE image quality assessment database release 2 (2005). Cited by: §4.5.
  • [35] A. Singh and N. Ahuja (2014) Super-resolution using sub-band self-similarity. In Asian Conference on Computer Vision, pp. 552–568. Cited by: §1.
  • [36] Y. Tai, J. Yang, X. Liu, and C. Xu (2017) MemNet: a persistent memory network for image restoration. In ICCV, Cited by: §1, Figure 4, §4.3, §4.7, Table 5.
  • [37] R. Timofte, E. Agustsson, L. Van Gool, M. Yang, L. Zhang, B. Lim, S. Son, H. Kim, S. Nah, K. M. Lee, et al. (2017) Ntire 2017 challenge on single image super-resolution: methods and results. In CVPRW, Cited by: §4.1.
  • [38] P. Vincent, H. Larochelle, Y. Bengio, and P. Manzagol (2008) Extracting and composing robust features with denoising autoencoders. In ICML, Cited by: §2.
  • [39] X. Wang, R. Girshick, A. Gupta, and K. He (2018) Non-local neural networks. In CVPR, Cited by: §1, §2, §3.4.
  • [40] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli (2004) Image quality assessment: from error visibility to structural similarity. TIP. Cited by: §4.1.
  • [41] B. N. Xia, Y. Gong, Y. Zhang, and C. Poellabauer (2019) Second-order non-local attention networks for person re-identification. In Proceedings of the IEEE International Conference on Computer Vision, pp. 3760–3769. Cited by: §2.
  • [42] K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang (2017) Beyond a gaussian denoiser: residual learning of deep cnn for image denoising. TIP. Cited by: §1, §2, Figure 4, Figure 6, §4.3, §4.5.
  • [43] K. Zhang, W. Zuo, S. Gu, and L. Zhang (2017) Learning deep cnn denoiser prior for image restoration. In CVPR, Cited by: §1, §2, Figure 4, Figure 5, §4.3, §4.4, §4.7.
  • [44] K. Zhang, W. Zuo, and L. Zhang (2017) FFDNet: toward a fast and flexible solution for cnn based image denoising. arXiv preprint arXiv:1710.04026. Cited by: Figure 4, §4.3.
  • [45] K. Zhang, W. Zuo, and L. Zhang (2018) Learning a single convolutional super-resolution network for multiple degradations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3262–3271. Cited by: Table 5.
  • [46] Y. Zhang, K. Li, K. Li, L. Wang, B. Zhong, and Y. Fu (2018) Image super-resolution using very deep residual channel attention networks. In ECCV, Cited by: §2, Figure 7, §4.7, Table 5.
  • [47] Y. Zhang, K. Li, K. Li, B. Zhong, and Y. Fu (2019) Residual non-local attention networks for image restoration. In ICLR, Cited by: §1, §1, §2, §2, Figure 4, Figure 5, Figure 6, §4.1, §4.3, §4.4, §4.5.
  • [48] Y. Zhang, Y. Tian, Y. Kong, B. Zhong, and Y. Fu (2018) Residual dense network for image super-resolution. In CVPR, Cited by: §2, Figure 7, §4.7, Table 5.
  • [49] M. Zontak and M. Irani (2011) Internal statistics of a single natural image. In CVPR 2011, pp. 977–984. Cited by: §1, §2.
  • [50] M. Zontak, I. Mosseri, and M. Irani (2013) Separating signal from noise using patch recurrence across scales. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1195–1202. Cited by: §1, §1, §1, §1, §2.
  • [51] D. Zoran and Y. Weiss (2011) From learning models of natural image patches to whole image restoration. In 2011 International Conference on Computer Vision, pp. 479–486. Cited by: §1.