PyTorch code for our paper "Pyramid Attention for Image Restoration" with new SOTA results on multiple tasks
Self-similarity refers to the image prior, widely used in image restoration algorithms, that small but similar patterns tend to recur at different locations and scales. However, recent advanced deep convolutional neural network based methods for image restoration do not take full advantage of self-similarity: they rely on self-attention modules that only process information at a single scale. To address this problem, we present a novel Pyramid Attention module for image restoration, which captures long-range feature correspondences from a multi-scale feature pyramid. Inspired by the fact that corruptions, such as noise or compression artifacts, drop drastically at coarser image scales, our attention module is designed to borrow clean signals from "clean" correspondences at the coarser levels. The proposed pyramid attention module is a generic building block that can be flexibly integrated into various neural architectures. Its effectiveness is validated through extensive experiments on multiple image restoration tasks: image denoising, demosaicing, compression artifact reduction, and super resolution. Without any bells and whistles, our PANet (pyramid attention module with simple network backbones) produces state-of-the-art results with superior accuracy and visual quality.
Image restoration algorithms aim to recover a high-quality image from its contaminated counterpart; this is viewed as an ill-posed problem due to the irreversible degradation process. Applications depend on the type of corruption, for example, image denoising [42, 47, 26], demosaicing [43, 47], compression artifact reduction [11, 7, 42], super resolution [21, 22, 36] and many others [23, 18, 6]. To restore the missing information in a contaminated image, a variety of approaches that leverage image priors have been proposed [3, 50, 33, 51].
Among these priors, self-similarity within an image is widely explored and has proven important. For example, non-local mean filtering uses the self-similarity prior to reduce corruption by averaging similar patches within the image. This notion of non-local pattern repetition was later extended across multiple scales and demonstrated to be a strong property of natural images [49, 16]. Several self-similarity based approaches [16, 14, 35] were first proposed for image super resolution, where image details are restored by borrowing high-frequency details from self-recurrences at larger scales. The idea was then explored in other restoration tasks. For example, in image denoising, its power is further strengthened by the observation that noise drops drastically at coarser scales. This motivates many advanced approaches [50, 31] to restore clean signals by finding "noise-free" recurrences in an image-space pyramid, yielding high-quality reconstructions. The idea of utilizing a multi-scale non-local prior has achieved great success in various restoration tasks [2, 50, 31, 27].
Recently, deep neural networks trained for image restoration have made unprecedented progress. Following the importance of the self-similarity prior, the most recent approaches [47, 26] adapt non-local operations into their networks, following the design of non-local neural networks. In a non-local block, a response is calculated as a weighted sum over all pixel-wise features on the feature map, so it can gather long-range information. Such a module was initially designed for high-level recognition tasks and has proven effective in low-level vision problems as well [47, 26].
However, these approaches, which adapt the naive self-attention module to low-level tasks, have certain limitations. First, to the best of our knowledge, the multi-scale non-local prior has never been explored in this setting, even though the literature demonstrates that cross-scale self-similarity can bring impressive benefits for image restoration [50, 2, 31, 16]. Unlike high-level semantic features for recognition, which vary little across scales, low-level features represent richer details, patterns, and textures at different scales. Nevertheless, existing non-local self-attention fails to capture useful correspondences that occur across scales. Second, the pixel-wise matching used in self-attention modules is usually noisy for low-level vision tasks, which reduces performance. Intuitively, enlarging the search space raises the possibility of finding better matches, but this does not hold for existing self-attention modules. Unlike high-level feature maps, on which numerous dimension reduction operations are applied, image restoration networks often maintain the input spatial size. A feature is therefore highly relevant only to a localized region, making pixel-wise matches easily affected by noisy signals. This is in line with conventional non-local filtering, where pixel-wise matching performs much worse than block matching.
In this paper, we present a novel non-local pyramid attention as a simple and generic building block for exhaustively capturing long-range dependencies, as shown in Fig. 1. The proposed attention takes full advantage of traditional non-local operations but is designed to better accord with the nature of image restoration. Specifically, the original search space is largely extended from a single feature map to a multi-scale feature pyramid: the operation exhaustively evaluates correlations among features across multiple specified scales by searching over the entire pyramid. This brings several advantages: (1) It generalizes the existing non-local operation, whose original search space is inherently covered by the lowest pyramid level. (2) Long-range dependencies between relevant features of different sizes are explicitly modeled; since the operation is fully differentiable, it can be jointly optimized with the network through back-propagation. (3) Similar to traditional approaches [50, 2, 31], one may expect noisy signals in features to be drastically reduced by rescaling to coarser pyramid levels via operations like bicubic interpolation. This allows the network to find "clean signals" from multi-scale correspondences. In addition, we enhance the robustness of the correlation measurement by involving neighboring features in the computation, inspired by the traditional block matching strategy. Region-to-region matching imposes additional similarity constraints on the neighborhood, so the module can effectively single out highly relevant correspondences while suppressing noisy ones.
We demonstrate the power of non-local pyramid attention on various image restoration tasks: image denoising, image demosaicing, compression artifact reduction, and image super resolution. In all tasks, a single pyramid attention, our basic unit, models long-range dependencies without scale restriction in a feed-forward manner. With one attention block inserted into a very simple backbone network, the model achieves significantly better results than the latest state-of-the-art approaches with well-engineered architectures and multiple non-local attention units. In addition, we conduct extensive ablation studies to analyze our design choices. All this evidence demonstrates that our module is a better alternative to the current non-local operation and can serve as a fundamental unit in neural networks for generic image restoration.
Self-similarity Prior for Image Restoration. The self-similarity property, that small patterns tend to recur within an image, endows natural images with strong self-predictive ability [2, 16, 49], which forms the basis for many classical image restoration methods [49, 50, 2, 31, 20]. The initial work, non-local mean filtering, globally averages similar patches for image denoising. Later on, Dabov et al. introduced BM3D, where repetitive patterns are grouped into 3D arrays to be jointly processed by collaborative filters. In LSSC, the self-similarity property is combined with sparse dictionary learning for both denoising and demosaicing. This "fractal-like" characteristic was further shown to extend across different scales and to be a very strong property of natural images [16, 49]. To exploit cross-scale redundancy, self-similarity based approaches were proposed for image super resolution [16, 14, 20], where high-frequency information is retrieved purely from internal multi-scale recurrences. Observing that corruptions drop drastically at coarser scales, Zontak demonstrated that clean versions of noisy patches (99%) exist at coarser levels of the original image. This idea was developed into their denoising algorithm, which achieved promising results. Cross-scale self-similarity is also of central importance for many image deblurring [31, 1] and image dehazing approaches.
Non-local Attention. Non-local attention in deep CNNs was initially proposed by Wang et al. for video classification. In their networks, non-local units are placed on high-level, sub-sampled feature maps to compute long-range semantic correlations. By assigning weights to features at all locations, the network can focus on more informative areas. Adapting the non-local operation has also shown considerable improvements in other high-level tasks, such as object detection, semantic segmentation, and person re-identification. For image restoration, recent approaches such as NLRN, RNAN, and SAN incorporate non-local operations in their networks. However, without careful modification, their performance is reduced by the many ill matches involved in the pixel-wise feature matching inside attention units.
Deep CNNs for Image Restoration. Adopting deep CNNs for image restoration has brought evident improvements thanks to their representational power. In early work, Vincent et al. proposed stacked auto-encoders for image denoising. Later, ARCNN was introduced by Dong et al. for compression artifact reduction. Zhang et al. proposed DnCNN for image denoising, which uses techniques like residual learning and batch normalization to boost performance. In IRCNN, a learned set of CNNs is used as a denoising prior for other image restoration tasks. For image super resolution, extensive efforts have been devoted to designing advanced architectures and learning methods, such as progressive super resolution, residual and dense connections, back-projection, scale-invariant convolution, and channel attention. Recently, most state-of-the-art approaches [26, 47, 10] incorporate non-local attention into their networks to further boost representation ability. Despite this extensive architectural engineering, existing methods relying on convolution and non-local operations can only exploit information at the same scale.
Both the convolution operation and non-local attention are restricted to same-scale information. In this section, we introduce the novel pyramid attention, a generalization of non-local operations that models non-local dependencies across multiple scales.
Non-local attention calculates a response by averaging features over an entire image, as shown in Fig. 2 (a). Formally, given an input feature map $x$, this operation is defined as:

$$y_i = \frac{1}{\delta(x)} \sum_{\forall j} \phi(x_i, x_j)\,\theta(x_j),$$

where $i$, $j$ are indices on the input $x$ and output $y$ respectively. The function $\phi$ computes the pair-wise affinity between two input features. $\theta$ is a feature transformation function that generates a new representation of $x_j$. The output response $y_i$ obtains information from all features by explicitly summing over all positions and is normalized by a scalar function $\delta(x)$. While the above operation manages to capture long-range correlation, information is extracted at a single scale. As a result, it fails to exploit relationships to many more informative areas of distinctive spatial sizes.
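For concreteness, the following is a minimal PyTorch sketch of this single-scale non-local operation with the embedded Gaussian affinity; the module and layer names are illustrative choices of ours, not the paper's released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonLocalAttention(nn.Module):
    """Classic single-scale non-local attention (embedded Gaussian)."""
    def __init__(self, channels: int, reduction: int = 2):
        super().__init__()
        inter = channels // reduction
        self.f = nn.Conv2d(channels, inter, 1)         # query embedding f
        self.g = nn.Conv2d(channels, inter, 1)         # key embedding g
        self.theta = nn.Conv2d(channels, channels, 1)  # feature transform θ

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.f(x).flatten(2).transpose(1, 2)       # (B, HW, C')
        k = self.g(x).flatten(2)                       # (B, C', HW)
        v = self.theta(x).flatten(2).transpose(1, 2)   # (B, HW, C)
        attn = F.softmax(q @ k, dim=-1)                # φ(x_i, x_j) / δ(x)
        return (attn @ v).transpose(1, 2).reshape(b, c, h, w)
```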
To break this scale constraint, we propose pyramid attention (Fig. 2 (c)), which captures correlations across scales. In pyramid attention, affinities are computed between a target feature and regions. Therefore, a response feature is a weighted sum over multi-scale correspondences within the input map. Formally, given a series of scale factors $S = \{1, s_1, s_2, \dots, s_n\}$, pyramid attention can be expressed as

$$y_i = \frac{1}{\delta(x)} \sum_{s \in S}\,\sum_{\forall j} \phi\big(x_i, N^{s}_{j}\big)\,\theta\big(N^{s}_{j}\big).$$

Here $N^{s}_{j}$ represents a neighborhood of scale $s$ centred at index $j$ on the input $x$.
In other words, pyramid attention behaves in a non-local multi-scale way by explicitly processing larger regions, with sizes specified by the scale pyramid, at all positions. Note that when only a single scale factor is specified, the proposed attention degrades to the current non-local operation. Hence, our approach is a more generic operation that allows the network to fully enjoy the predictive power of natural images.
Finding a generic formulation that models cross-scale relationships is a non-trivial problem and requires careful engineering. In the following sections, we first address the non-local operation between two scales and then extend it to pyramid scales.
Fig. 2: (a) Non-local attention; (b) Scale agnostic attention; (c) Pyramid attention.
Given an extra scale factor $s$, how to evaluate the correlation between $x_i$ and the region $N^{s}_{j}$, and how to aggregate information from $N^{s}_{j}$ to form $y_i$, are the two key steps. Here, the major difficulty comes from the misalignment of their spatial dimensions. Common similarity measurements, such as the dot product and embedded Gaussian, only accept features with identical dimensions and are thus infeasible in this case.
To mitigate the above problem, we propose to squeeze the spatial information of $N^{s}_{j}$ into a single region descriptor. This is done by down-scaling the region into a single pixel feature $z_j$. As we need to search over the entire feature map, we can directly down-scale the original input $x$ (of size $H \times W$) to obtain a descriptor map $z$ (of size $\frac{H}{s} \times \frac{W}{s}$). The correlation between $x_i$ and $N^{s}_{j}$ is then computed between $x_i$ and the region descriptor $z_j$. Formally, scale agnostic attention (Fig. 2 (b)) is formulated as

$$y_i = \frac{1}{\delta(x)} \sum_{\forall j} \phi(x_i, z_j)\,\theta(z_j).$$
This operation brings additional advantages. As discussed in Section 1, down-scaling regions into coarser descriptors reduces the noise level. On the other hand, since a cross-scale recurrence represents similar content, the structural information is still well preserved after down-scaling. Combining these two facts, region descriptors can serve as a "cleaner version" of the target feature and a better alternative to noisy patch matches at the original scale.
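A sketch of this two-scale step, reusing the embeddings from the non-local sketch above and assuming bicubic down-scaling (the function name is ours):

```python
def scale_agnostic_attention(x, f, g, theta, s: float):
    """Attend from each feature x_i of x to the descriptor map z, obtained by
    down-scaling x by factor s; f, g, theta are 1x1-conv embeddings."""
    b, c, h, w = x.shape
    z = F.interpolate(x, scale_factor=1.0 / s, mode='bicubic',
                      align_corners=False)              # descriptor map (H/s, W/s)
    q = f(x).flatten(2).transpose(1, 2)                 # (B, HW, C')
    k = g(z).flatten(2)                                 # (B, C', HW/s^2)
    v = theta(z).flatten(2).transpose(1, 2)             # (B, HW/s^2, C)
    attn = F.softmax(q @ k, dim=-1)                     # affinity of x_i to z_j
    return (attn @ v).transpose(1, 2).reshape(b, c, h, w)
```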
To make full use of this self-predictive power, scale agnostic attention can be extended to pyramid attention, which computes correlations across multiple scales. In such a unit, pixel-region correspondences are captured over an entire feature pyramid. Specifically, given a series of scales $S = \{1, s_1, s_2, \dots, s_n\}$, it forms a feature pyramid $F = \{z^{1}, z^{s_1}, \dots, z^{s_n}\}$, where $z^{s}$ (of size $\frac{H}{s} \times \frac{W}{s}$) is a region descriptor map of the input $x$, obtained by the down-scaling operation. In this case, the correlation between any pyramid level and the original input is a scale agnostic attention. Therefore, pyramid attention is defined as:

$$y_i = \frac{1}{\delta(x)} \sum_{z \in F}\,\sum_{\forall j} \phi(x_i, z_j)\,\theta(z_j).$$
The cross-scale modeling ability comes from the fact that region descriptors at different levels summarize information over regions of various sizes. When they are copied back to the original position $i$, non-local multi-scale information is fused into a new response, which intuitively contains richer and more faithful information than matches from a single scale.
Choices of $\phi$, $\theta$ and $\delta$. There are many well-explored choices for the pair-wise function $\phi$ [39, 26], such as Gaussian, embedded Gaussian, dot product, and feature concatenation. In this paper, we use the embedded Gaussian, following previous best practices: $\phi(x_i, z_j) = e^{f(x_i)^{T} g(z_j)}$, where $f(x_i) = W_f\, x_i$ and $g(z_j) = W_g\, z_j$. For the feature transformation function $\theta$, we use a simple linear embedding: $\theta(z_j) = W_\theta\, z_j$. Finally, we set $\delta(x) = \sum_{\forall j} \phi(x_i, z_j)$. With the above instantiations, the term $\frac{1}{\delta(x)}\,\phi(x_i, z_j)$ is equivalent to a softmax over all possible positions in the pyramid.
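Putting the pieces together, a hedged sketch of the full module might look as follows; the scale factors are illustrative placeholders (the paper's exact values are not reproduced here), and the level $s = 1$ recovers the ordinary non-local operation.

```python
class PyramidAttention(nn.Module):
    """Pyramid attention: one softmax over descriptor positions from all
    pyramid levels jointly."""
    def __init__(self, channels: int, scales=(1, 2, 4), reduction: int = 2):
        super().__init__()
        inter = channels // reduction
        self.scales = scales
        self.f = nn.Conv2d(channels, inter, 1)          # f(x_i) = W_f x_i
        self.g = nn.Conv2d(channels, inter, 1)          # g(z_j) = W_g z_j
        self.theta = nn.Conv2d(channels, channels, 1)   # θ(z_j) = W_θ z_j

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.f(x).flatten(2).transpose(1, 2)        # (B, HW, C')
        keys, values = [], []
        for s in self.scales:                           # build the pyramid F
            z = x if s == 1 else F.interpolate(
                x, scale_factor=1.0 / s, mode='bicubic', align_corners=False)
            keys.append(self.g(z).flatten(2))
            values.append(self.theta(z).flatten(2).transpose(1, 2))
        k = torch.cat(keys, dim=2)                      # (B, C', Σ_s N_s)
        v = torch.cat(values, dim=1)                    # (B, Σ_s N_s, C)
        attn = F.softmax(q @ k, dim=-1)  # softmax over every pyramid position
        return (attn @ v).transpose(1, 2).reshape(b, c, h, w)
```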
Patch-based region-to-region attention. As discussed in Section 1, the information contained in features (for image restoration tasks) is very localized. Consequently, the matching process is usually affected by noisy signals. A previous approach relieves this problem by restricting the search space to a local region. However, this also prevents it from finding better correspondences far away from the current position. Instead, we keep the global search space and improve robustness by matching small patches rather than single features:

$$y_i = \frac{1}{\delta(x)} \sum_{z \in F}\,\sum_{\forall j} \phi\big(N(x_i), N(z_j)\big)\,\theta(z_j),$$

where the neighborhood $N(\cdot)$ is specified by a small patch centred at the given position. This adds a stronger constraint on the matched content: two features are highly correlated if and only if their neighborhoods are highly similar as well. Block-wise matching allows the network to pay more attention to relevant areas while suppressing unrelated ones.
Implementation. The proposed pyramid attention is implemented using basic convolution and deconvolution operations, as shown in Fig. 3. In practice, the softmax matching scores can be computed as a convolution over the input using patches extracted from the feature pyramid as kernels. To obtain the final response, we extract patches from the transformed feature map (by $\theta$) and conduct a deconvolution over the matching scores. Note that the proposed operation is fully convolutional, differentiable, and accepts any input resolution, so it can be flexibly embedded into many standard architectures.
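A sketch of this conv/deconv realization for a single pyramid level and batch size 1; the kernel normalization and function name are our own simplifications of the mechanism described above, not the authors' exact implementation.

```python
def patch_attention_conv(x, z, z_theta, patch: int = 3):
    """Block-wise matching via convolution / transposed convolution (B = 1).

    Patches from the descriptor map z act as conv kernels over x, giving one
    matching-score map per candidate position j; patches from the θ-transformed
    map z_theta paste features back via a transposed convolution."""
    pad = patch // 2
    c = z.size(1)
    k = F.unfold(z, patch, padding=pad)                 # (1, C*p*p, N)
    k = k.transpose(1, 2).reshape(-1, c, patch, patch)  # N kernels of (C, p, p)
    k = k / k.flatten(1).norm(dim=1).clamp_min(1e-6).view(-1, 1, 1, 1)
    score = F.softmax(F.conv2d(x, k, padding=pad), dim=1)   # (1, N, H, W)
    w_ = F.unfold(z_theta, patch, padding=pad)              # deconv kernels
    w_ = w_.transpose(1, 2).reshape(-1, z_theta.size(1), patch, patch)
    return F.conv_transpose2d(score, w_, padding=pad) / patch ** 2
```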
To show the effectiveness of our pyramid attention, we choose a simple ResNet as our backbone without any architectural engineering. The proposed image restoration network is illustrated in Fig. 3. We remove batch normalization in each residual block, following common practice. Similar to many restoration networks, we add a global pathway from the first feature map to the last, which encourages the network to bypass low-frequency information. We insert a single pyramid attention in the middle of the network.
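A minimal skeleton of this backbone, with block and channel counts taken from the experimental settings below (the class and argument names are ours):

```python
class ResBlock(nn.Module):
    """Residual block without batch normalization."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1))

    def forward(self, x):
        return x + self.body(x)


class PANet(nn.Module):
    """Plain ResNet backbone with one pyramid attention in the middle and a
    global skip pathway."""
    def __init__(self, in_ch: int = 3, channels: int = 64, n_blocks: int = 80):
        super().__init__()
        half = n_blocks // 2
        self.head = nn.Conv2d(in_ch, channels, 3, padding=1)
        self.body = nn.Sequential(
            *[ResBlock(channels) for _ in range(half)],
            PyramidAttention(channels),                 # single attention block
            *[ResBlock(channels) for _ in range(n_blocks - half)])
        self.tail = nn.Conv2d(channels, in_ch, 3, padding=1)

    def forward(self, x):
        feat = self.head(x)
        feat = feat + self.body(feat)   # global pathway bypasses low frequencies
        return self.tail(feat)
```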
Given a set of $N$ paired training images $\{I_{LQ}^{k}, I_{HQ}^{k}\}_{k=1}^{N}$, we optimize the $L_1$ reconstruction loss between the restored image and the ground truth:

$$L(\Theta) = \frac{1}{N} \sum_{k=1}^{N} \big\lVert F\big(I_{LQ}^{k}; \Theta\big) - I_{HQ}^{k} \big\rVert_1,$$

where $F$ represents the entire PANet and $\Theta$ is the set of learnable parameters.
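A minimal training step under these definitions, assuming the common $L_1$ loss and an illustrative learning rate:

```python
model = PANet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # lr is illustrative

def train_step(lq: torch.Tensor, hq: torch.Tensor) -> float:
    """One optimization step of the L1 reconstruction loss."""
    optimizer.zero_grad()
    loss = F.l1_loss(model(lq), hq)
    loss.backward()
    optimizer.step()
    return loss.item()
```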
The proposed pyramid attention and PANet are evaluated on major image restoration tasks: image denoising, demosaicing, and compression artifact reduction. For fair comparison, we follow the settings specified by RNAN for these tasks. We use DIV2K as our training set, which contains 800 high-quality images. We report results on standard benchmarks using PSNR and/or SSIM.
For pyramid attention, we use five scale factors, constructing a 5-level feature pyramid within the attention block. To build the pyramid, we use simple bicubic interpolation to rescale feature maps. When computing correlations, we use small patches centred at the target features. The proposed PANet contains 80 residual blocks, with one pyramid attention module inserted after the 40th block. All features have 64 channels, except for those used in the embedded Gaussian, where the channel number is reduced to 32.
During training, each mini-batch consists of 16 image patches. We augment training images with vertical/horizontal flipping and random rotations of 90°, 180°, and 270°. The model is optimized with the Adam optimizer. The learning rate is initialized as in [32], and the models are trained on Nvidia TITAN X GPUs.
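One common way to implement this flip/rotation augmentation on tensor pairs (a sketch, not the paper's data pipeline):

```python
import random

def augment(lq: torch.Tensor, hq: torch.Tensor):
    """Apply the same random flip/rotation to a low-/high-quality pair."""
    if random.random() < 0.5:                       # horizontal flip
        lq, hq = lq.flip(-1), hq.flip(-1)
    if random.random() < 0.5:                       # vertical flip
        lq, hq = lq.flip(-2), hq.flip(-2)
    k = random.randint(0, 3)                        # rotate by k * 90 degrees
    return lq.rot90(k, dims=(-2, -1)), hq.rot90(k, dims=(-2, -1))
```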
Following RNAN, PANet is evaluated on standard benchmarks for image denoising: Kodak24 (http://r0k.us/graphics/kodak/), BSD68, and Urban100. We create noisy images by adding AWGN at several noise levels. We compare our approach with 8 state-of-the-art methods: CBM3D, TNRD, RED, DnCNN, MemNet, IRCNN, FFDNet, and RNAN.
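Generating such noisy inputs is straightforward; a minimal sketch for images normalized to [0, 1]:

```python
def add_awgn(img: torch.Tensor, sigma: float) -> torch.Tensor:
    """Add white Gaussian noise; sigma is given on the 0-255 intensity scale."""
    noise = torch.randn_like(img) * (sigma / 255.0)
    return (img + noise).clamp(0.0, 1.0)
```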
As shown in Table 1, PANet achieves the best results on almost all datasets and noise levels. In particular, our approach yields better results than the prior state-of-the-art RNAN, which has a well-engineered network and multiple non-local attention blocks. These results show that, with only one additional pyramid attention, even a simple ResNet can significantly boost restoration quality. One may notice that PANet performs particularly well on the Urban100 dataset, with more than 0.3 dB improvement over RNAN at all noise levels. This is because pyramid attention allows the network to explicitly capture the abundant cross-scale self-exemplars in urban scenes, whereas traditional non-local attention fails to explore those multi-scale relationships.
We further present qualitative evaluations on BSD68 and Urban100 in Fig. 4. Relying on a single learned pyramid attention, PANet produces the most faithful restoration results among all compared methods.
For image demosaicing, we conduct evaluations on Kodak24, McMaster, BSD68, and Urban100, following the settings in RNAN. We compare our approach with the recent state-of-the-art methods IRCNN and RNAN. As shown in Table 2, mosaic corruption significantly reduces image quality in terms of PSNR and SSIM. RNAN and IRCNN remove these corruptions to some degree and produce relatively high-quality restorations. Our approach yields the best reconstructions, demonstrating the advantage of exploiting multi-scale correlations.
The visual results are shown in Fig. 5. While IRCNN and RNAN effectively reduce mosaic corruption to some degree, their results still contain evident artifacts. Relying on pyramid attention, the proposed PANet removes most of the blocking artifacts and restores more accurate colors than IRCNN and RNAN.
For image compression artifact reduction (CAR), we compare our method with 5 recent approaches: SA-DCT, ARCNN, TNRD, DnCNN, and RNAN. We present results on LIVE1 and Classic5, following the same settings as RNAN. To obtain the low-quality compressed images, we follow the standard JPEG compression pipeline and use the Matlab JPEG encoder at several quality factors. For fair comparison, results are evaluated only on the Y channel in YCbCr space.
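For reference, the compressed inputs can be reproduced with any JPEG encoder; a sketch using Pillow as a stand-in for the Matlab encoder used in the paper (the two quality scales are similar but not identical):

```python
import io
from PIL import Image

def jpeg_compress(img: Image.Image, quality: int) -> Image.Image:
    """Round-trip an image through JPEG at a given quality factor."""
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")
```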
The quantitative evaluation is reported in Table 3. By incorporating pyramid attention, PANet obtains the best results on both LIVE1 and Classic5 at all quality levels. We further present visual comparisons at the most challenging quality level in Fig. 6. The proposed approach successfully reduces compression artifacts and recovers the most image details, mainly because PANet captures non-local relationships in a multi-scale way, helping to reconstruct more faithful details.
We report our model size and compare it with other advanced image denoising approaches in Table 4. PANet achieves the best performance with a lighter and much simpler architecture than the prior state-of-the-art RNAN. This indicates the great advantage brought by our pyramid attention module, which in practice can be inserted into related networks.
To further demonstrate the generality of pyramid attention, we present additional image super resolution experiments. Similar to the previous settings, we build our network on top of the widely used plain ResNet-based EDSR, where a single pyramid attention block is inserted after the 16th residual block. We compare it with state-of-the-art approaches: LapSRN, MemNet, SRMDNF, DBPN, RDN, RCAN, NLRN, SRFBN, OISR, and SAN.
We report experimental results in Table 5. Without any architectural engineering, our simple PA-EDSR achieves the best performance on almost all benchmarks and scales. The significant improvement over EDSR demonstrates the effectiveness of the proposed attention operation. One may notice that the improvements are not limited to Urban100, where images contain apparent structural recurrences; we also observe considerable performance gains on natural image datasets: Set5, Set14, and B100. This accords with the previous observation that cross-scale self-recurrence is a common property of natural images. The proposed operation is a generic block and can therefore also be integrated into other SR networks to further boost performance.
Visual results are shown in Fig. 7. Our PA-EDSR reconstructs the most accurate image details, leading to more visually pleasing results.
To fully demonstrate that our pyramid attention captures multi-scale correlations, we visualize its attention maps in Fig. 8. For illustration purposes, the selected images contain abundant self-exemplars at different locations and scales.
From Fig. 8, we find that the attention maps follow distinct distributions over scales, demonstrating that our attention focuses on informative regions at multiple scales. Interestingly, as the pyramid level increases, the most engaged patches move downwards. This is in line with the fact that larger patterns, such as windows, appear toward the bottom of the selected images. By capturing multi-scale correlations, the network manages to utilize these informative patches to improve restoration.
Table 6: baseline / non-local attention / pyramid attention.
Pyramid Attention Module. To verify the effectiveness of pyramid attention, we conduct control experiments on the image denoising task. The baseline model is constructed by removing the attention block, resulting in a simple ResNet; we fix the number of residual blocks in this experiment. In Table 6, the baseline achieves 30.86 dB on Urban100. To compare with classic non-local operations, we further construct a non-local baseline by replacing the pyramid attention with non-local attention. From the result in column 2, we see that the single-scale non-local operation brings an improvement. However, the best performance is achieved by the proposed pyramid attention, which brings a 0.43 dB gain over the baseline and 0.15 dB over the classic non-local model. This indicates that the proposed module can serve as a better alternative for modeling long-range dependencies than the current non-local operation: it exploits informative correspondences existing at multiple image scales, which is of central importance for reconstructing more faithful images.
Matching: Pixel-wise vs. Block-wise. While classic non-local attention computes pixel-wise feature correlations, we find that block-wise matching yields much better restorations in practice. To demonstrate this, we compare the conventional non-local operation with its patch-based alternative using small matching patches. As shown in Table 7, with block matching, the performance improves from 31.14 dB to 31.21 dB. This is because block matching imposes an extra similarity constraint on nearby pixels and can thus better distinguish highly relevant correspondences from noisy ones. These results demonstrate that small patches are indeed more robust descriptors for similarity measurement.
Feature Pyramid Levels. As discussed above, the key difference between the classic non-local operation and pyramid attention is that our module allows the network to utilize correspondences at multiple scales. Here we investigate the influence of the number of pyramid levels by gradually adding levels to the feature pyramid, up to the final 5-level pyramid. As shown in Table 8, as more levels are added, we observe consistent performance gains; the best performance is obtained when all levels are included (column 6). This is mainly because, as the search space is progressively expanded to more scales, the attention unit has a higher chance of finding more informative correspondences beyond the original image scale. These results indicate that multi-scale relationships are essential for improving restoration.
Position in Neural Networks. Where should pyramid attention be added to the network in order to fully unleash its potential? Table 9 compares pyramid attention inserted at different stages of a ResNet. We consider 3 typical positions: after the 1st residual block (pre-processing), after the 8th residual block (the middle of the network), and after the last residual block (post-processing). From the first 4 columns, we find that inserting our module at any stage brings evident improvements, with the largest gain achieved in the middle. Moreover, when multiple modules are combined, the restoration quality is boosted further; the best result is achieved by including modules at all three positions.
In this paper, we proposed a simple and generic pyramid attention for image restoration. The module generalizes classic self-attention to capture non-local relationships at multiple image scales. It is fully differentiable and can be inserted into any architecture. We demonstrated that modeling multi-scale correspondences brings significant improvements for the general image restoration tasks of image denoising, demosaicing, compression artifact reduction, and super resolution. On all tasks, a simple backbone with one pyramid attention achieves superior restoration accuracy over prior state-of-the-art approaches. We believe pyramid attention should be used as a common building block in future neural networks.