What Hinders Perceptual Quality of PSNR-oriented Methods?

01/04/2022
by   Tianshuo Xu, et al.
Xiamen University

In this paper, we discover two factors that inhibit POMs from achieving high perceptual quality: 1) the center-oriented optimization (COO) problem and 2) the model's low-frequency tendency. First, POMs tend to generate an SR image whose position in the feature space is closest to the distribution center of all potential high-resolution (HR) images, so such POMs lose high-frequency details. Second, about 90% of an image's area consists of low-frequency signals; in contrast, human perception relies on an image's high-frequency details. However, POMs apply the same calculation to areas of different frequencies, so POMs tend to restore the low-frequency regions. Based on these two factors, we propose a Detail Enhanced Contrastive Loss (DECLoss), combining a high-frequency enhancement module and a spatial contrastive learning module, to reduce the influence of the COO problem and the low-frequency tendency. Experimental results show the efficiency and effectiveness of applying DECLoss to several standard SR models. For example, on EDSR, our proposed method achieves a 3.60× faster learning speed than a GAN-based method with only a subtle degradation in visual quality. In addition, our final results show that an SR network equipped with our DECLoss generates more realistic and visually pleasing textures than state-of-the-art methods. Code and supplementary material will be made publicly available in the future.


1 Introduction

Figure 1: Schematic diagram of the COO problem. The gray dotted arrow denotes the correct mapping, and the green arrow denotes the actual mapping under the COO problem. After applying our DECLoss, the LR is divided into specific groups for more accurate mapping, so as to reduce the problem's influence.

Image super-resolution (SR) aims to construct a high-resolution (HR) image from its low-resolution (LR) counterpart. SR is an important class of image processing and has been widely applied to real-world tasks [sr_medical, sr_medical_3d, sr_surveillance, sr_face]. With the rapid development of deep learning, deep neural networks have shown promising performance in SR [srcnn, vdsr, srresnet, edsr, esrgan]. Many of them pre-define SR as a pixel-level mapping from LR to SR and adopt an MAE/MSE (mean absolute error / mean squared error) loss to pursue a high Peak Signal-to-Noise Ratio (PSNR); such models are termed PSNR-oriented models (POMs).

Existing methods [edsr, esrgan] prefer to crop input LR images into small patches during training. However, these LR patches lose contextual information and high-frequency details due to the down-sampling operation. This increases the probability that two regions are similar in their LR form even though they were not identical before down-sampling. If the similarity of two such LR regions exceeds the model's discriminative ability, the model must be optimized to map both regions to the distribution center of their HR regions in the feature space (Fig. 1(a)). Thus, the generated HR images contain very similar low-frequency signals (this is why their LR regions are similar), but their image details differ; as a result, the generated images are over-smoothed. We name this phenomenon the "center-oriented optimization (COO) problem"; it hinders POMs from producing images with clear details.
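To make this intuition concrete, here is a minimal, self-contained sketch (ours, not the authors' code) showing that the MSE-optimal single prediction for two equally plausible HR targets is their average, which cancels the high-frequency detail:

```python
# Toy illustration of the COO problem: when one LR patch is consistent with
# two different HR patches, the MSE-optimal single prediction is their
# average, so the high-frequency detail is washed out.
import numpy as np

rng = np.random.default_rng(0)
low_freq = rng.normal(size=16)           # shared low-frequency content
hr_a = low_freq + np.sin(np.arange(16))  # two HR patches that differ only
hr_b = low_freq - np.sin(np.arange(16))  # in their high-frequency detail

# The closed-form MSE minimizer over both plausible targets is their mean.
sr_center = 0.5 * (hr_a + hr_b)

print(np.std(hr_a - low_freq))       # detail energy of a real HR patch
print(np.std(sr_center - low_freq))  # ~0: the "center" prediction is smooth
```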

In addition, Pianykh et al. revealed that the key to human perception of image quality is high-frequency information [human_perceptual]. In contrast, POMs use MAE/MSE loss functions that assign the same calculation to areas of different frequencies, even though low-frequency regions make up the vast majority of an image's area. This drives POMs to favor restoring the low-frequency regions. Two mainstream types of existing approaches aim at improving the perceptual quality of generated images, i.e., building larger models or using GAN-based methods; both indirectly weaken the COO problem and reduce the low-frequency tendency. In particular, larger models raise the upper bound of the discriminative ability, thus achieving more accurate mappings from LR to HR. GAN-based methods generate fake details whose distribution is close to that of real-world HR details. However, both approaches lead to more computation and latency, and GAN-based methods are unstable and time-consuming to train.

In this paper, we propose a Detail Enhanced Contrastive Loss (DECLoss) for SR networks to alleviate the COO problem and eliminate the low-frequency tendency. The main idea of DECLoss is to map an LR region precisely to an HR region, even if that HR region is not the LR's ground truth. DECLoss consists of two steps: 1) high-frequency enhancement and 2) spatial contrastive learning. To match human perception, we first enhance image details by compensating the high-frequency information in the Fourier domain. We then reshape the SR patches and their corresponding HR patches into a sequence of mini-patches. For each SR mini-patch within a training batch, we select all HR mini-patches that are highly similar to the SR's ground truth as positive samples; the remaining HR mini-patches are negative samples. Next, we use contrastive learning to reduce the distance to positive samples and increase the distance to negative samples. Therefore, the SR model can accurately map each different region of the LR to an HR region, so as to enrich the details of the generated image (Fig. 1(b)). To the best of our knowledge, this is the first time contrastive learning has been combined with region similarity in SR. Without any adversarial operations, our DECLoss is stable and straightforward. In summary, our main contributions are three-fold:

1) Revealing Obstructions to High Perceptual Quality. We reveal the factors hindering POMs from generating images of high perceptual quality, namely the center-oriented optimization problem and the low-frequency tendency. In particular, we are the first to define the COO problem and quantify its effects on SR models.

2) Proposing a Detail Enhanced Contrastive Loss (DECLoss). We propose a novel perceptual-driven loss function. Based on image frequency transformation and contrastive learning, DECLoss alleviates the COO problem and eliminates the low-frequency tendency to achieve higher perceptual quality.

3) Extensive Experiments. Extensive experiments demonstrate the efficiency and effectiveness of our method. Without any adversarial operations, our DECLoss-based method on EDSR [edsr] achieves performance equivalent to a GAN-based method while training 3.60× faster; combined with RaGAN [esrgan], our RRDB [esrgan] model outperforms a variety of state-of-the-art methods.

2 Related Work

2.1 Image Super-Resolution

Image super-resolution is an important image restoration task in computer vision [sr_dual, sr_unpaired, sr_meta, sr2021addersr, sr2021learning, sr2021masa]. SRCNN [srcnn] first applied convolutional neural networks to SR tasks, and VDSR [vdsr] then used a very deep network for SR. After He et al. [resnet] proposed ResNet for residual learning, SRResNet [srresnet] introduced ResBlock [resnet] to expand the network depth. EDSR [edsr] further enhanced the efficiency of residual methods to advance SR results. DRCN [drcn], DRRN [drrn], and CARN [carn] also adopted ResBlock [resnet] for recursive learning. RRDB [esrgan] and RDN [rdn] used dense connections to aggregate information from earlier layers. RCAN [rcan] and RFA [rfa] explored attention mechanisms within deep SR models.

To improve perceptual quality and recover the missing details caused by the COO problem, Johnson et al. [perceptual_loss] proposed a perceptual loss, and Zhang et al. [lpips] proposed a learned metric, "LPIPS", to measure perceptual distances between images. SRGAN [srresnet] first introduced a GAN-based method into an SR model, where an adversarial loss was used during training. ESRGAN [esrgan] made significant progress for GAN-based methods, and images generated by ESRGAN look more natural in texture. Beby-GAN [beby_gan] paid more attention to generating fake details, thereby further improving the perceptual quality. In particular, their research on the over-smoothing phenomenon of SR inspired our discovery of the COO problem. Although GAN-based methods are effective in generating fake details, they are expensive to train due to the two-stage gradient backpropagation and the system I/O (here mainly referring to tensor input/output).

2.2 Contrastive Learning

Contrastive learning has demonstrated its effectiveness in unsupervised representation learning [byol, contrast_representation, contrast_understanding]. The goal of contrastive learning is to learn representations in which similar samples stay close to each other while dissimilar samples are far apart [simclr, moco, caron2020unsupervised]. Many super-resolution approaches have also applied contrastive learning to improve their robustness. E.g., DASR [wang2021unsupervised] applied contrastive learning to degradation representations; Wang et al. [wang2021towards] proposed a distillation method with contrastive learning; and Zhang et al. [zhang2021blind] used a bidirectional contrastive loss to identify both high-frequency and low-frequency features.

Figure 2: Visualization of tiny detail regions by t-SNE [tsne]. HR1 and HR2 are HRs similar to the ground truth (GT), L1 is the output trained with the 1-norm loss, and DECLoss is the output trained with our proposed loss. Influenced by the COO problem, the L1 node is mapped to the center of the adjacent HRs, whereas the DECLoss node is closer to an HR node from a perceptual perspective.

3 Center-Oriented Optimization Problem

The center-oriented optimization (COO) problem is the following: when dealing with several hard-to-discriminate LR inputs, POMs tend to map an LR input to the distribution center of all potential HRs, generating over-smoothed outputs, as shown in Fig. 2. We therefore first describe mathematically how the COO problem influences the quality of POMs, and then establish a function to quantify its influence.

3.1 Problem Description

POMs usually pre-define super-resolution as mapping an LR input $I^{LR}$ to an HR image through a function $f_\theta$, so as to make the output $I^{SR}$ as similar as possible to the ground truth $I^{HR}$. Thus, we have:

$I^{SR} = f_\theta(I^{LR}).$  (1)

Then, to minimize the distance between an SR image $I^{SR}$ and its corresponding ground truth $I^{HR}$, the function $f_\theta$ updates its parameters $\theta$ by using:

$\theta^{*} = \arg\min_{\theta}\, \mathcal{L}\big(f_\theta(I^{LR}),\, I^{HR}\big),$  (2)

where $\mathcal{L}$ is an MAE or MSE loss.
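To make the objective in Eqs. 1-2 concrete, here is a minimal PyTorch sketch of a generic PSNR-oriented training step; the tiny network, the ×4 upscaling factor, and the dummy data are placeholders rather than the paper's EDSR/RRDB configuration.

```python
# Minimal sketch of the PSNR-oriented objective in Eqs. 1-2 (placeholder
# network and data; the paper's actual models are EDSR/RRDB).
import torch
import torch.nn as nn

f_theta = nn.Sequential(               # stand-in for an SR network f_theta
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3 * 4 * 4, 3, padding=1), nn.PixelShuffle(4))
optimizer = torch.optim.Adam(f_theta.parameters(), lr=1e-4)
criterion = nn.L1Loss()                # MAE; nn.MSELoss() is the MSE variant

lr_patch = torch.rand(8, 3, 32, 32)    # dummy LR batch
hr_patch = torch.rand(8, 3, 128, 128)  # dummy HR ground truth (x4)

sr_patch = f_theta(lr_patch)           # Eq. 1: I^SR = f_theta(I^LR)
loss = criterion(sr_patch, hr_patch)   # Eq. 2: minimize MAE/MSE w.r.t. theta
loss.backward()
optimizer.step()
```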

However, in the real world, a region from an LR image might be very similar to another LR region that the model cannot discriminate, even though their corresponding HR regions differ in detail. Then, we have:

(3)
(4)

where the summarized distance measures how far the two similar SR patches are from their respective ground truths. Therefore, the model is forced to minimize this distance, so that the SR output must lie close to the center of the corresponding HRs:

(5)
(6)

We then extend Eq. 6 to the entire dataset:

(7)

where the weights are the mapping probabilities from an LR to each HR. For example, if the LRs are not similar to each other, an example set of mapping probabilities is [0.97, 0.02, 0.01]; however, if the LRs are similar to each other, the probabilities might be [0.4, 0.4, 0.2]. Unfortunately, these probabilities are difficult to calculate.
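Since the body of Eq. 7 is not reproduced above, the following is only our hedged reading of the dataset-level extension it describes, written in our own notation:

```latex
% One plausible form of the dataset-level extension (our notation):
% p_{ij} is the mapping probability from LR_i to HR_j, so the optimized
% output drifts toward a probability-weighted center of the candidate HRs.
\[
  I^{SR}_i \;\approx\; \sum_{j} p_{ij}\, I^{HR}_j ,
  \qquad \sum_{j} p_{ij} = 1 .
\]
```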

Note that image down-sampling loses high-frequency information, i.e., the candidate HRs usually look the same in their low-frequency regions while their high-frequency details differ. Therefore, the details of the generated images are blurry, even though these high-frequency details are what matter most for perceptual quality.
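As a quick numerical illustration of this point (ours, using an arbitrary 1-D signal rather than an image), 4× down-sampling followed by naive up-sampling leaves almost no energy above the down-sampled Nyquist frequency:

```python
# Illustration (ours): 4x downsampling discards most high-frequency energy.
import numpy as np

n = 1024
t = np.arange(n)
signal = np.sin(2 * np.pi * 5 * t / n) + 0.5 * np.sin(2 * np.pi * 300 * t / n)

down = signal.reshape(-1, 4).mean(axis=1)   # 4x average-pool "LR"
up = np.repeat(down, 4)                     # naive upsampling back to length n

def high_freq_energy(x, cutoff=n // 8):     # energy above the LR Nyquist bin
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    return spectrum[cutoff:].sum()

print(high_freq_energy(signal))  # large: the 300-cycle component is present
print(high_freq_energy(up))      # much smaller: that detail is gone
```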

Figure 3: Overview of DECLoss. (a) We first apply high-frequency enhancement to eliminate the low-frequency tendency. Within a training batch, we then reshape the enhanced image into a sequence of flattened mini-patches. Eventually, we cluster the mini-patches to polarize the details, which reduces the influence of the COO problem. (b) Details of the high-frequency enhancement process via Eq. 12-Eq. 16. (c) The relevant notations.

3.2 Description Function

Due to the difficulty of calculating the probabilities defined in Eq. 7, we instead analyze how to measure the intensity of the COO problem (ICOO) using feature distances. We first assume that each LR image is well mapped to its corresponding HR image by the function $f_\theta$. The distance between each SR image and its corresponding HR is then approximately zero, and the distances to other HRs are positive; thus we have:

(8)
(9)

where the first quantity denotes the mapping accuracy of an SR image, and the number of candidate HRs is either the length of the dataset or the number of its top-ranked similar HR images. It is possible to relax the condition of Eq. 9 so that an SR image can reach any potential HR, even if that HR is not its ground truth. Since the clarity of image detail is the key to human perception of image quality [human_perceptual], the mapping accuracy of image details is usually not essential. Thus, Eq. 9 is rewritten as:

(10)

where the chosen HR image is the one with the shortest distance from the SR image. Then, we can describe the intensity of the COO problem through the sum of the scores defined in Eq. 10 over multiple generated images:

(11)

where the sum runs over a proper subset of the dataset, and we use the logarithm to avoid too-small values. Eq. 11 is the core function reflecting the intensity of the COO problem (ICOO). Note that ICOO measures the similarity between the SR and HR distributions rather than the similarity of two individual images; such a distribution similarity represents the overall SR image quality. The ICOO measurement procedure is described in Sec. 5.2.
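Because the body of Eq. 11 is not reproduced above, the sketch below shows only one plausible way to compute ICOO as described (the nearest-HR mini-patch distance per SR mini-patch, summed and passed through a logarithm); the distance metric, the normalization, and the helper name `icoo` are our assumptions.

```python
# Plausible sketch of the ICOO measurement described around Eqs. 10-11
# (our assumptions: L1 distance on flattened mini-patches, natural log).
import torch

def icoo(sr_patches: torch.Tensor, hr_patches: torch.Tensor) -> torch.Tensor:
    """sr_patches: (M, D) flattened SR mini-patches,
    hr_patches: (N, D) flattened HR mini-patches from the evaluation set."""
    # Pairwise L1 distances between every SR and every HR mini-patch.
    dists = torch.cdist(sr_patches, hr_patches, p=1)   # (M, N)
    nearest = dists.min(dim=1).values                  # Eq. 10: min over HRs
    return torch.log(nearest.sum() + 1e-12)            # Eq. 11: log of the sum

sr = torch.rand(8, 3 * 8 * 8)     # e.g., 8 SR mini-patches per image
hr = torch.rand(100, 3 * 8 * 8)   # e.g., 100 HR mini-patches per image
print(icoo(sr, hr))
```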

4 Method

Detail Enhanced Contrastive Loss (DECLoss) is a novel loss function for SR that aims to alleviate the COO problem and eliminate the low-frequency tendency, so as to increase the perceptual quality of SR images.

4.1 Overview

DECLoss is a perceptual-driven loss function for image SR that does not require any generative-adversarial process, which is neither efficient nor effective. Fig. 3 shows that DECLoss consists of two steps, high-frequency enhancement and the spatial contrastive loss, targeting the low-frequency tendency and the COO problem, respectively. First, we enhance the details of the SR images and their HR counterparts, paying more attention to the high-frequency regions that better fit human perception [human_perceptual]. Second, we reshape both the SR and HR images into a sequence of mini-patches; with a smaller patch size, DECLoss is more sensitive to feature similarities, which improves the clustering results. Finally, we introduce contrastive learning to cluster mini-patches according to their corresponding HR similarities. Note that reducing the distance between similar SR images removes the limitation that POMs can only map an LR to its ground truth, while enlarging the distance between different groups also makes it harder to map an LR to the distribution center of similar HRs (Eq. 6).

4.2 High-Frequency Enhancement

In order to increase the high-frequency details of POMs, we enhance an image's high frequencies in Fourier space. As illustrated in Fig. 3(b), we first use the Discrete Fourier Transform (DFT) to map an SR image and its ground truth (both denoted as $I$ below) to the Fourier domain:

$\mathcal{F} = \mathrm{DFT}(I),$  (12)

where each component of $\mathcal{F}$ is defined, with $H$ and $W$ the height and width of the image $I$, as:

$\mathcal{F}(u,v) = \sum_{x=0}^{H-1} \sum_{y=0}^{W-1} I(x,y)\, e^{-i 2\pi \left( \frac{u x}{H} + \frac{v y}{W} \right)}.$  (13)

We then multiply the inverse Gaussian kernel $G^{-1}$ element-wise with the Fourier matrix to obtain:

$\hat{\mathcal{F}}(u,v) = G^{-1}(u,v)\, \mathcal{F}(u,v),$  (14)

where each component of $G^{-1}$ is defined as:

$G^{-1}(u,v) = 1 - \lambda\, e^{-\frac{d(u,v)^2}{2\sigma^2}},$  (15)

where $\lambda$ and $\sigma$ are control variables and $d(u,v)$ is the distance of $(u,v)$ from the zero-frequency center. Finally, we use the Inverse Discrete Fourier Transform (IDFT) to map the Fourier matrix back to the image domain:

$\hat{I} = \mathrm{IDFT}(\hat{\mathcal{F}}),$  (16)

where $\hat{I}(x,y)$ is defined as:

$\hat{I}(x,y) = \sqrt{\mathrm{real}(z_{xy})^2 + \mathrm{imag}(z_{xy})^2}, \qquad z_{xy} = \frac{1}{HW} \sum_{u=0}^{H-1} \sum_{v=0}^{W-1} \hat{\mathcal{F}}(u,v)\, e^{\,i 2\pi \left( \frac{u x}{H} + \frac{v y}{W} \right)},$  (17)

where real and imag are the real and imaginary parts of a complex number, respectively. Note that the inverse Gaussian kernel provides a smooth importance weighting for regions of different frequencies, suppressing the low frequencies and enhancing the high frequencies, so as to alleviate the low-frequency tendency.
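The following PyTorch sketch shows one way to implement the enhancement step described above; the concrete kernel form (one minus a centered Gaussian), the value of sigma, and the magnitude step for the complex IDFT output are our assumptions, not the paper's exact settings.

```python
# Sketch of high-frequency enhancement in the Fourier domain (Sec. 4.2).
# The kernel form (1 - Gaussian) and the magnitude step are our assumptions.
import torch

def high_freq_enhance(img: torch.Tensor, sigma: float = 8.0) -> torch.Tensor:
    """img: (B, C, H, W) tensor; returns a detail-enhanced image."""
    B, C, H, W = img.shape
    freq = torch.fft.fft2(img)                         # Eq. 12-13: DFT
    freq = torch.fft.fftshift(freq, dim=(-2, -1))      # move DC to the center

    # Inverse Gaussian kernel: ~0 at the center (low freq.), ~1 far away.
    u = torch.arange(H).view(-1, 1) - H // 2
    v = torch.arange(W).view(1, -1) - W // 2
    g_inv = 1.0 - torch.exp(-(u ** 2 + v ** 2) / (2 * sigma ** 2))
    freq = freq * g_inv                                # Eq. 14-15: suppress low freq.

    freq = torch.fft.ifftshift(freq, dim=(-2, -1))
    out = torch.fft.ifft2(freq)                        # Eq. 16: IDFT
    return torch.sqrt(out.real ** 2 + out.imag ** 2)   # Eq. 17: magnitude

enhanced = high_freq_enhance(torch.rand(2, 3, 64, 64))
```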

4.3 Spatial Contrastive Loss

The spatial contrastive loss is the key component for alleviating the COO problem. Based on the similarities of HRs, we control the mapping range of the COO problem by contrastive clustering. Fig. 3(a) illustrates that we first reshape the high-frequency enhanced (Sec. 4.2) input patches into a sequence of flattened 2D mini-patches $\hat{p} \in \mathbb{R}^{B' \times (P^2 C)}$, where $H_p \times W_p$ is the resolution of the original patch, $C$ is the number of channels, $B$ is the batch size, $P \times P$ is the resolution of each mini-patch, and $B' = B \cdot H_p W_p / P^2$ is the resulting number of mini-patches. A mini-patch is more polarized than a full patch, which is more beneficial for the subsequent clustering operations.
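A small sketch of the patch-to-mini-patch reshaping follows (ours; the mini-patch size P = 4 and the helper name are placeholders):

```python
# Sketch of reshaping a batch of enhanced patches into flattened mini-patches.
import torch

def to_mini_patches(x: torch.Tensor, p: int = 4) -> torch.Tensor:
    """x: (B, C, H, W) -> (B * (H//p) * (W//p), C * p * p) flattened mini-patches."""
    B, C, H, W = x.shape
    x = x.unfold(2, p, p).unfold(3, p, p)         # (B, C, H//p, W//p, p, p)
    x = x.permute(0, 2, 3, 1, 4, 5).contiguous()  # group by spatial location
    return x.view(-1, C * p * p)

mini = to_mini_patches(torch.rand(32, 3, 128, 128), p=4)
print(mini.shape)  # torch.Size([32768, 48])
```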

We first measure the cosine similarities of each SR-to-HR pair and each SR-to-SR pair of mini-patches:

$s^{SH}_{ij} = \frac{\hat{p}^{SR}_i \cdot \hat{p}^{HR}_j}{\big\|\hat{p}^{SR}_i\big\|_2\, \big\|\hat{p}^{HR}_j\big\|_2},$  (18)

$s^{SS}_{ij} = \frac{\hat{p}^{SR}_i \cdot \hat{p}^{SR}_j}{\big\|\hat{p}^{SR}_i\big\|_2\, \big\|\hat{p}^{SR}_j\big\|_2},$  (19)

where $i, j \in \{1, \dots, B'\}$ and $B'$ is the number of HR mini-patches in a batch. To better discriminate positive samples from negative samples, we regard the PSNR similarities of HR-to-HR mini-patches as a mask $M$:

$M_{ij} = 10 \log_{10} \frac{\mathrm{MAX}^2}{\frac{1}{P^2 C} \big\| \hat{p}^{HR}_i - \hat{p}^{HR}_j \big\|_2^2},$  (20)

where $\hat{p}$ denotes a high-frequency enhanced (Sec. 4.2) mini-patch, $\|\cdot\|_2$ is the 2-norm, and MAX is the upper bound of the color space. In particular, if the similarity $M_{ij}$ is greater than a threshold, mini-patch $j$ is regarded as a positive sample for mini-patch $i$; otherwise, it is a negative sample. Then, the scores of the positive and negative samples are represented as:

(21)
(22)

where $\tau^{+}$ and $\tau^{-}$ are temperatures. Note that the resulting score matrix is equivalent to the original contrastive learning matrix [simclr]. DECLoss is then defined as:

(23)
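Since the bodies of Eqs. 21-23 are not reproduced above, the sketch below is only one plausible instantiation of the described procedure: cosine similarities, a PSNR-thresholded positive mask, two temperatures, and an InfoNCE-style grouping. The exact weighting, normalization, and threshold value are our assumptions.

```python
# Rough sketch of a spatial contrastive loss as described in Sec. 4.3.
# The PSNR-threshold mask, two temperatures, and the positive/negative
# grouping follow the text; the exact formula is our assumption.
import torch
import torch.nn.functional as F

def spatial_contrastive_loss(sr_mini, hr_mini, thresh=30.0,
                             tau_pos=2.0, tau_neg=2.0, max_val=1.0):
    """sr_mini, hr_mini: (N, D) flattened, high-frequency-enhanced mini-patches."""
    sr_n = F.normalize(sr_mini, dim=1)
    hr_n = F.normalize(hr_mini, dim=1)
    sim_sr_hr = sr_n @ hr_n.t()                        # Eq. 18: SR-to-HR cosine
    sim_sr_sr = sr_n @ sr_n.t()                        # Eq. 19: SR-to-SR cosine

    # Eq. 20: PSNR between HR mini-patches decides positives vs. negatives.
    mse = torch.cdist(hr_mini, hr_mini, p=2).pow(2) / hr_mini.shape[1]
    psnr = 10.0 * torch.log10(max_val ** 2 / (mse + 1e-12))
    pos_mask = (psnr > thresh).float()                 # 1 for positive pairs

    pos = torch.exp(sim_sr_hr / tau_pos) * pos_mask    # positive scores
    neg = torch.exp(sim_sr_sr / tau_neg) * (1.0 - torch.eye(len(sr_mini)))
    # Pull each SR mini-patch toward its similar HR group, push SRs apart.
    loss = -torch.log(pos.sum(1) / (pos.sum(1) + neg.sum(1) + 1e-12) + 1e-12)
    return loss.mean()

loss = spatial_contrastive_loss(torch.rand(64, 48), torch.rand(64, 48))
```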
Config.  Model  PSNR↑   LPIPS↓  ICOO↓   GPU Hs.↓
1        EDSR   28.56   0.273   28.54    6.33
2        EDSR   27.22   0.179   28.23    6.61
3        EDSR   26.44   0.166   26.60    8.94
4        EDSR   26.34   0.160   26.42   33.56
5        EDSR   26.25   0.157   26.15   34.22
6        EDSR   26.71   0.160   26.73    9.50
7        EDSR   26.36   0.153   25.90   33.89
8        RRDB   28.47   0.118   26.75   43.22
9        RRDB   26.64   0.092   25.58   67.72
10       RRDB   27.01   0.090   25.83   71.08
Table 1: Ablation study results. We compare different configurations of the loss functions (L1, VGG [perceptual_loss], DECLoss, GAN [srresnet], and RaGAN [esrgan]); the configurations are described in Sec. 5.3. We report the restoration metric PSNR, the perceptual metric LPIPS [lpips], and our proposed COO-strength metric ICOO (Eq. 11). Config. denotes the loss configuration, and GPU Hs. is the number of GPU hours required for a complete training run. RaGAN is the GAN objective of ESRGAN [esrgan]. All metrics are computed on the DIV2K [div2k] validation set.
Figure 4: Comparison of different loss configurations. The image is 0880.png from DIV2K [div2k]; the models correspond to the configurations in Table 1. Our proposed DECLoss yields higher perceptual quality and richer, more realistic details.

4.4 Loss Function

Inspired by the perceptual loss [perceptual_loss], we apply the $\mathcal{L}_1$ and perceptual losses in our model. The $\mathcal{L}_1$ loss is the 1-norm distance between a generated image and its ground truth, which ensures the image reconstruction quality. Thus, the $\mathcal{L}_1$ loss is defined as:

$\mathcal{L}_{1} = \big\| I^{SR} - I^{HR} \big\|_1.$  (24)

The perceptual loss measures the distance in feature space. We use a pre-trained VGG-19 [vgg] to generate the feature maps, denoted as $\phi(\cdot)$:

$\mathcal{L}_{per} = \big\| \phi(I^{SR}) - \phi(I^{HR}) \big\|.$  (25)

The total loss function is defined as:

$\mathcal{L} = \lambda_{1}\, \mathcal{L}_{1} + \lambda_{per}\, \mathcal{L}_{per} + \lambda_{DEC}\, \mathcal{L}_{DEC},$  (26)

where $\lambda_{1}$, $\lambda_{per}$, and $\lambda_{DEC}$ are the weights balancing the different loss terms.
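A sketch of how the three terms could be combined during training follows; the loss weights, the chosen VGG-19 layer, and the use of an L1 distance on the features are placeholders, not the paper's exact settings.

```python
# Sketch of the total objective in Eq. 26 (placeholder weights and VGG layer;
# ImageNet mean/std normalization of the VGG inputs is omitted for brevity).
import torch
import torch.nn as nn
from torchvision.models import vgg19

vgg_features = vgg19(weights="DEFAULT").features[:35].eval()  # phi(.)
for p in vgg_features.parameters():
    p.requires_grad_(False)

def total_loss(sr, hr, declos_fn, w1=0.01, w_per=1.0, w_dec=1.0):
    l1 = nn.functional.l1_loss(sr, hr)                # Eq. 24
    l_per = nn.functional.l1_loss(vgg_features(sr),   # Eq. 25
                                  vgg_features(hr))
    l_dec = declos_fn(sr, hr)                         # Eq. 23 (DECLoss)
    return w1 * l1 + w_per * l_per + w_dec * l_dec    # Eq. 26
```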

5 Experimentation

5.1 Settings

We train all models on the DIV2K [div2k] dataset, which consists of 800 2K-resolution training images, 100 validation images, and 100 test images. To obtain the training LR data, we down-sample the HR images using bicubic interpolation. We evaluate all models on popular SR benchmarks: DIV2K (test) [div2k], BSD100 [bsd100], and Urban100 [urban100]. We mainly conduct experiments on the EDSR and RRDB models. Following [edsr], all experiments use the same scaling factor between LR and HR images, and we crop fixed-size patches from the LR and HR images, respectively. To ensure fair comparisons, the batch size of all models is set to 32, and there are 1,000 batches in each training round.

We divide the training process into two stages. First, we pre-train all models with the $\mathcal{L}_1$ loss (Eq. 24) as an initialization, for 200 epochs with 5 warm-up epochs, using a cosine learning-rate schedule. This $\mathcal{L}_1$ pre-training helps the model converge to a reasonable range. Second, the generator is trained using the loss function defined in Eq. 26, with fixed loss weights and contrastive temperatures. Due to the trade-off between the number of mini-patches and computational resources, the mini-patch size is kept small. A cosine decay schedule is again applied to the learning rate. For optimization, we use the Adam algorithm [adam]. We implement our models in PyTorch and train them on 4 A100 GPUs.
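Since the concrete hyper-parameter values are not reproduced in the text above, the skeleton below only illustrates the two-stage schedule; every numeric value in it is an assumed placeholder.

```python
# Two-stage training skeleton (Sec. 5.1); all numeric values here are
# illustrative placeholders, not the paper's exact hyper-parameters.
import torch
from torch.optim.lr_scheduler import CosineAnnealingLR, LinearLR, SequentialLR

model = torch.nn.Conv2d(3, 3, 3, padding=1)        # stand-in for EDSR/RRDB
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4,
                             betas=(0.9, 0.999))   # placeholder betas

# Stage 1: L1 pre-training with warm-up + cosine decay (200 epochs, 5 warm-up).
warmup = LinearLR(optimizer, start_factor=0.1, total_iters=5)
cosine = CosineAnnealingLR(optimizer, T_max=195)
scheduler = SequentialLR(optimizer, [warmup, cosine], milestones=[5])

for epoch in range(200):
    # ... iterate 1,000 batches of size 32, minimize the L1 loss (Eq. 24) ...
    scheduler.step()

# Stage 2: fine-tune with the full objective of Eq. 26 (L1 + VGG + DECLoss),
# again with a cosine learning-rate decay.
```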

Figure 5: Relationship between ICOO and LPIPS scores on Urban100 [urban100].
Figure 6: Comparison of different temperatures. The blue and red curves show the LPIPS variation when varying each of the two temperatures, respectively, while the other is kept fixed.

5.2 Intensity of COO Problem

We proposed a function (Eq. 11), termed ICOO, to measure the intensity of the COO problem. ICOO computes distribution similarities between SRs and HRs rather than measuring the similarity of two individual images. The mini-patch size is fixed: eight SR mini-patches are randomly cropped from each SR image, and 100 HR mini-patches are cropped from each HR image. We run ten rounds and average the results to reduce the influence of randomness. For Fig. 5, we randomly selected and trained 20 models with various architectures and initializations and evaluated them on Urban100 [urban100]. The Spearman rank correlation between ICOO and LPIPS is 0.892, demonstrating a strong positive correlation between the two metrics. We therefore also report our ICOO metric in the later comparisons.
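For reference, this rank correlation can be computed as follows; the score arrays below are hypothetical placeholders, not the paper's measured values.

```python
# Computing the Spearman rank correlation between ICOO and LPIPS scores of
# the trained models (the arrays below are hypothetical example values).
from scipy.stats import spearmanr

icoo_scores = [24.7, 23.9, 26.1, 25.4, 23.3]      # one entry per trained model
lpips_scores = [0.140, 0.116, 0.170, 0.151, 0.113]

rho, p_value = spearmanr(icoo_scores, lpips_scores)
print(f"Spearman correlation: {rho:.3f} (p = {p_value:.3g})")
```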

5.3 Ablation

Comparison of different loss configurations. We conducted an ablation study with different loss configurations to show the effectiveness of our DECLoss. In Table 1 and Fig. 4, Config. 1 is the basic EDSR; Config. 2 is DECLoss with the restoration loss; Config. 3 is the perceptual loss; Configs. 4, 5, and 9 are GAN-based losses; and Configs. 6 and 8 are our DECLoss trained with the perceptual loss [esrgan]. The results of Configs. 2 and 6 outperform those of Configs. 1 and 3, respectively, and Config. 6 performs on par with the GAN-based Configs. 4 and 5. Likewise, Config. 8 achieves scores similar to Config. 9. We also examine the orthogonality of DECLoss by training it together with RaGAN [esrgan] in Configs. 7 and 10: their LPIPS scores are 0.004 and 0.002 lower than those of Configs. 5 and 9, respectively.

Influence of contrastive temperatures. Following the original contrastive learning [simclr], we introduce two temperature values, $\tau^{+}$ and $\tau^{-}$, to balance the intensity of the positive and negative samples. As shown in Fig. 6, we tested each temperature over [0.5, 1.0, 1.5, 2.0, 4.0, 8.0] while keeping the other fixed. Since contrastive learning for SR is relatively simple, larger temperatures can increase the learning efficiency. The results demonstrate that, within a certain range, the values of $\tau^{+}$ and $\tau^{-}$ have no significant influence on the experimental results.

Size    1×1     2×2     3×3     4×4     8×8
PSNR    27.30   26.93   26.78   26.65   26.48
LPIPS   0.1722  0.1648  0.1643  0.1635  0.1596
ICOO    27.43   27.21   27.14   26.82   26.65
Table 2: Comparison of different split sizes of mini-patches. All metrics are computed on the DIV2K [div2k] validation set.

Influence of different mini-patch sizes. The mini-patch size is also an important trade-off between model performance and resource consumption, since the space of the pairwise similarity matrix grows quadratically with the number of mini-patches. As illustrated in Table 2, we evaluated EDSR [edsr] with split sizes of [1, 2, 3, 4, 8] on DIV2K [div2k]. To avoid running out of GPU memory, we set the batch size to 16 for this experiment. Interestingly, within a certain range, the finer the split (i.e., the smaller each mini-patch), the better the perceptual quality. This phenomenon indicates that contrastive learning maps similar HR details more accurately with smaller mini-patches, which leads to better LPIPS performance.

Benchmark  Metric      Bicubic  EDSR [edsr]  SRFlow [srflow]  RankSRGAN [ranksrgan]  DECLoss  ESRGAN [esrgan]  DECLoss+
DIV2K      PSNR↑       24.11    28.97        27.06            26.55                  28.47    26.64            27.01
DIV2K      LPIPS↓      0.210    0.138        0.103            0.099                  0.118    0.092            0.090
DIV2K      ICOO↓       28.94    27.23        26.26            25.97                  26.75    25.58            25.83
Urban100   PSNR↑       21.70    24.51        23.69            22.98                  24.70    22.78            23.27
Urban100   LPIPS↓      0.237    0.140        0.116            0.122                  0.113    0.108            0.105
Urban100   ICOO↓       27.34    24.67        23.90            23.90                  23.30    22.58            22.83
BSD100     PSNR↑       24.65    26.24        24.67            24.13                  25.54    23.97            24.24
BSD100     LPIPS↓      0.198    0.158        0.115            0.114                  0.138    0.108            0.106
BSD100     ICOO↓       28.66    26.00        23.90            24.15                  23.87    23.58            23.59
-          GPU Hours↓  -        6.333        913.2            104.1                  43.22    67.72            71.08
Table 3: Comparison with state-of-the-art methods. DECLoss and DECLoss+ are trained on RRDB [esrgan]; DECLoss+ denotes a joint training strategy that trains RRDB with both DECLoss and RaGAN [esrgan].
Figure 7: Comparison with state-of-the-art methods. We compare POMs with GAN-based methods (RankSRGAN [ranksrgan], ESRGAN [esrgan], and ours). GAN-based methods clearly outperform POMs in perceptual quality, and our proposed DECLoss generates more realistic textures across different categories.

5.4 Comparisons with State-of-the-art Methods

In addition to demonstrating the effectiveness of our DECLoss, we also compare it with state-of-the-art methods. Note that our DECLoss does not need to outperform the GAN-based methods; rather, it requires far less GPU training time to reach similar quality. As shown in Table 3, our DECLoss trained on RRDB outperforms the POMs. Compared with the GAN-based methods RankSRGAN [ranksrgan] and ESRGAN [esrgan], our DECLoss performs on par with RankSRGAN while training 2.4× faster, and although DECLoss is 0.005 higher in LPIPS than ESRGAN on Urban100, it trains 1.57× faster. To show the orthogonality of our DECLoss, we also apply RaGAN [esrgan] together with DECLoss on RRDB, denoted as DECLoss+, which outperforms a variety of state-of-the-art methods; these results are illustrated in Fig. 7. Compared with the detail-enhanced SR model SRFlow [srflow], DECLoss+ achieves 0.011 lower LPIPS on Urban100 while being 12.85× faster.

6 Conclusion

In this work, we discovered two factors hindering the perceptual quality of PSNR-oriented models: the center-oriented optimization problem and the low-frequency tendency. To alleviate these problems, we proposed the Detail Enhanced Contrastive Loss (DECLoss), consisting of a high-frequency enhancement module and a spatial contrastive loss. Without any generative-adversarial process, our DECLoss achieves performance equivalent to GAN-based methods while being considerably faster to train, and the experimental results confirm our method's efficiency. It is also interesting to note (Table 2) that a smaller mini-patch size leads to better perceptual quality. However, due to hardware memory constraints, the mini-patch size cannot be reduced indefinitely; finding this trade-off will be a key direction of our future work.

References